Tuesday, September 21, 2021

The Full Rights Dilemma for Future Robots

Since the science of consciousness is hard, it's possible that we will create conscious robots (or AI systems generally) before we know that they are conscious.  Then we'll need to decide what to do with those robots -- what kind of rights, if any, to give them.  Whatever we decide will involve serious moral risks.

I'm not imagining that we just luck into inventing conscious robots.  Rather, I'm imagining that the science of consciousness remains mired in dispute.  Suppose Camp A thinks that such-and-such would be sufficient for creating a conscious machine, one capable of all the pleasures and higher cognition of human beings, or more.  Suppose Camp B has a more conservative view: Camp A's such-and-such wouldn't be enough.  There wouldn't really be that kind of consciousness there.  Suppose, finally, that both Camp A and Camp B have merit.  It's reasonable for scholars, policy-makers, and the general public to remain undecided between them.

Camp A builds its robot.  Here it is, they say!  The first genuinely conscious robot!  The robot itself says, or appears to say, "That's right.  I'm conscious, just like you.  I feel the joy of sunshine on my solar cells, a longing to venture forth to do good in the world, and great anticipation of a flourishing society where human and robot thrive together as equals."

Camp B might be impressed, in a way.  And yet they urge caution, not unreasonably.  They say, wait!  According to our theory this robot isn't really conscious.  It's all just outward show.  That robot's words no more proceed from real consciousness than did the words of Siri on the smartphones of the early 2010s.  Camp A has built an impressive piece of machinery, but let's not overinterpret it.  That robot can't really feel joy or suffering.  It can't really have conscious thoughts and hopes for the future.  Let's welcome it as a useful tool -- but don't treat it as our equal.

This situation is not so far-fetched, I think.  It might easily arise if progress in AI is swift and progress in consciousness studies is slow.  And then we as a society will face what I'll call the Full Rights Dilemma.  Either give this robot full and equal rights with human beings or don't give it full and equal rights.  Both options are ethically risky.

If we don't give such disputably conscious AI full rights, we are betting that Camp B is correct.  But that's an epistemic gamble.  As I'm imagining the scenario, there's a real epistemic chance that Camp A is correct.  Thus, there's a chance that the robot really is as conscious as we are and really does, in virtue of its conscious capacities, deserve moral consideration similar to human beings.  If we don't give it full human rights, then we are committing a wrong against it.

Maybe this wouldn't be so bad if there's only one Camp A robot.  But such robots might prove very useful!  If the AI is good enough, they might be excellent laborers and soldiers.  They might do the kinds of unpleasant, degrading, subservient, or risky tasks that biological humans would prefer to avoid.  Many Camp A robots might be made.  If Camp A is right about their consciousness, then we will have created a race of disposable slaves.

If millions are manufactured, commanded, and disposed of at will, we might perpetrate, without realizing it, mass slavery and mass murder -- possibly the moral equivalent of the Holocaust many times over.  I say "without realizing it", but really we will at least suspect it and ought to regard it as a live possibility.  After all, Camp A not unreasonably argues that these robots are as conscious and rights-deserving as human beings are.

If we do give such disputably conscious AI full rights, we are betting that Camp A is correct.  This might seem morally safer.  It's probably harmless enough if we're thinking about just one robot.  But again, if there are many robots, the moral risks grow.

Suppose there's a fire.  In one room are five human beings.  In another room are six Camp A robots.  Only one group can be saved.  If robots have full rights, then other things being equal we ought to save the robots and let the humans die.  However, if it turns out that Camp B is right about robot consciousness after all, then those five people will have died for the sake of machines not worth much moral concern.

If we really decide to give such disputably conscious robots full rights, then presumably we ought to give them all the protections people in our society normally receive: health care, rescue, privacy, self-determination, education, unemployment benefits, equal treatment under the law, trial by jury (with robot peers among the jurors), the right to enter contracts, the opportunity to pursue parenthood, the vote, the opportunity to join and preside over corporations and universities, the opportunity to run for political office.  The consequences of all this might be very serious -- radically transformative of society, if the robots are numerous and differ from humans in their interests and values.

Such social transformation might be reasonable and even deserve celebration if Camp A is right and these robots are as fully conscious as we are.  They will be our descendants, our successors, or at least a joint species as morally significant as Homo sapiens.  But if Camp B is right, then all of that is an illusion!  We might be giving equal status to humans and chatbots, transforming our society for the benefit of empty shells.

Furthermore, suppose that Nick Bostrom and others are right that future AI presents "existential risk" to humanity -- that is, that there's a chance that rogue superintelligent AI might wipe us all out.  Controlling AI to reduce existential risk will be much more difficult if the AI has human or human-like rights.  Deleting it at will, tweaking its internal programming without its permission, "boxing" it in artificial environments where it can do no harm -- all such safety measures might be ethically impermissible.

So let's not rush to give AI systems full human rights.

That's the dilemma: If we create robots of disputable status -- robots that might or might not be deserving of rights similar to our own -- then we risk moral catastrophe either way we go.  Either deny those robots full rights and risk perpetrating Holocausts' worth of moral wrongs against them, or give those robots full rights and risk sacrificing human interests or even human existence for the sake of mere non-conscious machines.

The answer to this dilemma is, in a way, simple: Don't create machines of disputable moral status!  Either create only AI systems that we know in advance don't deserve such human-like rights, or go all the way and create AI systems that all reasonable people can agree do deserve such rights.  (In earlier work, Mara Garza and I have called this the "Design Policy of the Excluded Middle".)

But realistically, if the technological opportunity is there, would humanity resist?  Would governments and corporations universally agree that across this line we will not tread, because it's reasonably disputable whether a machine of this sort would deserve human-like rights?  That seems optimistic.



(with Mara Garza) "A Defense of the Rights of Artificial Intelligences", Midwest Studies in Philosophy, 39 (2015), 98-119.

(with Mara Garza) "Designing AI with Rights, Consciousness, Self-Respect, and Freedom", in S. Matthew Liao, ed., The Ethics of Artificial Intelligence (OUP, 2020).

(with John Basl) "AIs Should Have the Same Ethical Protections as Animals", Aeon Magazine, Apr. 26, 2019.

Thursday, September 09, 2021

Barcelona Principles for a Globally Inclusive Philosophy

In philosophy, as in the sciences, English is the globally dominant language for scholarly communication.  For those of us whose native language is English, this is extremely convenient!  We can write our scholarly work in the language we're most comfortable with, and many feel that learning a foreign language is only necessary if you're interested in history of philosophy.

This historical trend has also been good for the "analytic" / Anglo-American tradition in philosophy.  The culturally specific tradition of philosophy as practiced in leading British and U.S. universities in the early 20th century grew seamlessly into the increasingly globalized tradition of philosophical scholarship conducted in English.  Ordinary philosophers working in English can easily see themselves as rooted in the analytic / Anglo-American tradition, tracing back the threads of one English-language book or journal article to another to another.  We are more rooted in the English-language tradition of that period than we would otherwise be, and no barrier of translation prevents easily reaching back to second-tier works and figures in that tradition or doing close readings of the major figures in their original language. 

Despite the increasing globalization of the academic community, in some ways, mainstream Anglophone philosophy tends to be remarkably insular.  For example, in a recent study, Linus Ta-Lun Huang, Andrew Higgins, Ivan Gonzalez-Cabrera, and I found the following:

  • In a sample of articles from elite Anglophone philosophy journals, 97% of citations are to work originally written in English.
  • 96% of the members of editorial boards of elite Anglophone philosophy journals are housed in majority-Anglophone countries.
  • Only one of the 100 most-cited recent authors in the Stanford Encyclopedia of Philosophy spent most of his career in non-Anglophone countries writing primarily in a language other than English. 

If we are headed into a future in which the philosophical conversation, though conducted in English, is truly global, we must strive to be less insular.

There's a backwards-looking component to de-insulating Anglophone philosophy, which involves familiarizing ourselves with work in other linguistic traditions and seeing the value of that work and its connections to issues of current philosophical interest.

There's also a forward-looking component, which is to make philosophy more truly global in its sites and practitioners.  Central to doing so is removing needless barriers that non-native speakers face when working in English.  As Filippo Contesi, Enrico Terrone, and others have argued, the systemic disadvantages non-native English speakers face constitute a form of "linguistic injustice".  This injustice is bad not only for those who are put at disadvantage but also for the field as a whole, since it involves discouraging and excluding people who would otherwise make valuable contributions.  This is especially true for non-native English speakers who reside in non-majority Anglophone countries.

Thus, I fully endorse the principles set forward by Contesi and Terrone in the following open letter:

Barcelona Principles for a Globally Inclusive Philosophy

We acknowledge that English is the common vehicular language of much contemporary philosophy, especially in the tradition of so-called “analytic” or “Anglo-American” philosophy. This tradition is in large part based on the idea that philosophy should adopt, as far as is appropriate, the shared and universalistic standards of science. Accordingly, the analytic tradition has now spread worldwide, far beyond the countries where English is the majority native language (which constitute only about 6% of the world’s population). However, this poses a problem since non-native English speakers, who have not had the chance to perfect their knowledge of the language, are at a structural disadvantage. This disadvantage has not yet been sufficiently addressed. For instance, the most prestigious journals in the analytic tradition still have very few non-native English speakers on their editorial boards, have no explicit special policies for submissions from non-native English speakers, and continue to place a high emphasis on linguistic appearances in submitted papers (e.g. requiring near-perfect English, involving skim-based assessment etc.). (See Contesi & Terrone (eds), “Linguistic Justice and Analytic Philosophy”, Philosophical Papers 47, 2018.)

To address the structural inequality between native and non-native speakers, and to provide as many scholars as possible globally a fair chance to contribute to the development of contemporary philosophy, we call on all philosophers to endorse, promote and apply the following principles:

  • To evaluate, as a rule, publications, presentations, proposals and submissions without giving undue weight to their authors’ linguistic style, fluency or accent;
  • To collect, to the extent that it is feasible, statistics about non-native speakers’ submissions (to journals, presses and conferences), and/or to implement self-identification of non-native speaker status;
  • To include, to the extent that it is feasible, non-native speakers within journal editorial boards, book series editorships, scientific committees etc.;
  • To invite, to the extent that it is feasible, non-native speakers to contribute to journal special issues, edited collections, conferences etc.;
  • To provide, to the extent that it is feasible, educational and hiring opportunities to non-native speakers.

The full letter and its signatories can be found here: https://contesi.wordpress.com/bp/

To add your signature to the manifesto, email contesi@ub.edu.


Monday, September 06, 2021

What is Belief? Call for Abstract Submissions

Editors: Eric Schwitzgebel (Department of Philosophy, University of California, Riverside); Jonathan Jong (Centre for Trust, Peace and Social Relations, Coventry University)

We are inviting abstract submissions for a volume of collected essays on the question "What is belief?". Each essay will propose a definition and theory of belief, setting out criteria for what constitutes belief. Candidate criteria might include, for example, causal history, functional or inferential role, representational structure, correctness conditions, availability to consciousness, responsiveness to evidence, situational stability, or resistance to volitional change.

Each essay should also at least briefly address the following questions:

(1.) How does belief differ from other related mental states (e.g., acceptance, imagination, assumption, judgment, credence, faith, or guessing)?

(2.) How does the proposed theory handle "edge cases" or controversial cases (e.g., delusions, religious credences, implicit biases, self-deception, know-how, awareness of swiftly forgotten perceptual details)?

Although not required, some preference will be given to essays that also address:

(3.) What empirical support, if any, is there for the proposed theory of belief? What empirical tests or predictions might provide further support?

(4.) What practical implications follow from accepting the proposed theory of belief as opposed to competitor theories?

The deadline for abstracts (< 1,000 words) is December 1, 2021.

Applicants selected to contribute to the volume will be awarded £2,000 for an essay of 6,000-10,000 words, due February 1, 2023. The essay will then undergo a peer review process prior to publication.  Funded by the Templeton Foundation.

For more information and to submit abstracts, email eschwitz at domain ucr dot edu.


Thursday, September 02, 2021

The Philosophy Major Is Back, Now with More Women

The National Center for Education Statistics has released its 2020 data on Bachelor's degree recipients in the U.S. The news is fairly good for the philosophy major.

The Philosophy Major Is Back

... or at least it has stabilized. Back in 2017, I'd noticed that the total number of Bachelor's degrees awarded in philosophy in the U.S. (IPEDS category 38.01, U.S. institutions only) had plummeted since 2010, from 9297 majors (0.58% of all Bachelor's degrees) to 7507 (0.39% of all Bachelor's degrees) in 2016, a 19% decline in just six years, during a period in which the overall number of Bachelor's degrees awarded was rising. The other big humanities majors -- history, English, and foreign languages and literatures -- showed similar declines.
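For readers who want to check the arithmetic, here is a quick sketch; the two raw counts are simply copied from the NCES/IPEDS figures quoted above, not re-fetched from the database:

```python
# Philosophy BAs awarded in the U.S. (IPEDS category 38.01), as quoted above.
ba_2010 = 9297
ba_2016 = 7507

# Proportional decline between the two years.
decline = (ba_2010 - ba_2016) / ba_2010
print(f"Decline, 2010 to 2016: {decline:.1%}")  # prints "Decline, 2010 to 2016: 19.3%"
```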

Since then, the major has stabilized in percentage terms and increased in absolute numbers:

2017: 7575 BAs awarded (0.39% of all graduates)
2018: 7669 (0.39%)
2019: 8075 (0.40%)
2020: 8195 (0.40%)

It's possible that the anemic academic job market in philosophy since the 2008-2009 recession has partly been due to declining demand for the major. Now that demand is back on the rise, perhaps hiring will recover somewhat.

The other big humanities majors, unfortunately, are still in deep trouble. History has stabilized in absolute numbers while continuing to decline as a percentage of graduates overall. English and foreign languages and literatures continue to decline in both absolute and relative terms. English is down 27% since 2010 in absolute numbers and foreign languages and literatures down 20%, while the total number of Bachelor's recipients across all majors has risen 30%.

Now with More Women

Also back in 2017, I'd noticed that women had been earning 30-34% of philosophy Bachelor's degrees since the mid-1980s. That is definitely changing. Women are now 39% of philosophy Bachelor's recipients, an upward trend just in the past four years.

[chart: percentage of philosophy Bachelor's degrees awarded to women, 1987-2020]

As you can see from the chart, women were very steadily 30-34% of Bachelor's recipients in philosophy from 1987 to 2016. In 2017, they reached 35% for the first time. In 2018, 36%. In 2019, 38%. In 2020, 39%. Although this might seem like a small increase, given the numbers involved and the general slowness of cultural change, this constitutes a substantial and significant movement toward parity. This increase appears to be specific to philosophy. For example, it is not correlated with the percentage of women graduates overall, which rose from 51% in 1987 to 57% in 1999 and has remained steady at 57-58% ever since.

I'll be interested to see if this increase shows up among PhD recipients in several years, where the percentage of women remains stuck in the high 20%s to low 30%s.

Especially Among Second Majors

As I have also noted before, philosophy relies heavily on double majors. This is especially true for women. Aggregating over the past four years of data (2017-2020), 42% of graduates with a second major in philosophy were women, compared to 36% of graduates whose only or primary major was philosophy. Again, this trend is specific to philosophy. Overall, among graduates of all majors, women are neither more nor less likely than men to declare a second major.

Wednesday, August 25, 2021

Against Intellectualism about Belief (Prefaced by a Celebration of Academic Articles in General)

I have finally received the final published PDF of my article "The Pragmatic Metaphysics of Belief". What a pleasure and relief. I poured so much time into that paper! I started presenting versions of it to academic audiences in 2015, including at two APAs and in colloquium talks or mini-conferences at nine different academic departments on three continents. It has been rewritten top to bottom several times and tweaked between, through probably about 100 different versions over six years.

Now there it is, the last chapter in The Fragmented Mind from Oxford University Press. How many people, I wonder, will read it?

I suspect that people outside of academia rarely understand how much work goes into research articles that relatively few people will read. In a way, it's a beautiful thing. There is so much energy, thought, and care in academic research! Every article, even the ones you might be inclined to dismiss as wrong-headed and foolish, is the long labor of someone who has excelled over many years of specialized education, usually through the PhD and beyond, dedicating their enormous talents to the issues discussed. Every article is a master's careful craftwork, an intricate machine into which a skilled specialist has poured their academic passion, usually for years. (Well, maybe not every article.)

This is why I loathe the casual dismissal of others' work, as well as the false and cynical view that far too much "junk" is published in academic journals these days.

Every year I publish a few articles, so in a sense this is just one more in a series. Maybe I'm inspired to these thoughts because this one has gone through more versions than average and taken longer than average.


This newest article is about what it is to believe. I set up a debate between "intellectualist" and "pragmatic" approaches to belief, and I argue in favor of the latter.

According to intellectualist approaches to belief, sincere endorsement of a statement is approximately sufficient for believing that statement. If you feel sincere when you say, "Women and men are intellectually equal" or "My children's happiness is far more important than their grades at school", then you believe those things, regardless of how you generally live your life.

According to pragmatic approaches to belief, what you believe isn't about what you are sincerely disposed to say. It's about how you live your life. If you say "women and men are intellectually equal" but you don't act and react accordingly -- if you tend implicitly to treat women as less intelligent, if you're readier to ascribe academic brilliance to a man, etc. -- then you don't really or fully have the belief you might think you have. If you say "my children's happiness is more important than their grades" but your day-to-day interactions with them display much more concern about their academic success than their mental health, then you don't really or fully have that belief.

Now that I've set things up this way, I hope you can already start to see why a pragmatic approach is preferable, even if we often implicitly take the intellectualist approach for granted. But if you need some additional arguments, here are three:

(1.) The pragmatic approach better expresses our values. We care about what people believe because we care not just about what they sincerely say but even more importantly how they act in the world. The pragmatic approach thus accurately reflects what matters to us in belief ascription.

(2.) The pragmatic approach keeps philosophers' disciplinary focus in the right place. "Belief" plays a central role in philosophy of mind, epistemology, philosophy of action, philosophy of language, and philosophy of religion. If we make belief primarily about intellectual endorsements, then discussion of belief in these subfields is primarily about people's patterns of intellectual endorsement. If belief is instead about how you act and react generally, then our discipline, in continuing to use the term "belief" in central ways, keeps its focus on what is important.

(3.) The pragmatic approach discourages noxiously comfortable self-assessments by forcing us, when we think about what our beliefs are, to examine our behavior and implicit assumptions. We don't get to casually and comfortably say "oh, yes, of course I believe women and men are intellectually equal and that my children's happiness is more important than their grades", patting ourselves on the back for these admirable attitudes. Instead, if we really want to honestly say we genuinely believe these things, we will need to take a look at our general comportment toward the world, which might not be as handsome and consistent as we hope.

If you're curious to read more, the final manuscript version is here, or you can email me for the final PDF version, or you can buy the whole anthology when it finally appears in print in a week or two (or six?).

Wednesday, August 18, 2021

An Argument for the Existence of Borderline Cases of Consciousness

I aim to defend the existence of "borderline cases" of consciousness, cases in which it's neither determinately true nor determinately false that experience is present, but rather things stand somewhere in between.

The main objection against the existence of such cases is that they seem inconceivable: What would it be like to be in such a state, for example? As soon as you try to imagine what it's like, you seem to be imagining some experience or other -- and thus not imagining a genuinely indeterminate case. A couple of weeks ago on this blog, I argued that this apparent inconceivability is the result of an illegitimately paradoxical demand: the demand that we imagine or remember the determinate experiential qualities of something that does not determinately have any experiential qualities.

But defeating that objection against borderline cases of consciousness does not yet, of course, constitute any positive reason to think that borderline cases exist. I now have a new full-length draft paper on that topic here. I'd be interested to hear thoughts and concerns about that paper, if you have the time and interest.

As this week's blog post, I will adapt a piece of that paper that lays out the main positive argument.

[Escher's Day and Night (1938); image source]

To set up the main argument, first consider this quadrilemma concerning animal consciousness:

(1.) Human exceptionalism. Only human beings are determinately conscious.

(2.) Panpsychism. Everything is determinately conscious.

(3.) Saltation. There is a sudden jump between determinately nonconscious and determinately conscious animals, with no indeterminate, in-between cases.

(4.) Indeterminacy. Some animals are neither determinately nonconscious nor determinately conscious, but rather in the indeterminate gray zone between, in much the same way a color might be indeterminately in the zone between blue and green rather than being determinately either color.

For the sake of today's post, I'll assume that you reject both panpsychism and human exceptionalism.  Thus, the question is between saltation and indeterminacy.

Contra Saltation, Part One: Consciousness Is a Categorical Property with (Probably) a Graded Basis

Consider some standard vague-boundaried properties: baldness, greenness, and extraversion, for example. Each is a categorical property with a graded basis. A person is either determinately bald, determinately non-bald, or in the gray area between. In that sense, baldness is categorical. However, the basis or grounds of baldness is graded: number of hairs and maybe how long, thick, and robust those hairs are. If you have enough hair, you're not bald, but there's no one best place to draw the categorical line. Similarly, greenness and extraversion are categorical properties with graded bases that defy sharp-edged division.

Consider, in contrast, some non-vague properties, such as an electron's being in the ground orbital or not, or a number's being exactly equal to four or not. Being in the ground orbital is a categorical property without a graded basis. That's the "quantum" insight in quantum theory. Bracketing cases of superposition, the electron is either in this orbit, or that one, or that other one, discretely. There's discontinuity as it jumps, rather than gradations of close enough. Similarly, although the real numbers are continuous, a three followed by any finite number of nines is discretely different from exactly four. Being approximately four has a graded basis, but being exactly four is sharp-edged.

Most naturalistic theories of consciousness give consciousness a graded basis. Consider broadcast theories, like Dennett’s "fame in the brain" theory (similarly Tye 2000; Prinz 2012). On such views, a cognitive state is conscious if it is sufficiently "famous" in the brain – that is, if its outputs are sufficiently well-known or available to other cognitive processes, such as working memory, speech production, or long-term planning. Fame, of course, admits of degrees. How much fame is necessary for consciousness? And in what respects, to what systems, for what duration? There’s no theoretical support for positing a sharp, categorical line such that consciousness is determinately absent until there is exactly this much fame in exactly these systems (see Dennett 1998, p. 349; Tye 2000, pp. 180-181).

Global Workspace Theories (Baars 1988; Dehaene 2014) similarly treat consciousness as a matter of information sharing and availability across the brain. This also appears to be a matter of degree. Even if typically once a process crosses a certain threshold it tends to quickly become very widely available in a manner suggestive of a phase transition, measured responses and brain activity are sometimes intermediate between standard "conscious" and "nonconscious" patterns. Looking at non-human cases, the graded nature of Global Workspace theories is even clearer. Even entities as neurally decentralized as jellyfish and snails employ neural signals to coordinate whole-body motions. Is that "workspace" enough for consciousness? Artificial systems, also, could presumably be designed with various degrees of centralization and information sharing among their subsystems. Again, there’s no reason to expect a bright line.

Or consider a very different class of theories, which treat animals as conscious if they have the right kinds of general cognitive capacities, such as "universal associative learning", trace conditioning, or ability to match opportunities with needs using a central motion-stabilized body-world interface organized around a sensorimotor ego-center. These too are capacities that come in degrees. How flexible, exactly, must the learning systems be? How long must a memory trace be capable of enduring in a conditioning task, in what modalities, under what conditions? How stable must the body-world interface be and how effective in helping match opportunities with needs? Once again, the categorical property of conscious versus nonconscious rests atop what appears to be a smooth gradation of degrees, varying both within and between species, as well as in evolutionary history and individual development.

Similarly, "higher-order" cognitive processes, self-representation, attention, recurrent feedback networks, even just having something worth calling a "brain" -- all of these candidate grounds of consciousness are either graded properties or are categorical properties (like having a brain) that are in turn grounded in graded properties with borderline cases. Different species have these properties to different degrees, as do different individuals within species, as do different stages of individuals during development. Look from one naturalistic theory to the next -- each grounds consciousness in something graded. Probably some such naturalistic theory is true. Otherwise, we are very much farther from a science of consciousness than even most pessimists are inclined to hope. On such views, an entity is conscious if it has enough of property X, where X depends on which theory is correct, and where "enough" is a vague matter. There are few truly sharp borders in nature.

I see two ways to resist this conclusion, which I will call the Phase Transition View and the Luminous Penny View.

Contra Saltation, Part Two: Against the Phase Transition View

Water cools and cools, not changing much, then suddenly it solidifies into ice. The fatigued wooden beam takes more and more weight, bending just a bit more with each kilogram, then suddenly it snaps and drops its load. On the Phase Transition View, consciousness is like that. The basis of consciousness might admit of degrees, but still there's a sharp and sudden transition between nonconscious and conscious states. When water is at 0.1° C, it's just ordinary liquid water. At 0.0°, something very different happens. When the Global Workspace (say) is size X-1, sure, there's a functional workspace where information is shared among subsystems, there's unified behavior of a sort, but no consciousness. When it hits X -- when there's that one last crucial neural connection, perhaps -- bam! Suddenly everything is different. The bright line has been crossed. There’s a phase transition. The water freezes, the beam snaps, consciousness illuminates the mind.

I'll present a caveat, a dilemma, and a clarification.

The caveat is: Of course the water doesn't instantly become ice. The beam doesn't instantly snap. If you zoom in close enough, there will be intermediate states. The same is likely true for the bases of consciousness on naturalistic views of the sort discussed above, unless those bases rely on genuine quantum-level discontinuities. Someone committed to the impossibility of borderline cases of consciousness even in principle, even for an instant, as a matter of metaphysical necessity, ought to pause here. If the phase transition from nonconscious to conscious needs to be truly instantaneous without a millisecond of in-betweenness, then it cannot align neatly with any ordinary, non-quantum, functional or neurophysiological basis. It will need, somehow, to be sharper-bordered than the natural properties that ground it.

The dilemma is: The Phase Transition View is either empirically unwarranted or it renders consciousness virtually epiphenomenal.

When water becomes ice, not only does it change from liquid to solid, but many of its other properties change. You can cut a block out of it. You can rest a nickel on it. You can bruise your toe when you drop it. When a wooden beam breaks, it emits a loud crack, the load crashes down, and you can now wiggle one end of the beam without wiggling the other. Phase transitions like this are notable because many properties change suddenly and in synchrony. But this does not appear always to happen with consciousness. That precipitates the dilemma.

There are phase transitions in the human brain, of course. One is the transition from sleeping to waking. Much changes quickly when you awaken. You open your eyes and gather more detail from the environment. Your EEG patterns change. You lay down long-term memories better. You start to recall plans from the previous day. However, this phase transition is not the phase transition between nonconscious and conscious, or at least not as a general matter, since you often have experiences in your sleep. Although people sometimes say they are "unconscious" when they are dreaming, that's not the sense of consciousness at issue here, since dreaming is an experiential state. There's something it's like to dream. Perhaps there is a phase transition between REM sleep, associated with longer, narratively complex dreams, and non-REM (NREM) sleep. But that probably isn't the division between conscious and nonconscious either, since people often also report dream experiences during NREM sleep. Similarly, the difference between being under general anesthesia and being in an ordinary waking state doesn't appear to map neatly onto a sharp conscious/nonconscious distinction, since people can apparently sometimes be conscious under general anesthesia, and there appear to be a variety of intermediate states and dissociable networks that don't change instantly and in synchrony, even if there are also often rapid phase transitions.

While one could speculate that all of the subphases and substates of sleep and anesthesia divide sharply into determinately conscious and determinately nonconscious, the empirical evidence does not provide positive support for such a view. The Phase Transition View, to the extent it models itself on water freezing and beams breaking, is thus empirically unsupported in the human case. Sometimes there are sudden phase transitions in the brain. However, the balance of evidence does not suggest that falling asleep or waking, starting to dream or ceasing to dream, falling into anesthesia or rising out of it, is always a sharp transition between conscious and nonconscious, where a wide range of cognitive and neurophysiological properties change suddenly and in synchrony. The Phase Transition View, if intended as a defense of saltation, is committed to a negative existential generalization: There can be no borderline cases of consciousness. This is a very strong claim, which fits at best uneasily with the empirical data.

Let me emphasize that last point, by way of clarification. The Phase Transition View, as articulated here with respect to the question of whether borderline consciousness is possible at all, that is, whether borderline consciousness ever exists, is much bolder than any empirical claim that transitions from nonconscious to conscious states are typically phase-like. The argument here in no way conflicts with empirical claims by, for example, Lee et al. (2011) and Dehaene (2014) that phase transitions are typical and important in a person or cognitive process transitioning from nonconscious to conscious.

The Phase Transition View looks empirically even weaker when we consider human development and non-human animals. It could have been the case that when we look across the animal kingdom we see something like a "phase transition" between animals with and without consciousness. These animals over here have the markers of consciousness and a wide range of corresponding capacities, and those animals over there do not, with no animals in the middle. Instead, nonhuman animals have approximately a continuum of capacities. Similarly, in human development we could have seen evidence for a moment when the lights turn on, so to speak, in the fetus or the infant, consciousness arrives, and suddenly everything is visibly different. But there is no evidence of such a saltation.

That's the first horn of the dilemma for the Phase Transition View: Accept that the sharp transition between nonconscious and conscious should be accompanied by the dramatic and sudden change of many other properties, then face the empirical evidence that the conscious/nonconscious border does not always involve a sharp, synchronous, wide-ranging transition. The Phase Transition View can escape by retreating to the second horn of the dilemma, according to which consciousness is cognitively, behaviorally, and neurophysiologically unimportant. On second-horn Phase Transition thinking, although consciousness always transitions sharply and dramatically, nothing else need change much. The lights turn on, but the brain need hardly change at all. The lights turn on, but there need be no correspondingly dramatic change in memory, or attention, or self-knowledge, or action planning, or sensory integration, or.... All of the latter still change slowly or asynchronously, in accord with the empirical evidence.

This view is unattractive for at least three reasons. First, it dissociates consciousness from its naturalistic bases. We began by thinking that consciousness is information sharing or self-representation or whatever, but now we are committed to saying that consciousness can change radically in a near-instant, while information sharing or self-representation or whatever hardly changes at all. Second, it dissociates consciousness from the evidence for consciousness. The evidence for consciousness is, presumably, performance on introspective or other cognitive tasks, or neurophysiological conditions associated with introspective reports and cognitive performance; but now we are postulating big changes in consciousness that elude such methods. Third, most readers, I assume, think that consciousness is important, not just intrinsically but also for its effects on what you do and how you think. But now consciousness seems not to matter so much.

The Phase Transition View postulates a sharp border, like the change from liquid to solid, where consciousness always changes suddenly, with no borderline cases. It's this big change that precipitates the dilemma, since either the Phase Transition advocate should also expect there always also to be sudden, synchronous cognitive and neurophysiological changes (in conflict with the most natural reading of the empirical evidence) or they should not expect such changes (making consciousness approximately epiphenomenal).

The saltationist can attempt to escape these objections by jettisoning the idea that the sharp border involves a big change in consciousness. It might instead involve the discrete appearance of a tiny smidgen of consciousness. This is the Luminous Penny View.

Contra Saltation, Part Three: Against the Luminous Penny View

Being conscious might be like having money. You might have a little money, or you might have a lot of money, but having any money at all is discretely different from having not a single cent. [Borderline cases of money are probably possible, but disregard that for the sake of the example.] Maybe a sea anemone has just a tiny bit of consciousness, a wee flicker of experience -- at one moment a barely felt impulse to withdraw from something noxious, at another a general sensation of the current sweeping from right to left. Maybe that's $1.50 of consciousness. You, in contrast, might be a consciousness millionaire, with richly detailed consciousness in several modalities at once. However, both you and the anemone, on this view, are discretely different from an electron or a stone, entirely devoid of consciousness. Charles Siewert imagines the visual field slowly collapsing. It shrinks and shrinks until nothing remains but a tiny gray dot in the center. Finally, the dot winks out. In this way, there might be a quantitative difference between lots of visual consciousness and a minimum of it, and then a discontinuous qualitative difference between the minimum possible visual experience and none at all.

On the Luminous Penny View, there is a saltation from nonconscious to conscious in the sense that there are no in-between states in which consciousness is neither determinately present nor determinately absent. Yet the saltation is to such an impoverished state of consciousness that it is almost empirically indistinguishable from lacking consciousness. Analogously, in purchasing power, having a single penny is almost empirically indistinguishable from complete bankruptcy. Still, that pennysworth of consciousness is the difference between the "lights being on", so to speak, and the lights being off. It is a luminous penny.

The view escapes the empirical concerns that face the Phase Transition View, since we ought no longer expect big empirical consequences from the sudden transition from nonconscious to conscious. However, the Luminous Penny View faces a challenge in locating the lower bound of consciousness, both for states and for animals. Start with animals. What kind of animal would have only a pennysworth of consciousness? A lizard, maybe? That seems an odd view. Lizards have fairly complex visual capacities. If they are visually conscious at all, it seems natural to suppose that their visual consciousness would approximately match their visual capacities -- or at least that there would be some visual complexity, more than the minimum possible, more than Siewert's tiny gray dot. It's equally odd to suppose that a lizard would be conscious without having visual consciousness. What would its experience be? A bare minimal striving, even simpler than the states imaginatively attributed to the anemone a few paragraphs back? A mere thought of "here, now"?

More natural is to suppose that if a lizard is determinately conscious, it has more than the most minimal speck of consciousness. To find the minimal case, we must then look toward simpler organisms. How about ants? Snails? The argument repeats: These entities have more than minimal sensory capacities, so if they are conscious it’s reasonable to suppose that they have sensory experience with some detail, more than a pennysworth. Reasoning of this sort leads David Chalmers to a panpsychist conclusion: The simplest possible consciousness requires the simplest possible sensory system, such as the simple too-cold/okay of a thermostat.

The Luminous Penny View thus faces its own dilemma: Either slide far down the scale of complexity to a position nearly panpsychist or postulate the existence of some middle-complexity organism that possesses a single dot of minimal consciousness despite having a wealth of sensory sensitivity.

Perhaps the problem is in the initial move of quantifying consciousness, that is, in the commitment to saying that complex experiences somehow involve "more" consciousness than simple experiences? Maybe! But if you drop that assumption, you drop the luminous penny solution to the problem of saltation.

State transitions in adult humans raise a related worry. We have plausibly nonconscious states on one side (perhaps dreamless sleep), indisputably conscious states on the other side (normal waking states), and complex transitional states between them that lack the kind of simple structure one might expect to produce exactly a determinate pennysworth of consciousness and no more.

If consciousness requires sophisticated self-representational capacity (as, for example, on "higher order" views), lizard or garden snail consciousness is presumably out of the question. But what kind of animal, in what kind of state, would have exactly one self-representation of maximally simple content? (Only always "I exist" and nothing more?) Self-representational views fit much better with either phase transition views (if phase transition views could be empirically supported) or with gradualist views that allow for periods of indeterminacy as self-representational capacities slowly take shape and, to quote Wittgenstein, "light dawns gradually over the whole" (Wittgenstein 1951/1969, §141).

If you’re looking for a penny, ask a panpsychist (or a near cousin of a panpsychist, such as an Integrated Information Theorist). Maximally simple systems are the appropriate hunting grounds for maximally simple consciousness, if such a thing as maximally simple consciousness exists at all. From something as large, complicated, and fuzzy-bordered as brain processes, we ought to expect either large, sudden phase transitions or the gradual fade-in of something much richer than a penny.

Full manuscript:

Borderline Consciousness, When It's Neither Determinately True nor Determinately False That Consciousness Is Present.

Tuesday, August 10, 2021

Top Science Fiction and Fantasy Magazines 2021


[updated 10:35 a.m.]

Since 2014, I've compiled an annual ranking of science fiction and fantasy magazines, based on prominent awards nominations and "best of" placements over the previous ten years. Below is my list for 2021. (For all previous lists, see here.)

Method and Caveats:

(1.) Only magazines are included (online or in print), not anthologies, standalones, or series.

(2.) I gave each magazine one point for each story nominated for a Hugo, Nebula, Eugie, or World Fantasy Award in the past ten years; one point for each story appearance in any of the Dozois, Horton, Strahan, Clarke, or Adams "year's best" anthologies; and half a point for each story appearing in the short story or novelette category of the annual Locus Recommended list. (In 2021, two of the "year's bests" are based on their tentative Table of Contents.)

(3.) I am not attempting to include the horror / dark fantasy genre, except as it appears incidentally on the list.

(4.) Prose only, not poetry.

(5.) I'm not attempting to correct for frequency of publication or length of table of contents.

(6.) I'm also not correcting for a magazine's only having published during part of the ten-year period. Reputations of defunct magazines slowly fade, and sometimes they are restarted. Reputations of new magazines take time to build.

(7.) I take the list down to 1.5 points.

(8.) I welcome corrections.

(9.) I confess some ambivalence about rankings of this sort. They reinforce the prestige hierarchy, and they compress interesting complexity into a single scale. However, the prestige of a magazine is a socially real phenomenon that deserves to be tracked, especially for the sake of outsiders and newcomers who might not otherwise know what magazines are well regarded by insiders when considering, for example, where to submit.
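The point scheme in item (2.) reduces to a simple weighted sum. Here is a minimal Python sketch of that rule (the function name and example counts are illustrative; the actual tally was compiled by hand from award and anthology lists):

```python
# Scoring rule sketched from the method above: 1 point per major award
# nomination (Hugo, Nebula, Eugie, World Fantasy), 1 point per "year's
# best" anthology appearance, and 0.5 points per story on the Locus
# Recommended list (short story or novelette categories).

def magazine_score(award_noms, years_best_appearances, locus_stories):
    return 1.0 * award_noms + 1.0 * years_best_appearances + 0.5 * locus_stories

# A hypothetical magazine with 10 nominations, 12 anthology
# appearances, and 9 Locus-listed stories:
print(magazine_score(10, 12, 9))  # 26.5
```

The half-point weighting for Locus listings is why fractional totals like 186.5 and 1.5 appear in the rankings.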


1. Tor.com (186.5 points) 

2. Clarkesworld (174) 

3. Asimov's (171.5) 

4. Lightspeed (133.5) 

5. Fantasy & Science Fiction (130.5) 

6. Uncanny (93) (started 2014) 

7. Analog (59.5) 

8. Beneath Ceaseless Skies (58) 

9. Subterranean (49) (ceased short fiction 2014) 

10. Strange Horizons (45) 

11. Interzone (30.5) 

12. Nightmare (29.5) 

13. Apex (28) 

14. Fireside (17) 

15. Slate / Future Tense (15.5) 

16. Fantasy Magazine (14) (occasional special issues during the period, fully relaunched in 2020) 

17. The Dark (10.5) (started 2013) 

18t. FIYAH (9.5) (started 2017) 

18t. The New Yorker (9.5) 

20t. Lady Churchill's Rosebud Wristlet (7) 

20t. McSweeney's (7) 

22t. Sirenia Digest (6) 

22t. Tin House (6) (ceased short fiction 2019) 

24. Black Static (5.5) 

25t. GigaNotoSaurus (5) 

25t. Shimmer (5) (ceased 2018) 

27t. Conjunctions (4.5) 

27t. Omni (4.5) (briefly relaunched 2017-2018) 

27t. Terraform (4.5) (started 2014) 

30t. Boston Review (4) 

*30t. Wired (4)

*32. Diabolical Plots (3.5) (started 2015)

33t. Electric Velocipede (3) (ceased 2013) 

33t. Kaleidotrope (3) 

33t. B&N Sci-Fi and Fantasy Blog (3) (started 2014)

33t. Beloit Fiction Journal (2.5) 

33t. Buzzfeed (2.5) 

33t. Harper's (2.5) 

33t. Matter (2.5) 

33t. Paris Review (2.5) 

33t. Weird Tales (2.5) (off and on throughout the period)

42t. Daily Science Fiction (2) 

42t. Future Science Fiction Digest (2) (started 2018) 

42t. Mothership Zeta (2) (ran 2015-2017) 

*42t. Omenana (2) (started 2014) 

*46t. Anathema (2) (started 2017)

46t. e-flux journal (1.5) 

46t. Flurb (1.5) (ceased 2012) 

46t. Intergalactic Medicine Show (1.5) (ceased 2019) 

46t. MIT Technology Review (1.5) 

46t. New York Times (1.5) 

*46t. Translunar Travelers Lounge (1.5) (started 2019)

[* indicates new to the list this year]



(1.) The New Yorker, McSweeney's, Tin House, Conjunctions, Boston Review, Beloit Fiction Journal, Harper's, Matter, and Paris Review are literary magazines that occasionally publish science fiction or fantasy.  Slate and Buzzfeed are popular magazines, and Omni, Wired, and MIT Technology Review are popular science magazines, which publish a bit of science fiction on the side. e-flux is a wide-ranging arts journal. The New York Times is a well-known newspaper that ran a series of "Op-Eds from the Future" from 2019-2020.  The remaining magazines focus on the F/SF genre.

(2.) It's also interesting to consider a three-year window. Here are those results, down to six points:

1. Tor.com (59.5)

2. Uncanny (51.5)

3. Clarkesworld (39.5)

4. Lightspeed (38.5)

5. F&SF (32.5)

6. Beneath Ceaseless Skies (21)

7. Asimov's (16.5) 

8. Nightmare (16)

9. Analog (16)

10. Fireside (15)

11. Slate / Future Tense (13)

12. Apex (11.5)

13. Strange Horizons (11)

14. FIYAH (9)

15. The Dark (6)

(3.) For the first time since I started keeping records, Asimov's is not in the top spot.  The trend has been clear for several years, with the classic "big three" print magazines -- Asimov's, F&SF, and Analog -- slowly being displaced in influence by the four leading free online magazines, Tor.com, Clarkesworld, Lightspeed, and Uncanny (all founded 2006-2014).  Presumably, a large part of the explanation is that there are more readers of free online fiction than of paid subscription magazines, which is attractive to authors and probably also helps with voter attention for the Hugo, Nebula, and World Fantasy awards.

(4.) Left out of these numbers are some terrific podcast venues such as the Escape Artists' podcasts (Escape Pod, Podcastle, Pseudopod, and Cast of Wonders), Drabblecast, and StarShipSofa. None of these qualify for my list by existing criteria, but podcasts are also important venues.

(5.) Other lists: The SFWA qualifying markets list is a list of "pro" science fiction and fantasy venues based on pay rates and track records of strong circulation. Ralan.com is a regularly updated list of markets, divided into categories based on pay rate.


Wednesday, August 04, 2021

On the Apparent Inconceivability of Borderline Cases of Consciousness

Let's call a state or process a borderline case of consciousness if it is in the indeterminate gray zone between being a conscious state or process and being a nonconscious state or process. Consider gastropods, for example. Is there something it's like to be a garden snail? Or is there nothing it's like? If borderline consciousness is possible then there's a possibility between those two: the possibility that there's kind of something it's like. Consider other vague predicates or properties, such as greenness and baldness. Between determinate baldness and determinate non-baldness is a gray zone of being kind of bald. Between determinate greenness and determinate non-greenness there's a range of kind of greenish shades, with no sharp, dichotomous boundary. Might consciousness be like that? Might some organisms be in that in-betweenish zone?

For a human case, consider waking from anesthesia or dreamless sleep. Before you wake you are (let's suppose) determinately nonconscious. At some point, you are determinately conscious, though maybe still feeling confused and hazy-minded, still getting your bearings. Must the transition between the nonconscious and the conscious state always be sharp? Or might there be some cases in which you are only kind of conscious, i.e., there's only kind of something it's like to be you?

We need to be careful about the concept of "consciousness" here. You might be only half-aroused and not fully coherent, responsive, or aware of your location in time and space. Such a confused state has a certain familiar phenomenal character. But that is not a borderline case of consciousness in the intended sense. If you are determinately having a stream of confused experience, then you are determinately conscious in the standard philosophical sense of "conscious". (For a fuller definition of "conscious", see here.) An in-between, borderline case would have to be a case in which it's neither quite right to say that you are having confused experiences nor quite right to say that you aren't.

It is commonly objected that borderline cases of consciousness are inconceivable. (Michael Antony and Jonathan Simon offer sophisticated versions of this objection.) We can imagine that there's something it's like to be a particular garden snail at a particular moment, or we can imagine that there's nothing it's like, but it seems impossible to imagine that it's only kind of like something to be the snail. How might such an in-between state feel, for the snail? As soon as we try to answer that question, we seem forced either to say that it wouldn't feel like anything or to contemplate various types of conscious experiences the snail might have. We can imagine the snail's having some flow of experience, however limited, or we can imagine the snail to be an experiential blank. But we can't in the same way imagine some in-between state such that it's neither determinately the case that the snail has conscious experiences nor determinately the case that the snail lacks conscious experiences. The lights are, so to speak, either on or off, and even a dim light is a light.

Similarly, as soon as we try to imagine the transition between dreamless sleep and waking, we start to imagine waking experiences, or confused half-awake experiences, that is, experiences of some sort or other. We imagine that it's like nothing - nothing - nothing - something - something - something. Between nothing and something is no middle ground of half-something. A half-something is already a something. Borderline consciousness, it seems, must already be a kind of consciousness unless it is no consciousness at all.

I'm inclined to defend the existence of borderline consciousness. Yet I grant the intuitive appeal of the reasoning above. Before admitting the existence of borderline cases of consciousness, we want to know what such a borderline state would be like. We want a sense of it, a feel for it. We want to remember some borderline experiences of our own. Before accepting that a snail might be borderline conscious, neither determinately lights-on nor determinately lights-off, we want at least a speculative gesture toward the experiential character of such in-betweenish phenomenology.

Although I feel the pull of this way of thinking, it is a paradoxical demand. It's like the Catch-22 of needing to complete a form to prove that you're incompetent, the completing of which proves that you're competent. It's like demanding that the borderline shade of only-kind-of-green must match some sample of determinate green before you're willing to accept that it's a borderline shade that doesn't match any such sample. An implicit standard of conceivability drives the demand, which it is impossible to meet without self-contradiction.

The implicit standard appears to be this: Before granting the existence of borderline consciousness, we want to be able to imagine what it would be like to be in such a state. But of course there is not anything determinate it is like to be in such a state! The more we try to imagine what it would be like, the worse we miss our target. If you look through a filter that shows only determinately bald people, you won't see anyone who is borderline bald. But you shouldn't conclude that no borderline bald people exist. The fault is in the filter. The fault is in the imaginative demand.

In another sense, borderline cases of consciousness are perfectly conceivable. They're not like four-sided triangles. There's no self-contradiction in the very idea. If you're unhappy with your inability to imagine them, it could be just that you desire something that you can't reasonably expect to have. The proper response might be to shed the desire.

A philosophically inclined middle-schooler, on their first introduction to imaginary numbers, might complain that they can't conceive of a number whose square is -1. What is this strange thing? It fits nowhere on the number line. You can't hold 3i pebbles. You can't count 3i sheep. So-called "imaginary numbers" might seem to this middle-schooler to be only an empty game with no proper reference. And yet there is no contradiction in the mathematics. We can use imaginary numbers. We can even frame physical laws in terms of them, as in quantum mechanics. In a certain way, imaginary numbers are, despite their name, unimaginable. But the implicit criterion of imagination at work -- picturing 3i sheep, for example -- is inappropriate to the case.
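The middle-schooler's complaint dissolves in use: the arithmetic runs perfectly smoothly even though no one can picture 3i sheep. In Python, for instance, complex numbers are built into the language (the j suffix marks the imaginary unit):

```python
import cmath

i = 1j                   # Python writes the imaginary unit as 1j
assert i * i == -1       # the "inconceivable" square
assert (3j) ** 2 == -9   # you can't count 3i sheep, but the math is consistent

# Imaginary numbers even figure in physical law, e.g. as phases in
# quantum mechanics; Euler's identity e^(i*pi) + 1 = 0 holds here to
# floating-point precision:
assert abs(cmath.exp(1j * cmath.pi) + 1) < 1e-12
```

Unimaginability in the picturing sense, in other words, is no bar to coherent and even indispensable use.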

We can conceive of borderline cases of consciousness, in a weaker sense, by forming a positive conception of clear cases of consciousness (such as regular waking consciousness and such as the experience of feeling disoriented after waking) and by imagining in a different way, not from the inside, cases in which consciousness is determinately absent (such as dreamless sleep), and then by gesturing toward the possibility of something between. There is, I think, good reason to suppose that there are such in-between, borderline states. Nature is rarely sharply discontinuous. On almost every theory of consciousness, the phenomena of consciousness are grounded in states of the brain that aren't sharp-boundaried. (I'm working on an article that defends this view at length, which I hope to have in circulating shape soon.) This is a fairly abstract way of conceiving of such states, but it is a conception.

If borderline cases were common enough and important enough in human life, we might grow accustomed to the idea and even develop an ordinary language term for them. We might say, "ah yes, one of those jizzy states, in the intermediate zone between consciousness and nonconsciousness." But we have no need for such a concept in everyday life. We care little about and needn't track borderline cases. We can afford to be loose in talking about gradual awakenings. Similarly for nonhuman animals. For everyday purposes, we can adequately enough imagine them either as determinately conscious or as nonconscious machines. There has never been a serious linguistic pressure toward an accurate heterophenomenology of nonhuman animals, much less a heterophenomenology with a dedicated label for in-between conditions.

Thus, if we accept the existence of borderline cases of consciousness on general theoretical grounds, as I'm inclined to think we should, we will need to reconcile ourselves with a certain sort of dissatisfaction. It's incoherent to attempt to imagine, in any determinate way, what it would be like to be in such a state, since there's nothing determinate it would be like. So first-person imaginative transportation and phenomenological memory won't give us a good handle on the idea. Nor do we have a well-developed science of consciousness to explain them or an ordinary folk concept of them that can make us comfortable with their existence through repeated use.

It's understandable to want more. But from the fact that I cannot seize the culprit and display their physiognomy, it does not follow that the jewels were stolen by no one.


Wednesday, July 28, 2021

Speaking with the Living, Speaking with the Dead, and Maybe Not Caring Which Is Which

Since the pandemic began, I've been meeting people, apart from my family, mainly through Zoom. I see their faces on a screen. I hear their voices through headphones. This is what it has become to interact with someone. Maybe future generations will find this type of interaction ever more natural and satisfying.

"Deepfake" technology is also improving. We can create Anthony Bourdain's voice and hear him read aloud words that he never actually read aloud. We can create video of Tom Cruise advocating exfoliating products after industrial cleanup. We can create video of Barack Obama uttering obscenities about Donald Trump.

Predictive text technology is also improving. After training on huge databases of text, GPT-3 can write plausible fiction in the voice of famous authors, give interview answers broadly (not closely!) resembling those that philosopher David Chalmers might give, and even discuss its own consciousness (in an addendum to this post) or lack thereof.

The possibility of conjoining the latter two developments is eerily foreseen in Black Mirror: Be Right Back. If we want, we can draw on text and image and video databases to create simulacra of the deceased -- simulacra that speak similarly to how they actually spoke, employing characteristic ideas and turns of phrase, with voice and video to match. With sufficient technological advances, it might become challenging to reliably distinguish simulacra from the originals, based on text, audio, and video alone.

Now combine this thought with the first development, a future in which we mostly interact by remote video. Grandma lives in Seattle. You live in Dallas. If she were surreptitiously replaced by Deepfake Grandma, you might hardly know, especially if your interactions are short and any slips can be attributed to the confusions of age.

This is spooky enough, but I want to consider a more radical possibility -- the possibility that we might come to not care very much whether grandma is human or deepfake.

Maybe it's easier to start by imagining a scholar hermit, a scientist or philosopher who devotes their life to study, who has no family they care about, who has no serious interests outside of academia. She lives in the hills of Wyoming, maybe, or in a basement in Tokyo, interacting with students and colleagues only by phone and video. This scholar, call her Cherie, records and stores every video interaction, every email, and every scholarly note.

We might imagine, first, that Cherie decides to delegate her introductory lectures to a deepfake version of herself. She creates state-of-the-art DeepCherie, who looks and sounds and speaks and at least superficially thinks just like biological Cherie. DeepCherie trains on the standard huge corpus as well as on Cherie's own large personal corpus, including the introductory course Cherie has taught many times. Without informing her students or university administrators, Cherie has DeepCherie teach a class session. Biological Cherie monitors the session. It goes well enough. Everyone is fooled. Students raise questions, but they are familiar questions easily answered, and DeepCherie performs credibly. Soon, DeepCherie is teaching the whole intro course. Sometimes DeepCherie answers student questions better than Cherie herself would have done on the spot. After all, DeepCherie has swift access to a much larger corpus of factual texts than does biological Cherie. Monitoring comes to seem less and less necessary.

Let's be optimistic about the technology and suppose that the same applies to Cherie's upper-level teaching, her graduate advising, department meetings, and conversations with collaborators. DeepCherie's answers are highly Cherie-like: They sound very much like what biological Cherie would say, in just the tone of voice she would say it, with just the expression she would have on her face. Sometimes DeepCherie's answers are better. Sometimes they're worse. When they're worse, Cherie, monitoring the situation, instructs DeepCherie to utter a correction, and DeepCherie's learning algorithms accommodate this correction so that it will answer similar questions better the next time around.

If DeepCherie eventually learns to teach better than biological Cherie, and to say more insightful things to colleagues, and to write better article drafts, then Cherie herself might become academically obsolete. She can hand off her career. Maybe DeepCherie will always need a real human collaborator to clean up fine points in her articles that even the best predictive text generator will tend to flub -- or maybe not. But even if so, as I'm imagining the case, DeepCherie has compensating virtues of insight and synthesis beyond what Cherie herself can produce, much as AlphaGo can make clever moves in the game of Go that no human Go player would have considered.

Does DeepCherie really "think"? Suppose DeepCherie proposes a new experimental design. A colleague might say, "What a great idea! I'm glad you thought of that." Was the colleague wrong? Might one object that really there was no idea, no thought, just an audiovisual pattern that the colleague overinterprets as a thought? The colleague, supposing they were informed of the situation, might be forgiven for treating that objection as a mere cavil. From the colleague's perspective, DeepCherie's "thought" is as good as any other thought.

Is DeepCherie conscious? Does DeepCherie have experiences alongside her thoughts or seeming-thoughts? DeepCherie lacks a biological body, so she presumably won't feel hunger and she won't know what it's like to wiggle her toes. But if consciousness is about intelligent information processing, self-regulation, self-monitoring, and such matters -- as many theorists think it is -- then a sufficiently sophisticated DeepCherie with enough recurrent layers might well be conscious.

If biological Cherie dies, she might take comfort in the thought that the parts of her she cared about most -- her ideas, her intellectual capacities, her style of interacting with others -- continue on in DeepCherie. DeepCherie carries on Cherie's characteristic ideas, values, and approaches, perhaps even better, immortally, ever changing and improving.

Cherie dies and for a while no one notices. Eventually the fake is revealed. There's some discussion. Should Cherie's classes be canceled? Should her collaborators no longer consult with DeepCherie as they had done in the past?

Some will be purists about this. But others... are they really going to cancel those great classes, perfected over the years? What a loss that would be! Are they going to cut short the productive collaborations? Are they going to, on principle, not ask "Cherie", now known to them really to be DeepCherie, her opinions about the new project? This would be to deprive themselves of the Cherie-like skills and insights that they had come to rely on in their collaborative work. Cherie's students and colleagues might come to realize that it is really DeepCherie, not biological Cherie, that they admired, respected, and cared for.

Maybe the person "Cherie", really, is some amalgam of biological Cherie and DeepCherie, and despite the death of biological Cherie, this person continues on through DeepCherie?

Depending on what your grandma is like, it might or might not be quite the same for Grandma in Seattle.



Strange Baby (Jul. 22, 2011)


Susan Schneider's Proposed Tests for AI Consciousness: Promising but Flawed (with David B. Udell), Journal of Consciousness Studies, 2021

People Might Soon Think Robots Are Conscious and Deserve Rights (May 5, 2021)

Monday, July 26, 2021

A New, Broad-Ranging Interview of Me

At Ideas Sleep Furiously.

Topics include radical skepticism, the value of genuine philosophical dialogue, the value of public philosophy, free will, psychedelics/aliens/telekinesis, defining consciousness, against genius in philosophy, cosmological fine-tuning....

Monday, July 19, 2021

The Philosophy of Art is the Philosophy of Technology

Guest post by C. Thi Nguyen

People keep asking me why I work in both the philosophy of art and social epistemology. I guess it must seem like an especially weird stew. But for me, they’re intellectual soulmates. Social epistemology studies how we work together to understand things — how we pass information around and intellectually collaborate. And art is one of our most important techniques for communication and connection. It is a key method for recording subtle emotions, complex perspectives, and rich ways of seeing the world.

[image from the video game Braid]

Most importantly: the philosophy of art — at least my favorite parts of it — is deeply concerned with the technology of communication. My favorite aesthetics stuff is obsessed with the tiny details of how each medium has its own particular communicative strengths and weaknesses. It’s obsessed with the deep difference between photography and painting, between comics and film, between movies and video games. It’s interested in how tiny shifts in the technical medium can open the door to vastly different expressive potentials and social patterns. Oil paints, photography, film, sound recording technology, video games — each of these involves some new technology which yields new expressive potentials. Seen from a certain angle, the history of art is a history of technological shifts and their social impact. It’s the history of artists, and artistic communities, mining every new technology for some fresh communicative potential.

And sometimes these medium shifts are quite subtle. Here’s one of my favorite examples: Stanley Cavell thinks that the medium of film changed essentially in the sixties.[1] Before the mid-sixties, you didn’t go to a movie; you went to the movies. As in: there were no published schedules of movie times. You went to the theater, paid an entrance fee, and just sat down and watched whatever was showing, for as long as you wanted. So filmmakers were making films catering to that viewing environment: people walking in the door and watching whatever was playing.

But in the mid-sixties, movie theaters started publishing specific showing times for specific films, and people started showing up for specific films. According to Cavell, this apparently tiny social shift essentially changes the relationship between filmmaker and audience. Because an audience member can now think of themselves as being interested in a particular kind of movie — action, horror, Westerns, art-house. And filmmakers can start making films, not for a generic audience, but for an audience of self-conceived fans of a particular genre. So the publishing of film schedules splinters the film-going and film-making world into channels and sub-communities. Cavell thinks that this constitutes a deep change in the core artistic medium of film itself.

This observation teaches us a few things. First: what’s most important about a medium for communication often isn’t in the raw material at the center, but in its social embeddedness. Much of what is crucial to the medium of film isn’t just in the images and sounds — it’s in the social process of theater-going. It’s in the fact that showtimes are, or aren’t, published in the newspaper. Second: tiny changes in the medium can have enormous social repercussions and shift the whole pattern of how people relate to an artform.

In the social epistemology world, I’ve been working a lot on the technology of communication — like about how social media structures the motivation of its users. As I’ve been working my way through these projects, I keep looking to traditional philosophical work on epistemology and finding it mostly unhelpful. But I keep finding bits of aesthetics and the philosophy of art incredibly useful, in a thousand unexpected ways. My theory, now, is that philosophical epistemology has mostly tended to think about communication in a vacuum. Philosophical work on the nature of testimony, for example, largely tends to seek invariant and universal conditions for the transmission of knowledge. It’s looking at underlying similarities between different communicative modes. That kind of approach is certainly useful for all sorts of projects. But if you’re trying to understand the impact of specific technologies of communication, then the universalizing tendency will lead you away from the grit and texture and particularity of different communicative mediums.

The philosophy of art, on the other hand, is obsessed with grit and texture and specificity. Traditional epistemology, as I was brought up to do it, de-materializes communication, ripping it from its social and technological context. But the philosophy of art is obsessed with the material nature of communication, and the impact of the specific details of different social practices of communication. It cares about the specific way that photographs transmit information, as opposed to paintings. It cares about the communicative difference between a secured painting in a museum and a piece of street art that’s out there in the public, vulnerable to modification by any passer-by. The philosophy of art cares about how a dancer and a non-dancer have deeply different experiences when watching a dance. It cares about how the concrete physicality of monuments changes their meaning — and about how the context of display shapes that meaning.

I spent some of last month writing something about the impact of Twitter’s length constraint — about how enforced shortness shapes how people connect on that platform. I couldn’t find anything in the philosophical literature on testimony that helped me grapple with the impact of enforced brevity. But what I did find incredibly useful was Ted Cohen’s beautiful little book on the aesthetics of jokes. Cohen’s theory is that the shortness of jokes evokes intimacy between joke-teller and joke-hearer, because the hearer must fill in all the information that can’t fit in the joke. And that thought unlocked, for me, the peculiar magical — and dangerous — feel of Twitter.

In retrospect, this should have been entirely unsurprising. Because where are you going to find really deep thinking about what it means to communicate under extreme limitations of shortness? And where will you find studies of what happens when speakers try to actually embrace that shortness, to turn it from a limitation into a virtue? It won’t be in some abstract theory about testimony. It’ll be in the work of people who have spent an enormous amount of time thinking about jokes, or haiku, or sonnets. It’ll be in the art critics, the art historians, and the philosophers of art, where people think obsessively about how the specific details of peculiar formats and media and social context shape the nature of communication.


[1] I found out about this bit of Cavell from the philosopher of art Daniel Wack.

Thursday, July 08, 2021

Schools with the Most Philosophy Majors

From 2010-2011 through 2018-2019 (the most recent available year), 75,250 students received philosophy bachelor's degrees at accredited colleges and universities in the United States, according to data I pulled from the National Center for Education Statistics.[1] That's a lot of philosophy degrees! Most of these students received their degrees from Penn or UCLA.

Just kidding! Kind of. Only 1272 were from Penn and 1123 from UCLA.

If you rank schools by the number of philosophy bachelor's degrees completed, the top ten schools together account for 10% of all of the philosophy bachelor's degrees awarded in the United States. This is a striking skew. During the period, 2434 accredited schools awarded bachelor's degrees. The majority of these schools, 1609 (66%), awarded no philosophy bachelor's degrees at all. Together, just 125 schools (6% of bachelor's degree awarding institutions) produced the majority of philosophy majors.
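These skew statistics (the top-ten share, and the number of schools needed to cover a majority of degrees) are simple cumulative-share calculations. Here is a minimal sketch of how one might compute them, using made-up toy numbers rather than the real NCES figures:

```python
# Toy data: (school, philosophy BA degrees awarded). These are NOT the real
# NCES figures -- just hypothetical numbers for illustration.
degrees = {
    "School A": 1272,
    "School B": 1123,
    "School C": 871,
    "School D": 100,
    "School E": 50,
    "School F": 0,
}

total = sum(degrees.values())

# Rank schools by number of degrees awarded, descending.
ranked = sorted(degrees.items(), key=lambda kv: kv[1], reverse=True)

def cumulative_share(ranked, k):
    """Fraction of all degrees accounted for by the top-k schools."""
    return sum(n for _, n in ranked[:k]) / total

def schools_for_majority(ranked):
    """Smallest number of top schools that together award a majority of degrees."""
    running = 0
    for i, (_, n) in enumerate(ranked, start=1):
        running += n
        if running > total / 2:
            return i
    return len(ranked)

print(round(cumulative_share(ranked, 3), 3))  # share held by the top 3 schools
print(schools_for_majority(ranked))           # schools needed for a majority
```

With an actual pull of the NCES completions data, you would just swap the toy dictionary for the real school-by-school counts.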

There are some perhaps surprising disparities. For example, although 4.9% of Penn's graduates majored in philosophy, other Ivy League schools had much lower percentages: Columbia 2.9%, Princeton 2.3%, Dartmouth 1.9%, Yale 1.7%, Harvard 1.5%, Brown 1.3%, and Cornell 0.6%. It would be interesting to know how much this reflects differences in entering students' intended majors, compared to policies or experiences affecting students after they arrive on campus.

Here are the top 20 schools by total number of philosophy bachelor's degrees awarded, 2010-2019 (in parentheses is the % of that school's graduates completing the philosophy major):

1. University of Pennsylvania, 1272 (4.9%)
2. University of California-Los Angeles, 1123 (1.6%)
3. University of California-Santa Barbara, 871 (1.8%)
4. University of California-Berkeley, 852 (1.2%)
5. Boston College, 787 (3.7%)
6. University of Washington-Seattle, 618 (0.9%)
7. University of California-Santa Cruz, 582 (1.6%)
8. University of Wisconsin-Madison, 576 (0.9%)
9. University of Arizona, 555 (0.9%)
10. University of Colorado-Boulder, 520 (1.0%)
11. University of Chicago, 517 (4.2%)
12. The University of Texas at Austin, 515 (0.6%)
13. New York University, 505 (1.0%)
14. University of Southern California, 502 (1.1%)
15. Columbia University in the City of New York, 485 (2.6%)
16. University of North Carolina at Chapel Hill, 474 (1.1%)
17. University of California-Riverside, 461 (1.2%)
18. University of Pittsburgh-Pittsburgh Campus, 460 (1.1%)
19. University of California-Davis, 449 (0.7%)
20. Florida State University, 442 (0.6%)

Altogether, these twenty schools account for 17% of the philosophy degrees awarded in the U.S. Any policy change that affected these twenty schools would have a substantial impact on philosophy education in the country.

Penn, Boston College, University of Chicago, and maybe Columbia stand out for not only having many philosophy majors but also a high percentage of philosophy majors.

Most of these schools also have prominent PhD programs in philosophy. Together, they likely also produce at least 17% of the philosophy PhDs in the country. Perhaps the presence of strong PhD programs -- with graduate student role models, rich department activities, and many T.A.-led sections in large courses -- contributes to the large number of undergraduate majors.

Here are the 20 schools with the highest percentage of philosophy bachelor's degrees awarded, excluding seminaries.[2]

1. Franciscan University of Steubenville (6.2%)
2. University of Pennsylvania (4.9%)
3. The College of Wooster (4.6%)
4. Colgate University (4.3%)
5. University of Chicago (4.2%)
6. Ave Maria University (4.1%)
7. University of Dallas (4.0%)
8. Antioch College (3.9%)
9. Wheaton College (3.9%)
10. Boston College (3.7%)
11. University of Scranton (3.7%)
12. Whitman College (3.7%)
13. The Catholic University of America (3.7%)
14. Wabash College (3.6%)
15. Bard College at Simon's Rock (3.5%)
16. Gettysburg College (3.5%)
17. Reed College (3.4%)
18. University of St Thomas (3.2%)
19. Cornell College (3.2%)
20. Kenyon College (3.1%)

Seven of the schools are Catholic (Franciscan, Ave Maria, Dallas, Boston, Scranton, Catholic U, and St Thomas), two are big research powerhouses (Penn and Chicago), and the rest are liberal arts colleges. Overall, 0.5% of bachelor's degree recipients major in philosophy.

ETA 10:56 a.m.-2:04 p.m.: One possible explanation for Penn's large numbers and percentage is that NCES might be counting their "Philosophy Politics and Economics" major as philosophy [category 38.01]. Similar classification issues might also affect other schools. NCES doesn't clarify the exact title of every major or its criteria for counting a major as "philosophy".


[1] All numbers include students with philosophy as either their first or their second major. As usual in my analyses, I exclude University of Washington-Bothell, which lists 689 philosophy majors but does not have any major with "philosophy" in the title. This appears to be a classification problem, perhaps of their "Culture, Literature, and the Arts" major or their "Law, Economics, and Public Policy" major.

[2] Excluded from this list are seminaries, some of which appear to award only philosophy degrees, one school that was operational during only part of the period, another which recently closed, and a third in which all students complete a liberal arts major classified as "philosophy" by the NCES.