Friday, April 26, 2024

Neurons Aren't Special: A Copernican Argument

In virtue of what do human beings have conscious experiences?  How is it that there's "something it's like" to be us, while there's (presumably) nothing it's like to be a rock or a virus?  Our brains must have something to do with it -- but why?  Is it because brains are complex information processors?  Or because brains guide the sophisticated behavior of bodies embedded in rich environments?  Or because neurons in particular have a special power to give rise to consciousness?


In a paper in progress with Jeremy Pober (partly anticipated in some previous blog posts), I've been developing what I call a Copernican argument against the last of these options, the specialness of neurons.

[Dall-E image of a space alien reading a book titled "Are Humans Conscious?"]

Why might one be tempted to think neurons are special?  As I argue in my paper on whether the United States might literally be conscious, on the most straightforward interpretation of most materialist/physicalist/naturalist views of consciousness, what is special about brains are high-level structural or informational properties (which the U.S. might well possess), rather than, say, specific low-level features of neurons, such as the presence of RNA and calcium ions.

But some famous thought experiments might seem to speak against this idea.

Ned Block, for example, imagines an entity that talks (or generalizing, behaves outwardly in many respects) just like a human being, but which is composed basically of a giant if-then lookup table (a "Blockhead").  He also imagines instantiating the high-level functional architecture of a human (described by a Turing-machine table) by having the residents of China coordinate to instantiate that structure (the "Chinese nation" thought experiment).  Such entities, Block suggests, are unlikely to be conscious.  If we were to create an android like Data from Star Trek, the entity might behave superficially much like us but lack consciousness in virtue of being built very differently inside.

John Searle similarly imagines a "Chinese room" consisting of him reading from a rule book and seeming to converse in Chinese, without any of the relevant conscious thoughts, or an assembly of beer cans and wire, powered by windmills, that acts and reacts outwardly just like a human being  (though at a slower pace).  Surely, Searle suggests, no arrangement of beer cans, wire, and windmills, no matter how sophisticated, could give rise to consciousness.  That's just not the right kind of stuff.  Neurons, he says, have the causal power to generate consciousness, but not everything does.  Neurons are, in that respect, at least somewhat special.  Computer chips, despite their massive computational power, might not have that special something.

It doesn't follow from Block's or Searle's arguments that neurons are special in virtue of specific biological features like RNA and calcium ions.  Neither Block nor Searle commits to such a view, nor am I aware of any influential theorist of consciousness who does.  But the possibility at least becomes salient.  It becomes desirable to have an argument that whatever it is about the brain that makes it special enough to generate consciousness, it's not such low-level biological details.

It can help to conceptualize the issue in terms of space aliens.  If we were to discover space aliens that behaved outwardly in highly sophisticated ways -- perhaps like us living in complex societies, with complex technology and communications -- and it turned out that their underlying architecture were different from ours with respect to such biological details, would we be forced to be agnostic about their consciousness?  Would we have to say, "Hold on!  No neurons?  Maybe they don't have the right stuff for consciousness!  They might be mere zombies, no more conscious than stones or toasters, for all their complex behavior."  Or would it be reasonable to assume that they are conscious, despite the architectural differences, barring evidence that their seeming complexity is all some elaborate ruse?

If we had the right theory of the architecture of consciousness, now would be the perfect time to deploy it.  Ah, the aliens fortunately have (or sadly lack) a global workspace, or high information integration, or higher-order representations of the right sort, or whatever!  But as I've argued, there's reason to be skeptical about all such theories.

Here's where an application of the cosmological principle of Copernican mediocrity can help.  According to Copernican principles in cosmology, we are licensed to assume (pending counterevidence) that we don't occupy any particularly special region of the cosmos, such as its exact center.  The Copernican principle of consciousness holds that we are similarly licensed to assume (pending counterevidence) that we aren't particularly special with respect to consciousness.  Among behaviorally sophisticated alien species of diverse biological form, we aren't luckily blessed with consciousness-instilling Earthiform neurons while every other species is experientially dark inside.  That would make us too special -- surprisingly special, in much the same way that it would be suspiciously, surprisingly special if we happened to be in the exact center of the cosmos.

In other words, the following Copernican Principle of Consciousness seems plausible:
Among whatever behaviorally sophisticated (approximately human level) species have evolved in the observable universe, we are not specially privileged with respect to consciousness.
That is, we are not among a small minority that are conscious, while the rest are not.  Nor do we have especially more consciousness than all the rest, nor especially good consciousness.

If we assume (as seems plausible, but which could be contested) that across the trillion galaxies of the observable universe, behaviorally sophisticated life has independently evolved at least a thousand times, and that in only a small minority of those cases do the entities have neurons that are structurally like ours at a fine level of anatomical detail (e.g., having RNA and calcium channels), then it follows that consciousness does not depend upon having neurons structurally like ours at that fine level of anatomical detail.

Friday, April 19, 2024

Flexible Pluralism about Others' Interpretation of Your Philosophical Work

Daniel Dennett has died, and the world has lost possibly its most important living philosopher.
[Image: Dennett in 2012]

My most vivid memory of Dennett is from a long face-to-face meeting I had with him in 2007 at the Association for the Scientific Study of Consciousness (ASSC).  Dennett was at that time among the world's most eminent philosophers, and I was a recently-tenured UC Riverside professor of no particular note.  It was apparently typical of Dennett's generosity toward junior scholars to set aside plenty of time for me.  At this meeting and in subsequent interactions (as I later came to believe), he also exhibited another, less-discussed type of philosophical generosity: flexible pluralism about others' interpretation of one's work.

Some context: Dennett can be read as saying two apparently contradictory things about introspection in his 1991 book Consciousness Explained (and elsewhere).  First, he seems to say that people are sometimes radically mistaken about their own conscious experiences (e.g., about the richness and clarity of the visual field).  Second, he seems to say that introspective reports are "fictions" and that as fictions, they can no more be mistaken than can Doyle's description of the color of Sherlock Holmes's easy chair.  In a 2007 paper I presented textual evidence of his apparent inconsistency on this issue and challenged Dennett to explain himself.  His published reply left me unsatisfied: He said that what people have fiction-writer-like "dictatorial authority" over is only "the (unwitting) metaphors in which that account [of their experiences] is ineluctably composed" (Dennett 2007, p. 254).

To me, this seemed far too weak, given the role of the relevant passages in Consciousness Explained: There's little point, and much that would be misleading, in all Dennett's talk about people as authoritative about conscious experiences if the "authority" amounts to nothing but the choice of metaphor.  I can choose to metaphorically describe the Sun as an all-penetrating eye -- but nothing of interest follows about my relationship to or knowledge of the Sun.

In our chat at the 2007 ASSC I pressed him on this, and developed an interpretation of his view which I later expressed to him in email and a subsequent blog post:
The key idea is that there are two sorts of "seemings" in introspective reports about experience, which Dennett doesn't clearly distinguish in his work. The first sense corresponds to our judgments about our experience, and the second to what's in the stream of experience behind those judgments. Over the first sort of "seeming" we have almost unchallengeable authority; over the second sort of seeming we have no special authority at all. Interpretations of Dennett that ascribe to him the view that there are no facts about experience beyond what we're inclined to judge about our experience emphasize the first sense and disregard the second. Interpretations that treat Dennett as a simple skeptic about introspective reports emphasize the second sense and ignore the first. Both miss something important in his view.
In both conversation and email, Dennett expressed enthusiasm about this interpretation of his work, seeming to accept it as the correct interpretation -- though maybe what he really meant (or should have meant) is that the above was a correct interpretation.

I add this caveat because, in the aftermath of that post, I encountered other relatively junior scholars who had discussed these same issues with Dennett, had arrived at interpretations different from mine, and likewise came away feeling that Dennett had affirmed their interpretation as correct.

In general, how fussy should a philosopher be about others' interpretation of their work?  Can one reasonably decline to object to -- or even praise and celebrate -- multiple conflicting interpretations?

I've argued that great historical philosophers generally admit of multiple reasonable interpretations, no one of which need be the uniquely correct interpretation.  Maybe Dennett is also a great philosopher; but regardless, we all generate philosophy with indeterminate content.  No one could think through all of the consequences of their views.  It's reasonable to expect a vague boundary between consequences that are explicitly thought but left implicit in the text vs those that are not explicitly intended by the author but are still implicitly part of the overall picture vs those that are broadly in keeping with the overall picture but require a few plausible steps vs those that require a few more steps or a somewhat more controversial additional commitment vs those that are clearly extensions beyond the original view.  If we consider an author's view over time, the noise and indeterminacy increase: The author's opinions might fluctuate; they might lose track of their commitments; they might even contradict themselves.

Also, our words are public entities over which we don't have total control.  As Tyler Burge has emphasized, if someone says "I have arthritis in my thigh" (thinking of arthritis not just as a disease of the joints) or "there's an orangutan in the fridge" (thinking of "orangutan" as the name of a fruit drink), they've said something very different than they may have intended.  Even assuming that such flat misuse or malapropism is rare in philosophy, in a smaller way, words like "democracy", "belief", "freedom" are not entirely at each philosopher's behest.  These words are neither exact in meaning nor complete putty in our hands.  What we say is not precisely fixed by our intentions.

Thus, philosophical authors don't have complete control over, authority over, and understanding of their own work.  If two people approach Dennett with two different competing interpretations of what he has been saying, both might be equally right and good.  He could reasonably say to each, "Yes, that's a terrific interpretation of what I meant!"

Philosophers differ in their fussiness about others' interpretations.  Some -- like Dennett (at least when interacting with more junior scholars; he was sometimes ferocious toward peers) -- generously see the merit in diverse interpretations of their work.  Others are much more difficult to satisfy.  In the extreme, I've met philosophers who resist any kind of summary, distillation, or translation that they did not themselves produce and insist that you are getting them wrong unless you stick with their exact phrasing, and who seem to react to objections by insisting post hoc that their view contains elements, previously invisible to readers, to handle every new concern that might arise.  (A little interpretative charity is good, but excessive self-charity is trying to eat more than your share of the cookies.)

We should all realize that we don't have complete control over our work, or a complete understanding of what we meant, that our work has commitments and implications which we sometimes intend and sometimes forget -- and that, especially if it's long and rich enough, others might legitimately find diverse and contradictory things within it, including both things we like and things we dislike.  Of course, some interpretations will be baldly, factually incorrect.  I'm not saying that philosophers should fail to object to straightforward interpretative mistakes.  But a certain amount of looseness, tolerance, and pluralism about how people interpret us constitutes appropriate modesty about our relation to what we've said.

Thursday, April 11, 2024

Philosophy and the Ring of Darkness

"As the circle of light expands, so also does the ring of darkness around it"

-- probably not Einstein

Although it wasn't a prominent feature of my recent book, The Weirdness of the World, I find myself returning to this metaphor in podcast interviews about the book (e.g., here; see also pp. 257-258 of Weirdness). I want to reflect a bit more on that metaphor today. Philosophy, I'll suggest, lives in the penumbra of darkness. It's what we do when we peer at the shadowy general forms just beyond the ring of light.

Within the ring of light lies what is straightforwardly knowable through common sense or mainstream science. Water is H2O. There's tea in this mug. Continents drift. You shouldn't schedule children's parties at 3:00 a.m. In the penumbra are matters of conjecture or speculation: There's alien life somewhere in the galaxy. Human beings are essentially just arrangements of material stuff. My retiring colleague will enjoy this Nietzsche finger puppet I bought for her.

Not all penumbral questions are philosophical, and philosophy doesn't dwell only in the penumbra. The question of whether there was once life on Mars is penumbral (not straightforwardly answerable), but it's not primarily philosophical, and neither is my question about the finger puppet -- at least not as these questions are normally approached. Also, some philosophical questions, for example about whether Kant ever wrote some particular sentence or whether Q follows from -P & (-Q -> P), lie well within the circle of light.
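Incidentally, that little logic claim is easy to verify mechanically. Here is a quick truth-table check -- a Python sketch of my own, not part of the original argument -- confirming that Q does follow from -P & (-Q -> P):

```python
from itertools import product

def implies(a, b):
    """Material conditional: a -> b."""
    return (not a) or b

# Q follows from the premises iff Q is true in every row of the
# truth table in which both -P and (-Q -> P) are true.
entailed = all(
    q
    for p, q in product([True, False], repeat=2)
    if (not p) and implies(not q, p)
)
print(entailed)  # True: the inference is valid
```

The only row satisfying both premises is P false, Q true, so Q holds in every premise-satisfying row -- the textbook definition of validity.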

However, the penumbra is philosophy's familiar home; and any sufficiently broad question about the penumbra -- that is, concerning large, general issues that aren't straightforwardly answerable -- is worth regarding as a philosophical question. Some of these philosophical questions are addressed by big-picture speculative scientists, and some by philosophers. I draw no sharp distinction between them. If you're speculating about the most fundamental matters in any area, you're philosophizing, as far as I'm concerned.

I don't mean to suggest that things in the circle of light are known indubitably or exceptionlessly. I might be wrong about what's in my mug. A 3:00 a.m. party might be exactly what my group of jetlagged toddlers needs. Continental drift theory might someday be overturned. Maybe even radical skepticism is true and I'm just a brain in a vat, completely deluded about all such matters. Still, there's a distinction between what we reasonably regard as yielding to the ordinary methods of science and common sense and what we recognize as tending to elude such methods, requiring a more speculative approach. The latter is what occupies the penumbra. Of course, there's no sharp line between light and dark, nor a sharp beginning or end to the penumbra. Some penumbral questions -- what is the ultimate origin of the universe, if any, before the Big Bang -- lie with their far edge well into the darkness.

Nor is the penumbra fixed. As the initial quote suggests, the circle of light can grow. What was once penumbral -- whether humans and monkeys are genetically related, whether every true sentence of arithmetic is in principle provable -- can be illuminated. What was once wild philosophical speculation can become ordinary science.

The world is weird, as I argue in my recent book. Regarding fundamental questions of cosmology and consciousness, we are stuck with a variety of bizarre speculative possibilities, for none of which we have decisive evidence. What's the proper interpretation of the bizarreness of quantum mechanics? Could advanced AI systems have genuine conscious experiences? We don't know, and we can't for the foreseeable future find out. There's no straightforward way to settle these questions, and the deeper we probe, the more we lose ourselves in thickets of competing theoretical bizarreness.

Does that mean that we will never know whether the many worlds interpretation of quantum mechanics is correct or whether consciousness could arise in a sufficiently sophisticated silicon-based computer? This is one question my podcast interviewers often ask.  

No, that doesn't follow. Science can prove some pretty amazing things, given time. Who'd have thought a couple centuries ago that just by looking up at the stars we could learn so much strange detail about the early history of the universe?

But as the light grows, the penumbral ring will expand to match. There will always be darkness beyond. There will always be room for philosophical speculation. We will never complete the project of understanding the basic structure of the world. If we figure out that X caused the Big Bang, we can then speculate about what caused X or whether X arose without a cause. If we figure out AI consciousness, in terms of Theory T of consciousness, we will uncover new topics of speculation concerning the wider applicability, or necessity, or fundamental grounds of Theory T.

Consider the Agrippan trilemma. To establish some proposition A, if we aren't just going to assume it without argument, we need an argument with at least one premise B. But then to establish proposition B, we need a further argument with at least one premise C. But then to establish C we need some further premise D, and so on. Either (1.) we simply stop dogmatically somewhere, assuming A (or B or C...) without argument; (2.) we argue in a circle, eventually coming back around to A (because B because C because D because... A); or (3.) we regress infinitely, so that there's always a new question to pursue, and we never reach an end.

The answer is that of course practically we need to start somewhere -- either with some premises we (perhaps reasonably) simply take for granted without further argument (Horn 1) or with some set of premises that mutually support each other and are assumed as a bundle (Horn 2). But we will always be able to ask: why assume that proposition or bundle? We can always go deeper, more fundamental. We can always ask for the why behind the why behind the why. We can always wonder about the conditions of the possibility of the structure of the grounds of whatever it is that we currently regard as fundamental. Behind every curtain stands another curtain. There is no last curtain we can open after which we have a complete understanding.

This retreating-curtain view can be justified on Agrippan grounds. Or we can defend it by induction: Never so far have we found a once-penumbral question which, when answered, didn't reveal new, more fundamental questions behind it. Just try to find a counterexample! You won't, because whatever answer you give me, I can always respond with the toddler's trick of once again asking "why?"

Even within the light, it's of course entirely possible to be an annoying philosopher-toddler. My mug contains tea. Well, how do I know that? By looking in it. Well, how do I know that looking into a mug is a good way to learn about its contents? Well, um... now already I'm starting to do some philosophy. Maybe because looking in general has seemed to be a reliable process in the past. Well, how do I know that? And even if I do know it, how do I know that the past is a reliable guide to the future? Starting anywhere, we can quickly find layers of philosophical depth. Think of the circle of light, perhaps, not as a two-dimensional figure but instead as a thin disk in three-dimensional space. Even if you start at its middle, with the seemingly most straightforward and securely known facts, dig just a few questions deep and you will find penumbra and darkness.

[DALL-E image of a circle of light with vague forms in a penumbra of darkness around it]

[minor revisions 12 Apr 2024]

Saturday, April 06, 2024

Every Scholar Should Feel Relatively Underappreciated

Yes, all parents can rationally think that their children are above average, and everyone could, in principle, reasonably regard themselves as better-than-average drivers. We can reasonably disagree about values. If we then act according to those divergent values, we can reasonably conclude we're better than average. If you think skillful driving involves X instead of Y and then drive in a more X-like manner, you can justifiably conclude you're more skillful than those dopey Y drivers.


It's the same with scholarship. Ideally, every scholar should feel more underappreciated than most other scholars.

Suppose you're a philosophy grad student. You could choose to focus on area X, Y, or Z. You decide that area X is the most interesting and important, and you come to that conclusion not unreasonably. Other students, equally reasonably, judge that Y is the most interesting and important, or Z is. These differences in opinion might, for example, arise from differences in what you're exposed to, or the enthusiasm levels of people you trust. Consequently, you focus your research on X. Your disagreeing peers equally reasonably focus on Y or Z.

Committing to area X leads you, understandably, to even more deeply appreciate the value of X. It's such a rich topic! You hear the names and read the articles of senior scholars A, B, and C in area X. Your impression of the field understandably reinforces your sense of the interest and importance of X. Senior scholars A, B, and C become ever bigger names in your mind. You publish a few articles. You are now in conversation with leading senior scholars on one of the most important topics in the field.

Your peer in area Y of course similarly comes to more deeply appreciate the value of Y and the contributions of senior philosophers D, E, and F. If you and your peer both publish articles on what might, from a third perspective (that of another peer focusing on topic Z), seem to be equally important topics, you might -- wholly rationally -- nonetheless see your own article as more important than your peer's, and vice versa.

Similarly for quality judgments: You and your peers might reasonably disagree about the relative importance of, say, formal rigor, clear prose, creative examples, and accurate grounding in historical texts. If you regard the first two as more central to philosophical quality and your peer regards the second two as more important, it is then reasonable that you each work harder to make your work better in those particular respects. Your work ends up more formally rigorous and more clearly written; theirs ends up more creative and historically grounded. Each of you will then, quite reasonably, regard your work as better than your peer's, each better adhering to the different quality standards that you reasonably endorse.

Similarly for other features of academia: Philosophers reasonably think philosophy is especially valuable. This starts as a selection effect: Those who relatively undervalue philosophy will tend not to seek a degree in it. As scholars dig deeper into their field, its value will become increasingly salient. Likewise, chemists will reasonably think chemistry is especially valuable, historians will think history is especially valuable, etc.

Scholars who think research articles are especially valuable will tend to produce disproportionately more of those. Scholars who think books are especially valuable will produce more of those. Scholars who find editing valuable will edit more. Scholars who value supervising students will supervise more. Scholars who value classroom teaching will put more energy into doing that well. Scholars who value administrative work will do more of that. And of course there's room for reasonable disagreement here. Whatever part of academia you tend to value, you will tend to invest in, with the result that you reasonably think that what you are doing is especially important.

The entirely predictable consequence is that you will feel relatively underappreciated. You are working on one of the most important topics, doing some of the highest quality work, and focusing on the most important parts of the scholarly life. Most of your peers are focused on less important topics, doing work that doesn't quite rise to your standards, and are distracted with less important matters. If you're rewarded with raises and promotions, you'll probably feel that they are overdue. If you're not rewarded with raises and promotions, you'll probably feel that others doing less important work are unfairly getting raises and promotions instead.

And this is how it should be. If you devote yourself to the areas of academic life that you reasonably but disputably regard as the most important, and if the system is fair and you aren't excessively modest, you should feel relatively underappreciated. It's a sign that you're adhering to your distinctive values.

[ChatGPT image of six scholars arguing around a seminar table with stuffed bookshelves in the background; the original image was all White men; this image was the output when I asked for the image to be revised to make two of the scholars women and two non-White; see the literature on algorithmic bias.]