Thursday, January 30, 2014

An Objection to Group Consciousness Suggested by David Chalmers

For a couple of years now, I have been arguing that if materialism is true, the United States probably has a stream of conscious experience over and above the conscious experiences of its citizens and residents. As it happens, very few materialist philosophers have taken the possibility seriously enough to discuss it in writing, so part of my strategy in approaching the question has been to email various prominent materialist philosophers to get a sense of whether they thought the U.S. might literally be phenomenally conscious, and if not, why not.

To my surprise, about half of my respondents said they did not rule out the possibility. Two of the more interesting objections came from Fred Dretske (my undergrad advisor, now deceased) and Dan Dennett. I detail their objections and my replies in the essay in draft linked above. Although I didn't target him because he is not a materialist, [update 3:33 pm: Dave points out that I actually did target him, though it wasn't in my main batch] David Chalmers also raised an objection about a year ago in a series of emails. The objection has been niggling at me ever since (Dave's objections often have that feature), and I now address it in my updated draft.

The objection is this: The United States might lack consciousness because the complex cognitive capacities of the United States (e.g., to wage war on and spy on its neighbors, to consume and produce goods, to monitor space for threatening asteroids, to assimilate new territories, to represent itself as being in a state of economic expansion) arise largely in virtue of the complex cognitive capacities of the people composing it and only to a small extent in virtue of the functional relationships between those people. Chalmers has emphasized to me that he isn't committed to this view, but I find it worth considering nonetheless, and others have pressed similar concerns.

This objection is not the objection that no conscious being could have conscious subparts (which I discuss in Section 2 of the essay and also here); nor is it the objection that the United States is the wrong type of thing to have conscious states (which I address in Sections 1 and 4). Rather, it's that what's doing the cognitive-functional heavy lifting in guiding the behavior of the U.S. are processes within people rather than the group-level organization.

To see the pull of this objection, consider an extreme example -- a two-seater homunculus. A two-seater homunculus is a being who behaves outwardly like a single intelligent entity but who, instead of having a brain, has two small people inside who jointly control the being's behavior, communicating with each other through very fast linguistic exchange. Plausibly, such a being has two streams of conscious experience, one for each homunculus, but no additional group-level stream for the system as a whole (unless the conditions for group-level consciousness are weak indeed). Perhaps the United States is somewhat like a two-seater homunculus?

Chalmers's objection seems to depend on something like the following principle: The complex cognitive capacities of a conscious organism (or at least the capacities in virtue of which the organism is conscious) must arise largely in virtue of the functional relationships between the subsystems composing it rather than in virtue of the capacities of its subsystems. If such a principle is to defeat U.S. consciousness, it must be the case both that

(a.) the United States has no such complex capacities that arise largely in virtue of the functional relationships between people, and

(b.) no conscious organism could have the requisite sort of complex capacities largely in virtue of the capacities of its subsystems.

Contra (a): This claim is difficult to assess, but being a strong, empirical negative existential (the U.S. has not even one such capacity), it seems a risky bet unless we can find solid empirical grounds for it.

Contra (b): This claim is even bolder. Consider a rabbit's ability to swiftly visually detect a snake. This complex cognitive capacity, presumably an important contributor to rabbit visual consciousness, might exist largely in virtue of the functional organization of the rabbit's visual subsystems, with the results of that processing then communicated to the organism as a whole, precipitating further reactions. Indeed, turning (b) almost on its head, some models of human consciousness treat subsystem-driven processing as the normal case: The bulk of our cognitive work is done by subsystems, who cooperate by feeding their results into a global workspace or who compete for fame or control.

So grant (a) for the sake of argument: The relevant cognitive work of the United States is done largely within individual subsystems (people or groups of people) who then communicate their results across the entity as a whole, competing for fame and control via complex patterns of looping feedback. At the very abstract level of description relevant to Chalmers's expressed (but, let me re-emphasize, not definitively endorsed) objection, such an organization might not be so different from the actual organization of the human mind. And it is of course much bolder to commit to the further view implied by (b), that no conscious system could possibly be organized in such a subsystem-driven way. It's hard to see what would justify such a claim.
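For concreteness, here is a minimal toy sketch of the kind of subsystem-driven, global-workspace-style architecture at issue. It is my own illustrative Python construction, not a model anyone in the literature is committed to: each subsystem does its cognitive work internally, and only the most salient result gets broadcast across the whole system.

```python
# A toy global-workspace-style architecture (illustrative only):
# subsystems do the heavy cognitive lifting internally; the system-level
# organization merely selects and broadcasts the winning result.
import random

class Subsystem:
    def __init__(self, name):
        self.name = name

    def process(self, stimulus):
        # All the complex processing happens *inside* the subsystem.
        salience = random.random()  # stand-in for internal computation
        return {"source": self.name,
                "content": f"{self.name} processed {stimulus!r}",
                "salience": salience}

def workspace_step(subsystems, stimulus):
    """One cycle: local processing, then a competition for 'fame'."""
    candidates = [s.process(stimulus) for s in subsystems]
    winner = max(candidates, key=lambda c: c["salience"])
    return winner  # broadcast system-wide, guiding further behavior

subsystems = [Subsystem(n) for n in ("vision", "audition", "memory")]
print(workspace_step(subsystems, "snake"))
```

The point of the sketch is just that the integrative, system-level step can be thin while the subsystems remain rich -- which is exactly the organization the objection treats as consciousness-precluding.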

The two-seater homunculus is strikingly different from a rabbit or human system (or even a Betelgeusian beehead) because the communication is only between two sub-entities, at a low information rate; but the U.S. is composed of about 300,000,000 sub-entities whose informational exchange is massive, so the case is not similar enough to justify transferring intuitions from the one to the other.

Thursday, January 23, 2014

New Essay in Draft: The Moral Behavior of Ethicists

... which is a recurrent topic of my research, as regular readers of this blog will know.

This new paper, co-authored with Joshua Rust, summarizes our work on the topic to date and offers a quantitative meta-analysis supporting our overall finding: professors of ethics behave neither morally better nor morally worse than do philosophers not specializing in ethics.

You might find it entirely unsurprising that ethicists should behave no differently from other professors. If you do find it unsurprising (Josh and I don't), you might still be interested in another paper of Josh's and mine, in which we think through some of the theoretical implications of this finding.

Tuesday, January 21, 2014

Stanislaw Lem's Proof that the External World Exists

Slowly catching up on science fiction classics, reading Lem's Solaris, I'm struck by how the narrator, Kris, escapes a skeptical quandary. Worried that his sensory experiences might be completely delusional, Kris concocts the following empirical test:

I instructed the satellite to give me the figure of the galactic meridians it was traversing at 22-second intervals while orbiting Solaris, and I specified an answer to five decimal points.

Then I sat and waited for the reply. Ten minutes later, it arrived. I tore off the strip of freshly printed paper and hid it in a drawer, taking care not to look at it.... Then I sat down to work out for myself the answer to the question I had posed. For an hour or more, I integrated the equations....

If the figures obtained from the satellite were simply the product of my deranged mind, they could not possibly coincide with [my hand calculations]. My brain might be unhinged, but it could not conceivably compete with the Station's giant computer and secretly perform calculations requiring several months' work. Therefore if the figures corresponded, it would follow that the Station's computer really existed, that I had really used it, and that I was not delirious (1961/1970, pp. 50-51).

Except in detail, Kris's test closely resembles an experiment Alan Moore and I have used in our attempt to empirically establish the existence of the external world (full paper in draft here).

Kris is hasty in concluding from this experiment that he must have used an actually existing computer. He might, for example, have been the victim of a deceiver with great computational powers, who can give him the meridians within ten minutes of his asking. And Kris would have done better, I think, to have looked at the readout before doing his own calculations. By not looking until the end, he leaves open the possibility that he delusively creates the figures supposedly from the satellite only after he has derived the correct answers himself. Assuming he can trust his memory and arithmetical abilities for at least a short duration (and if not, he's really screwed), Kris should look at the satellite's figures first, holding them steady before his mind, while he confirms by hand that the numbers make mathematical sense.
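Here is the revised protocol as a schematic Python sketch. `query_satellite` and `integrate_equations` are hypothetical stand-ins for the satellite's printout and the hour of hand integration; the only substantive point is the ordering.

```python
# Revised version of Kris's test (schematic; function names are
# hypothetical stand-ins): look at the external answer FIRST and hold it
# fixed, then verify it by hand -- so you can't delusively generate the
# "satellite" figures after deriving the answer yourself.

def run_test(query_satellite, integrate_equations, tolerance=1e-5):
    reported = query_satellite()      # step 1: read the printout now
    computed = integrate_equations()  # step 2: slow hand calculation
    # Agreement to five decimal places suggests the figures came from
    # somewhere outside one's own unaided, possibly deranged mind.
    return abs(reported - computed) < tolerance

# Demo with stand-in numbers:
print(run_test(lambda: 3.14159, lambda: 3.14159))  # True: figures agree
```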

Increasingly, I think the greatest science fiction writers are also philosophers. Exploring the limits of technological possibility inevitably involves confronting the central issues of metaphysics, epistemology, and human value.

Wednesday, January 15, 2014

Waves of Mind-Wandering in Live Performances

I'm thinking (again) about beeping people during aesthetic experiences. The idea is this. Someone is reading a story, or watching a play, or listening to music. She has been told in advance that a beep will sound at some unexpected time, and when the beep sounds, she is to immediately stop attending to the book, play, or whatever, and note what was in her stream of experience at the last undisturbed moment before the beep, as best she can tell. (See Hurlburt 2011 for extensive discussion of such "experience sampling" methods.)
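For the curious, here is a minimal sketch of such a beeper in Python. The session length and number of beeps are arbitrary illustrative parameters, not Hurlburt's actual protocol.

```python
# A bare-bones experience-sampling "beeper" (illustrative parameters):
# beep at unpredictable moments and prompt for an immediate report of
# the last undisturbed moment of experience.
import random
import time

def run_beeper(session_minutes=30, n_beeps=4):
    beep_times = sorted(random.uniform(0, session_minutes * 60)
                        for _ in range(n_beeps))
    start = time.time()
    for t in beep_times:
        time.sleep(max(0, start + t - time.time()))
        print("\aBEEP! What was in your experience just before the beep?")
        report = input("> ")  # immediate, minimally reconstructed report
        print(f"[{time.time() - start:.0f}s] recorded: {report}")

run_beeper(session_minutes=1, n_beeps=2)  # short demo run
```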

I've posted about this issue before; and although professional philosophy talks aren't paradigmatic examples of aesthetic performances, I have beeped people during some of my talks. One striking result: People spend lots of time thinking about things other than the explicit content of the performance -- for example, thinking instead about needing to go to the bathroom, or a sports bet they just won, or the weird color of an advertising flyer. And I'd bet Nutcracker audiences are similarly scatterbrained. (See also Schooler, Reichle, and Halpern 2004; Schubert, Vincs, and Stevens 2013.)


But I also get the sense that if I pause, I can gather the audience up. A brief pause is commanding -- in music (e.g., Roxanne), in film -- but especially in a live performance like a talk. Partly, I suspect, this is due to the contrast with previous noise levels, but it also seems to raise curiosity about what's next -- a topic change, a point of emphasis, some unplanned piece of human behavior. (How interesting it is when the speaker drops his cup! -- much more interesting, usually, in a sad and wonderfully primate way, than the talk itself.)

I picture people's conscious attention coming in waves. We launch out together reasonably well focused, but soon people start drifting off in their various directions. The speaker pauses or does something else that draws attention, and that gathers everyone briefly back together. Soon the audience is off drifting again.

We could study this with beepers. We could see if I'm right about pauses. We could see what parts of performance tend to draw people back from their wanderings and what parts of performance tend to escape conscious attention. We could see how immersive a performance is (in one sense of "immersive") by seeing how frequently people report being off topic vs. on a tangentially related topic vs. being focused on the immediate content of the performance. We could vastly improve our understanding of the audience experience. New avenues for criticism could open up. Knowing how to capture and manipulate the waves could help writers and performers create a performance more in line with their aesthetic goals. Maybe artists could learn to want waves and gatherings of a certain sort, adding a new dimension to their aesthetic goals.

As far as I can tell, no one has ever done a systematic experience sampling study during aesthetic experience that explores these issues. It's time.

Friday, January 10, 2014

Skeptical Fog vs. Real Fog

I am a passenger in a jumbo jet that is descending through turbulent night fog into New York City. I'm not usually nervous about flying, but the turbulence is getting to me. I know that the odds of dying in a jet crash with a major US airline are well below one in a million, but descent in difficult weather conditions is among the most dangerous parts of flight -- so maybe I should estimate my odds of death in the next few minutes as about one in a million or one in ten million? I can't say those are odds I'm entirely happy about.

But then I think: Maybe some radically skeptical scenario is true. Maybe, for example, I'm a short-term sim -- an artificial, computerized being in a small world, doomed soon to be shut down or deleted. I don't think that is at all likely, but I don't entirely rule it out. I have about a 1% credence that some radically skeptical scenario or other is true, and about 0.1% credence, specifically, that I'm in a short-term sim. In a substantial portion of these radically skeptical scenarios, my life will be over soon. So my credence that my life will soon end for some skeptical-scenario type reason is maybe about one in a thousand or one in ten thousand -- orders of magnitude higher than my credence that my life will soon end for the ordinary-plane-crash type of reason.
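To make the back-of-envelope arithmetic explicit -- using the post's own rough numbers, and cashing out "a substantial portion" purely for illustration as one in ten:

```python
# Rough credence comparison (the specific fraction is an assumption
# chosen only to illustrate the orders of magnitude).
p_crash = 1e-6            # dying in an ordinary crash, roughly
p_skeptical = 0.01        # some radically skeptical scenario is true
frac_imminent_end = 0.1   # stand-in for "a substantial portion"

p_skeptical_death = p_skeptical * frac_imminent_end  # = 1e-3
print(p_skeptical_death / p_crash)  # ~1000x the plane-crash risk
```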

Still, the plane-crash possibility worries me more than the skeptical possibility.

Does the fact that these skeptical reflections leave me emotionally cold show that I don't really, "deep down", have even a one-in-a-million credence in at least the imminent-death versions of the skeptical scenarios? Now, maybe I shouldn't worry about those scenarios even if I truly assign a non-trivial credence to them. After all, there's nothing I can do about them, no action that I can reasonably take in light of them. I can't, for example, buy sim-insurance. But if that's why the scenarios leave me unmoved, the same is true about the descending plane. There's nothing I can do about the fog; I need to just sit tight. As a general matter, helplessness doesn't eliminate anxiety.

Here my interest in radical skepticism intersects another of my interests, the nature of belief. What would be involved in really believing that there is a non-trivial chance that one will soon die because some radically skeptical scenario is true? Does genuine belief only require saying these things to oneself, with apparent sincerity, and thinking that one accepts them? Or do they need to get into your gut?

My view is that it's an in-between case. To believe, on my account, is to have a certain dispositional profile -- to be disposed to reason, and to act and react, both inwardly and outwardly, as ordinary people would expect someone with that belief to do, given their other related attitudes. So, for example, to believe that something carries a 1/10,000 risk of death is in part to be disposed sincerely to say it does and to draw conclusions from that fact (e.g., that it's riskier than something with a 1/1,000,000 risk of death); but it is also to have certain emotional reactions, to spontaneously draw upon it in one's everyday thinking, and to guide one's actions in light of it. I match the dispositional profile, to some extent, for believing there's a small but non-trivial chance I might soon die for skeptical-scenario-type reasons -- for example, I will sincerely say this when reflecting in my armchair -- but in other important ways I seem not to match the relevant dispositional profile.

It is not at all uncommon for people intellectually to accept certain propositions -- for example, that their marriage is one of the most valuable things in their lives, or that it's more important for their children to be happy than to get good grades, or that custodians deserve as much respect as professors -- while in their emotional reactions and spontaneous thinking, they do not very closely match the dispositional profile constitutive of believing such things. I have argued that this is one important way in which we can occupy the messy middle space between being accurately describable as believing something and being accurately describable as failing to believe it. My own low-but-not-negligible credence in radically skeptical scenarios is something like this, I suppose.

Thursday, January 02, 2014

Our Possible Imminent Divinity

We might soon be gods.

John Searle might be right that digital computers could never be conscious. Or the pessimists might be right who say we will blow ourselves up before we ever advance far enough to create real consciousness in computers. But let's assume, for the sake of argument, that Searle and the pessimists are wrong: In a few decades we will be producing genuinely conscious artificial intelligences in substantial quantity.

We will then have at least some features of gods: We will have created a new type of being, perhaps in our image. We will presumably have the power to shape our creations' personalities to suit us, to make them feel blessed or miserable, to hijack their wills to our purposes, to condemn them to looping circuits of pain or reward, to command their worship if we wish.

If consciousness is only possible in fully embodied robots, our powers might stop approximately there, but if we can create conscious beings inside artificial environments, we become even more truly divine. Imagine a simulated world inside a computer with its own laws and containing multiple conscious beings whose sensory inputs all flow in according to the rules of that world and whose actions are all expressed in that world -- The Sims but with conscious AIs.

[image from http://tapirangkasaterbabas.blogspot.com; go ahead and apply feminist critique whenever ready]

Now we can command not only the AI beings themselves but their entire world.

We approach omnipotence: We can work miracles. We can drop in Godzilla, we can revive the dead, we can move a mountain, undo errors, create or end the whole world on a whim. Zeus would be envious.

We approach omniscience: We can look at any part of the world, look inside anyone's mind, see the past if we have properly recorded it -- possibly, too, predict the future, depending on the details of the program.

We stand outside of space and to some extent time: Our created beings can point in any direction of the sphere and not point at us -- we are everywhere and nowhere, not on their map, though capable of seeing and reaching anywhere. If the sim has a fast clock relative to our time, we can seem to endure for millennia or longer. We can pause their time and do whatever we like unconstrained by their clock. We can rewind to save points and thus directly view and interact with the past, perhaps sprouting off new worlds from it or rewriting the history of the one world.
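Here is a toy sketch, in Python, of the sort of control I have in mind -- pausing, snapshotting, rewinding to a save point, sprouting a new branch of history. Everything here is illustrative; no real simulation framework is implied.

```python
# A toy simulated world with god-like controls: intervene ("miracles"),
# snapshot the whole state, and rewind to rewrite history.
import copy

class SimWorld:
    def __init__(self):
        self.tick = 0
        self.state = {"inhabitants": ["ada", "ben"], "events": []}
        self.save_points = {}

    def step(self, event=None):
        self.tick += 1
        if event:  # an intervention from outside the world: a miracle
            self.state["events"].append((self.tick, event))

    def save(self, label):
        # Snapshot everything; the world's clock is ours to control.
        self.save_points[label] = (self.tick, copy.deepcopy(self.state))

    def rewind(self, label):
        # Return to a save point; subsequent steps sprout a new history.
        self.tick, saved = self.save_points[label]
        self.state = copy.deepcopy(saved)

world = SimWorld()
world.step(); world.save("eden")
world.step("godzilla appears")   # miracle from outside the world
world.rewind("eden")             # undo: rewrite history from "eden"
world.step("mountain moved")
print(world.state["events"])     # only the new branch's events remain
```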

But will we be benevolent gods? What duties will we have to our creations, and how well will we execute those duties? Philosophers don't discuss this issue as much as they should. (Nick Bostrom and Eliezer Yudkowsky are exceptions, and there's some terrific science fiction, e.g., Ted Chiang. In this story, R. Scott Bakker and I pit the duty to maximize happiness against the duty to give our creations autonomy and self-knowledge.)

Though to our creations we will literally have the features of divinity and they might rightly call us their gods, from the perspective of this level of reality we might remain very mortal, weak, and flawed. We might even ourselves be the playthings of still higher gods.

Wednesday, January 01, 2014

What I Wrote in 2013

I hope it is not too vain to begin 2014 with a retrospect of what I wrote in 2013. I kind of enjoy gathering it here -- it helps convince me that my solitary office labors are not a waste -- and maybe some readers will find it useful.

This work appeared in print in 2013:
This work is finished and forthcoming:
Also in 2013, I began writing short speculative fiction in earnest.  I am not sure how this will turn out; I think it's too early for me to know if I'm any good at it.  My first effort, a collaboration with professional fiction writer R. Scott Bakker, appeared in Nature ("Reinstalling Eden", listed above).  I wish I could post drafts on my website and solicit feedback, as I do with my philosophy articles, but fiction venues seem to dislike that.

Update January 2:
I fear the ill-chosen title of this post might give some people the misleading impression that I wrote all of this material during 2013.  Most of the work that appeared in print was finalized before 2013, and a fair portion of the other work was at least in circulating draft before 2013.  Here's how things stood at the end of 2012; lots of overlap!