Thursday, June 23, 2016

How to Accidentally Become a Zombie Robot

Susan Schneider's beautifully clear TEDx talk on the future of robot consciousness has me thinking about the possibility of accidentally turning oneself into a zombie. (I mean "zombie" in the philosopher's sense: a being who outwardly resembles us but who has no stream of conscious experience.)

Suppose that AI continues to rely on silicon chips and that -- as Schneider thinks is possible -- silicon chips just aren't the right kind of material to host consciousness. (I'll weaken these assumptions below.) It's 2045 and you walk into the iBrain store, thinking about having your degenerating biological brain replaced with more durable silicon chips. Lots of people have done it already, and now the internet is full of programmed entities that claim to be happily uploaded people who have left their biological brains behind. Some of these uploaded entities control robotic or partly organic bodies; others exist entirely in virtual environments inside of computers. If Schneider is right that none of these silicon-chip-instantiated beings is actually conscious, then what has actually happened is that all of the biological people who "uploaded" actually committed suicide, and what exist are only non-conscious simulacra of them.

You've read some philosophy. You're worried about exactly that possibility. Maybe that's why you've been so slow to visit the local iBrain store. Fortunately, the iBrain company has discovered a way to upload you temporarily, so you can try it out -- so that you can determine introspectively for yourself whether the uploaded "you" really would be conscious. Federal regulations prohibit running an uploaded iBrain at the same time that the original source person is conscious, but the company can scan your brain non-destructively while you are sedated, run the iBrain for a while, then pause your iBrain and update your biological brain with memories of what you experienced. A trial run!

From the outside, it looks like this: You walk into the iBrain store, you are put to sleep, a virtual you wakes up in a robotic body and says "Yes, I really am conscious! Interesting how this feels!" and then does some jogging and jumping jacks to test out the body. The robotic body then goes to sleep and the biological you wakes up and says, "Yes, I was conscious even in the robot. My philosophical doubts were misplaced. Upload me into iBrain!"

Here's the catch: After you wake, how do you know those memories are accurate memories of having actually been conscious? When the iBrain company tweaks your biological neurons to install the memories of what "you" did in the robotic body, it's hard to see how you could be sure that those memories aren't merely presently conscious seeming-memories of past events that weren't actually consciously experienced at the time they occurred. Maybe the robot "you" really was a zombie, though you don't realize that now.

You might have thought of this possibility in advance, and so you might remain skeptical. But it would take a lot of philosophical fortitude to sustain that skepticism across many "trial runs". If biological you has lots of seeming-memories of consciousness as a machine, and repeatedly notices no big disruptive change when the switch is flipped from iBrain to biological brain, it's going to be hard to resist the impression that you really are conscious as a machine, even if that impression is false -- and thus you might decide to go ahead and do the upload permanently, unintentionally transforming yourself into an experienceless zombie.

But maybe if a silicon-chip brain could really duplicate your cognitive processes well enough to drive a robot that acts just as you would act, then the silicon-chip brain really would have to be conscious? That's a plausible (though disputable) philosophical position. So let's weaken the philosophical and technological assumptions a little. We can still get a skeptical zombie scenario going.

Suppose that the iBrain company tires of all the "trial runs" that buyers foolishly insist on, so the company decides to save money by not actually having the robot bodies do any of those things that the trial-run users think they do. Instead, when you walk in for a trial they sedate you and, based on what they know about your just-scanned biological brain, they predict what you would do if you were "uploaded" into a robotic body. They then give you false memories of having done those things. You never actually do any of those things or have any of those thoughts during the time your biological body is sedated, but there is no way to know that introspectively after waking. It would seem to you that the uploading worked and preserved your consciousness.

There can be less malicious versions of this mistake. Behavior and cognition during the trial might be insufficient for consciousness, or for full consciousness, while memory is nonetheless vivid enough to lead to retrospective attributions of full consciousness.

In her talk, Schneider suggests that we could tell whether silicon chips can really host consciousness by trying them out and then checking whether consciousness disappears when we do so; but I'm not sure this test would work. If nonconscious systems (whether silicon chip or otherwise) can produce both (a.) outwardly plausible behavior, and (b.) false memories of having really experienced consciousness, then we might falsely conclude in retrospect that consciousness is preserved. (This could be so whether we are replacing the whole brain at once or only one subsystem at a time, as long as "outward" means "outside of the subsystem, in terms of its influence on the rest of the brain".) We might then choose to replace conscious systems with nonconscious ones, accidentally transforming ourselves into zombies.

[image source]

Tuesday, June 14, 2016

Possible Architectures of Group Minds: Memory

by Eric Schwitzgebel and Rotem Herrmann

Suppose you have 200 bodies. "You"? Well, maybe not exactly you! Some hypothetical science fictional group intelligence.

How might memory work?

For concreteness, let's assume a broadly Ann Leckie "ancillary" setup: two hundred humanoid bodies on a planet's surface, each with an AI brain remotely connected to a central processor on an orbiting starship.

(For related reflections on the architecture of group perception, see this earlier post.)

Central vs. Distributed Storage

For simplicity, we will start by assuming a storage-and-retrieval representational architecture for memory.

A very centralized memory architecture might have the entire memory store in the orbiting ship, which the humanoid bodies access any time they need to retrieve a memory. A humanoid body, for example, might lean down to inspect a flower which it wants to classify, simultaneously sending a request for taxonomic information to the central unit. In contrast, a very distributed memory architecture might have all of the memory storage distributed in the humanoid bodies, so that if the humanoid doesn't have classification information in its own local brain it will have to send a request around to other humanoids to see if they have that information stored.

A bit of thought suggests that completely centralized memory architecture probably wouldn't succeed if the humanoid bodies are to have any local computation (as opposed to being merely dumb limbs). Local computation presumably requires some sort of working memory: If the local humanoid is reasoning from P and (P -> Q) to Q, it will presumably have to retain P in some way while it processes (P -> Q). And if the local humanoid is reaching its arm forward to pluck the flower, it will presumably have to remember its intention over the course of the movement if it is to behave coherently.

It's natural, then, to think that there will be at least a short-term store in each local humanoid, where it retains information relevant to its immediate projects, available for fast and flexible access. There needn't be a single short-term store: There could be one or more ultra-fast working memory modules for quick inference and action, and a somewhat slower short-term or medium-term store for contextually relevant information that might or might not prove useful in the tasks that the humanoid expects to confront in the near future.

Conversely, although substantial long-term information, not relevant to immediate tasks, might be stored in each local humanoid, if there is a lot of potential information that the group mind wants to be able to access -- say, snapshots of the entire internet plus recorded high-resolution video feeds from each of its bodies -- it seems that the most efficient solution would be to store that information in the central unit rather than carrying around 200 redundant copies in each humanoid. Alternatively, if the central unit is limited in size, different pieces could be distributed among the humanoids, accessible each to the other upon request.
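A minimal sketch, purely for illustration, of one such hybrid arrangement: each humanoid checks its own small local store first, then the central unit, then its peers. The class names and the three-tier lookup order are invented assumptions, not a claim about how such a system would actually be engineered.

```python
# Illustrative sketch only: hybrid memory storage for a group mind.
# Each humanoid tries its local store, then the central unit, then peers.

class CentralUnit:
    def __init__(self):
        self.long_term = {}              # large shared store on the orbiting ship

    def retrieve(self, key):
        return self.long_term.get(key)


class Humanoid:
    def __init__(self, name, central, peers=()):
        self.name = name
        self.central = central
        self.peers = list(peers)         # other humanoids reachable by request
        self.local = {}                  # small short-/medium-term local store

    def remember(self, key):
        if key in self.local:            # 1. fast local lookup
            return self.local[key]
        value = self.central.retrieve(key)   # 2. slower request to orbit
        if value is None:                # 3. poll the other humanoids
            for peer in self.peers:
                if key in peer.local:
                    value = peer.local[key]
                    break
        if value is not None:
            self.local[key] = value      # cache locally for immediate tasks
        return value


central = CentralUnit()
central.long_term["flower:taxonomy"] = "family Asteraceae"
a = Humanoid("A", central)
b = Humanoid("B", central, peers=[a])
print(b.remember("flower:taxonomy"))     # fetched centrally, then cached locally
```

Whether the second and third lookup steps are worth their communication delays is exactly the trade-off at issue here.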

Procedural memories or skills might also be transferred between long-term and short-term stores, as needed for the particular tasks the humanoids might carry out. Situation-specific skills, for example -- piloting, butterfly catching, Antarean opera singing -- might be stored centrally and downloaded only when necessary, while basic skills such as walking, running, and speaking Galactic Common Tongue might be kept in the humanoid rather than "relearned" or "re-downloaded" for every assignment.

Individual humanoids might also locally acquire skills, or bodily modifications, or body-modifications-blurring-into-skills that are or are not uploaded to the center or shared with other humanoids.

Central vs. Distributed Calling

One of the humanoids walks into a field of flowers. What should it download into the local short-term store? Possibilities might include: a giant lump of botanical information, a giant history of everything known to have happened in that location, detailed algorithms for detecting the presence of landmines and other military hazards, information on soil and wildlife, a language module for the local tribe whose border the humanoid has just crossed, or of course some combination of all these different types of information.

We can imagine the calling decision being reached entirely by the central unit, which downloads information into particular humanoids based on its overview of the whole situation. One advantage of this top-down approach would be that the calling decision would easily reflect information from the other humanoids -- for example, if another one of the humanoids notices a band of locals hiding in the bushes.

Alternatively, the calling decision could be reached entirely by the local unit, based upon the results of local processing. One advantage of this bottom-up approach would be that it avoids delays arising from the transmission of local information to the central unit for possibly computationally-heavy comparison with other sources of information. For example, if the local humanoid detects a shape that might be part of a predator, it might be useful to prioritize a fast call of information on common predators without having to wait for a call-up decision from orbit.

A third option would allow a local representation in one humanoid A to trigger a download into another humanoid B, either directly from the first humanoid or via the central unit. Humanoid A might message Humanoid B "Look out, B, a bear!" along with a download of recently stored sensory input from A and an instruction to the central unit to dump bear-related information into B's short term store.
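Purely as illustration, here is a toy sketch of one way requests from these different routes might be arbitrated within a single humanoid's short-term store; the priority numbers and module names are invented for the example.

```python
# Toy sketch: arbitrating "calling" requests from three sources.
# Priorities and module names are illustrative only (lower = more urgent).

import heapq

class ShortTermStore:
    def __init__(self):
        self.pending = []                 # heap of (priority, module, source)
        self.loaded = []

    def request(self, priority, module, source):
        heapq.heappush(self.pending, (priority, module, source))

    def load_next(self):
        if not self.pending:
            return "nothing pending"
        priority, module, source = heapq.heappop(self.pending)
        self.loaded.append(module)
        return f"loaded {module} (requested by {source}, priority {priority})"


store = ShortTermStore()
store.request(3, "botanical-database", source="central unit")      # top-down
store.request(1, "predator-recognition", source="local percept")   # bottom-up, urgent
store.request(2, "bear-information", source="humanoid A")          # peer-triggered

print(store.load_next())   # predator info loads first, without waiting on orbit
print(store.load_next())
```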

A well-engineered group mind might, of course, allow all three calling strategies. There will still be decisions about how much weight and priority to give to each strategy, especially in cases of...

Conflict

Suppose the central unit has P stored in its memory, while a local unit has not-P. What to do? Here are some possibilities:

Central dictatorship. Once the conflict is detected, the central unit wins, correcting the humanoid unit. This might make especially good sense if the information in the humanoid unit was originally downloaded from the central unit through a noisy process with room for error or if the central unit has access to a larger or more reliable set of information relevant to P.

Central subordination. Once the conflict is detected, the local might overwrite the central. This might make especially good sense if the central store is mostly a repository of constantly updated local information, for example if humanoid A is uploading a stream of sensory information from its short term store into the central unit's long term store.

Voting. If more than one local humanoid has relevant information about P, there might be a winner-take-all vote, resulting in the rewriting of P or not-P across all the relevant subsystems, depending on which representation wins the vote.

Compromise. In cases of conflict there might be compromise instead of dominance. For example, if the central unit has P and one peripheral unit has not-P, they might both write something like "50% likely that P"; analogously if the peripheral units disagree.

Retain the conflict. Another possibility is to simply retain the conflict, rather than changing either representation. The system would presumably want to be careful to avoid deriving conclusions from the contradiction or pursuing self-defeating or contradictory goals. Perhaps contradictory representations could be somehow flagged.

And of course there might be different strategies on different occasions, and the strategies can be weighted, so that if Humanoid A is in a better position than Humanoid B the compromise result might be 80% in favor of Humanoid A, rather than equally weighted.
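As a small illustration of that weighted version, here is a sketch (names, credences, and weights invented) in which a stored contradiction is resolved into a shared credence rather than a flat winner.

```python
# Illustrative sketch: resolving a stored contradiction by weighted compromise.
# Credences and weights below are invented for the example.

def weighted_compromise(beliefs):
    """beliefs: list of (credence_in_P, weight) pairs -> pooled credence in P."""
    total_weight = sum(weight for _, weight in beliefs)
    return sum(credence * weight for credence, weight in beliefs) / total_weight

# Central unit stores P (credence 1.0); Humanoid B stores not-P (credence 0.0).
print(weighted_compromise([(1.0, 1), (0.0, 1)]))   # equal weights -> 0.5, "50% likely that P"
print(weighted_compromise([(1.0, 1), (0.0, 4)]))   # trust B more  -> 0.2

# The pooled value could then overwrite both stores, or be written alongside a
# flagged record of the original conflict, depending on the strategy chosen.
```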

Similar possibilities arise for conflicts in memory calling -- for example if the local processors in Humanoid A represent bear-information download as the highest priority, the local processes in Humanoid B represent language-information download as urgent for Humanoid A, and the central unit represents mine detection as the highest priority.

Reconstructive Memory

So far we've been working with a storage-and-retrieval model of memory. But human memory is, we think, better modeled as partly reconstructive: When we "remember" information (especially complex information like narratives) we are typically partly rebuilding, figuring out what must have been the case in a way that brings together stored traces with other more recent sources of information and also with general knowledge. For example, as Bartlett found, narratives retold over time tend to simplify and move toward incorporating stereotypical elements even if those elements weren't originally present; and as Loftus has emphasized, new information can be incorporated into seemingly old memories without the subject being aware of the change (for example memories of shattered glass when a car accident is later described as having been at high velocity).

If the group entity's memory is reconstructive, all of the architectural choices we've described become more complicated, assuming that in reconstructing memories the local units and the central units are doing different sorts of processing, drawing on different pools of information. Conflict between memories might even become the norm rather than the exception. And if we assume that reconstructing a memory often involves calling up other related memories in the process, decisions about calling become mixed in with the reconstruction process itself.

Memory Filling in Perception

Another layer of complexity: An earlier post discussed perception as though memory were irrelevant, but an accurate and efficient perceptual process would presumably involve memory retrieval along the way. As our humanoid bends down to perceive the flower, it might draw exemplars or templates of other flowers of that species from long-term store, and this might (as in the human case) influence what it represents as the flower's structure. For example, in the first few instants of looking, it might tentatively represent the flower as a typical member of its species and only slowly correct its representation as it gathers specific detail over time.

Extended Memory

In the human case, we typically imagine memories as stored in the brain, with a sharp division between what is remembered and what is perceived. Andy Clark and others have pushed back against this view. In AI cases, the issue arises vividly. We can imagine a range of cases from what is clearly outward perception to what is clearly retrieval of internally stored information, with a variety of intermediate, difficult-to-classify cases in between. For example: on one end, the group has Humanoid A walk into a newly discovered library and read a new book. We can then create a slippery slope in which the book is digitized and stored increasingly close to the cognitive center of the humanoid (shelf, pocket, USB port, internal atrium...), with increasing permanence.

Also, procedural memory might be partly stored in the limbs themselves with varying degrees of independence from the central processing systems of the humanoid, which in turn can have varying degrees of independence from the processing systems of the orbiting ship. Limbs themselves might be detachable, blurring the border between body parts and outside objects. There need be no sharp boundary between brain, body, and environment.

[image source]

Monday, June 06, 2016

If You/I/We Live in a Sim, It Might Well Be a Short-Lived One

Last week, the famous Tesla and SpaceX CEO and PayPal cofounder Elon Musk said that he is almost certain that we are living in a sim -- that is, that we are basically just artificial intelligences living in a fictional environment in someone else's computer.

The basic argument, adapted from philosopher Nick Bostrom, is this:

1. Probably the universe contains vastly many more artificially intelligent conscious beings, living in simulated environments inside of computers ("sims"), than flesh-and-blood beings living at the "base level of reality" ("non-sims", i.e., not living inside anyone else's computer).

2. If so, we are much more likely to be sims than non-sims.

One might object in a variety of ways: Can AIs really be conscious? Even if so, how many conscious sims would there likely be? Even if there are lots, maybe somehow we can tell we're not them, etc. Even Bostrom only thinks it 1/3 likely that we're sims. But let's run with the argument. One natural next question is: Why think we are in a large, stable sim?

Advocates of versions of the Sim Argument (e.g., Bostrom, Chalmers, Steinhart) tend to downplay the skeptical consequences: The reader is implicitly or explicitly invited to think or assume that the whole planet Earth (at least) is (probably) all in the same giant sim, and that the sim has (probably) endured for a long time and will endure for a long time to come. But if the Sim Argument relies on some version of Premise 1 above, it's not clear that we can help ourselves to such a non-skeptical view. We need to ask what proportion of the conscious AIs (at least the ones relevantly epistemically indistinguishable from us) live in large, stable sims, and what proportion live in small or unstable sims?

I see no reason here for high levels of optimism. Maybe the best way for the beings at the base level of reality to create a sim is to evolve up billions or quadrillions of conscious entities in giant stable universes. But maybe it's just as easy, just as scientifically useful or fun, to cut and paste, splice and spawn, to run tiny sims of people in little offices reading and writing philosophy for thirty minutes, to run little sims of individual cities for a couple of hours before surprising everyone with Godzilla. It's highly speculative either way, of course! That speculativeness should undermine our confidence about which way it might be.

If we're in a sim, we probably can't know a whole lot about the motivations and computational constraints of the gods at the base level of reality. (Yes, "gods".) Maybe we should guess 50/50 large vs. small? 90/10? 99/1? (One reason to skew toward 99/1 is that if there are very large simulated universes, it will only take a few of them to have the sims inside them vastly outnumber the ones in billions of small universes. On the other hand, they might be very much more expensive to run!)
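The arithmetic behind that worry is easy to make explicit. Here is a toy calculation, with all of the numbers invented purely for illustration: a handful of very large sims can contain most of the simulated people, but if large sims are costly and rare, the small ones dominate instead.

```python
# Toy calculation: what fraction of simulated beings live in large, stable sims?
# Every number below is an arbitrary illustration, not an estimate.

def fraction_in_large_sims(n_large, pop_per_large, n_small, pop_per_small):
    large_pop = n_large * pop_per_large
    small_pop = n_small * pop_per_small
    return large_pop / (large_pop + small_pop)

# A few huge sims can dominate a billion small ones...
print(fraction_in_large_sims(n_large=10, pop_per_large=10**10,
                             n_small=10**9, pop_per_small=10))    # ~0.91

# ...but if large sims are expensive and rare, small sims dominate.
print(fraction_in_large_sims(n_large=1, pop_per_large=10**10,
                             n_small=10**11, pop_per_small=10))   # ~0.01
```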

If you/I/we are in a small sim, then some version of radical skepticism seems to be warranted. The world might be only ten minutes old. The world might end in ten minutes. Only you and your city might exist, or only you in your room.

Musk and others who think we might be in a simulated universe should take their reasoning to the natural next step, and assign some non-trivial credence to the radically skeptical possibility that this is a small or unstable sim.

-----------------------------------------

Related:

"Skepticism, Godzilla, and the Artificial Computerized Many-Branching You" (Nov 15, 2013).

"Our Possible Imminent Divinity" (Jan 2, 2014).

"1% Skepticism" (forthcoming, Nous).

[image source]

Tuesday, May 31, 2016

Percentage of Female Faculty at Elite U.S. Philosophy Departments, 1930-1979

Jonathan Strassfeld has generated some data on philosophers at eleven elite U.S. PhD programs from 1930-1979 (Berkeley, Chicago, Columbia, Cornell, Harvard, Michigan, Pennsylvania, Princeton, Stanford, UCLA, and Yale [note 1]). Carolyn Dicey Jennings made some corrections and did a gender analysis, finding substantial correlations between the percentage of women in those departments in 1930-1979 and the percentage of women and non-White doctorate recipients from those same departments from 2004-2014.

Starting with Strassfeld's version as of May 27 (hand-correcting for a few errors reported by Jennings and correcting a few more errors that I independently found), I decided to chart the percentage of women faculty in these departments over the period in question. (Here are my raw data. Corrections welcome. Data of this sort are rarely 100% perfect. The general trends, however, should be robust enough that a few errors make no material difference.)

I looked at time course by taking a snapshot of the faculty every five years starting in 1930 (ending in 1979 rather than 1980). Here's a chart:

UPDATE 2:04 p.m.: Strassfeld has made some further corrections and created this year-by-year chart:

Highlights:

* The 1930, 1935, 1940, and 1945 snapshots contain exactly zero women faculty (compared to 63-71 men during the period).

* The 1950 and 1955 snapshots contain exactly one woman: Elizabeth Flower at Penn (the universities have 98 and 104 recorded men faculty in those years).

* In 1960, Flower is joined in the dataset by Mary Mothersill at Chicago. The 1965 and 1970 snapshots both show five women (3%) among 156 and 191 total faculty respectively.

* In the late 1970s there's a sudden jump to 16/174 (10%) in 1975 and 18/171 (12%) in 1979.

Thus, despite the presence of some highly influential women philosophers in the early to mid 20th century -- for example, Simone de Beauvoir, G.E.M. Anscombe, and Hannah Arendt -- women held a vanishingly tiny proportion of philosophy faculty positions at elite U.S. universities from the 1930s through the early 1960s, even fewer than one might be inclined to think, in retrospect, upon casual consideration.

Some reference points:

* Using data from the National Center for Education Statistics, I estimated 9% women faculty among full time four-year university faculty in the U.S. in 1988 and 12-20% in the 1990s.

* Jennings and I found about 25% women faculty in PGR-rated U.S. PhD-granting departments in 2014.

I find this a helpful reminder that, for all of the continuing gender disparity in philosophy in the 2010s, things are nonetheless much different from the 1950s. Try to imagine the gender environment that Flower and Mothersill operated in!

------------------------------------------

I am also reminded of this autobiographical reflection from Martha C. Nussbaum, from her 1997 book Cultivating Humanity:

When I arrived at Harvard in 1969, my fellow first-year graduate students and I were taken up to the roof of Widener Library by a well-known philosopher of classics. He told us how many Episcopal Churches could be seen from that vantage point. As a Jew (in fact a convert from Episcopalian Christianity), I knew that my husband and I would have been forbidden to marry in Harvard's Memorial Church, which had just refused to accept a Jewish wedding. As a woman I could not eat in the main dining room of the faculty club, even as a member's guest. Only a few years before, a woman would not have been able to use the undergraduate library. In 1972 I became the first female to hold the Junior Fellowship that relieved certain graduate students from teaching so that they could get on with their research. At that time I received a letter of congratulation from a prestigious classicist saying that it would be difficult to know what to call a female fellow, since "fellowess" was an awkward term. Perhaps the Greek language could solve the problem: since the masculine for "fellow" was hetairos, I could be called a hetaira. Hetaira, however, as I knew, is the ancient Greek word not for "fellowess" but for "courtesan."

------------------------------------------

Note 1 (added 11:47 a.m.): This list is drawn from Strassfeld. He explains his selection thus:

I determined which departments to survey recursively, defining the "leading departments" as those whose graduates comprised the faculties of the leading departments. Focusing on the period of 1945-1969, when universities were growing explosively, I found that there was a group of eleven philosophy departments that essentially only hired graduates from among their own ranks and foreign universities - that it was virtually impossible for graduates of any American philosophy departments outside of this group to gain faculty positions at these "leading departments." Indeed, between 1949-1960, no member of their faculty had received a Ph.D. from an American institution outside of their ranks. There were, of course, border cases. Brown, Rockefeller, MIT, and Pittsburgh in particular might have been included. However, I judged that they did not place enough graduates on the faculties of the other leading universities, particularly during the period 1945-1969, for inclusion. This list also aligns closely with contemporary reputational assessments, with ten of the eleven departments ranking in the top 11 in a 1964 poll (Allan Murray Carter, An Assessment of Quality in Graduate Education).

Also, Strassfeld notes (personal communication) that the list only includes Assistant, Associate, and full Professors, not instructors or lecturers (such as Marjorie Grene, who was an instructor at Chicago in the 1940s).

Thursday, May 26, 2016

Empty Box Rationalization

A hypothetical from Darrell Rowbottom, in conversation: Suppose you are a perfect moral rationalizer. Suppose you know that for any action you want to do, you are clever enough a moral theorist that you could find some plausible-seeming post-hoc justification for it. Would you actually need to come up with the justification? Maybe it's enough just to know in advance that you could come up with one, and not actually do the work?

Think of the savings of time and cognitive effort! Also, since self-serving rationalizations might tend to lead one away from the moral truth, you might be epistemically better off too. With or without an actual filled-in rationalization, you'll be able to feel fine about doing what you want.

Call this Empty Box Rationalization. Why bother to fill the box with an actual rationalization? Simply postulate that a plausible-seeming justification could be found!

Of course, few of us are clever enough moral theorists to take advantage of Empty Box Rationalization without limitation. As skilled as we may happen to be at justifying our actions to ourselves, there will be some actions beyond the pale, which we are incapable of plausibly rationalizing.

However, we might be able to take advantage of Limited Empty Box Rationalization. Limited Empty Box Rationalization differs from full Empty Box Rationalization by confining itself to a range of rationalizable actions. For any action within a certain range, I know that I am clever enough a rationalizer to devise, if I want, some plausible-seeming justification which I would accept upon reflection; and thus I can postulate that such a justification is out there to be found.

Here's an example. Suppose I'm always fifteen minutes late. Every time I show up late, I always manage to find a satisfactory excuse. Sometimes it's traffic. Sometimes it's that I really needed to finish some important task first. Sometimes it's that I got lost. Sometimes it's that I was detained by someone else. I always find some way to let myself off the hook, so that I never feel guilty. Now imagine that today I find myself arriving fifteen minutes late for a meeting with a graduate student. I could, hypothetically, go through the effort of trying to concoct an excuse. But maybe instead of wasting that time, I can just postulate the existence of some plausible excuse or other, so that we can get straight into the meeting without further delay.

(Sure, maybe an actual filled-in excuse from me would serve some kind of function for the other person. I set that aside for these reflections.)

People will differ in their degree of cleverness and thus differ in their working ranges of Limited Empty Box Rationalization. Some will be clever enough reliably to justify 15 minutes of tardiness; others clever enough reliably to justify 30 minutes. Some will be clever enough to justify reneging on wider ranges of commitments, to justify wider ranges of gray-area misconduct, perhaps even to justify, to their own satisfaction, what the rest of us would judge to be plainly morally odious. For one especially skillful example, consider Heidegger on Nazism.

Of course, this isn't fair. If only we were more clever we too could rationalize such actions! Perhaps for any action that I've done or that I'd really like to do, a clever enough moral theorist could, with enough work, come up with some plausible-seeming justification of it that would satisfy me. But then -- maybe that's good enough! If I know that a cleverer version of myself would believe A, then maybe that knowledge itself suffices to justify A, since who am I to disagree with a cleverer version of myself, who could of course get the better of me in argument?

Advanced Empty Box Rationalization begins with that thought. Advanced Empty Box Rationalization widens the range of Limited Empty Box Rationalization beyond the boundaries of one's own actual rationalizing capacities. For some range of actions wider than one's usual range of rationalizable actions, one justifiably accepts that either one could come up with a plausible-seeming justification that one would accept upon reflection, given that one is motivated to do so, or a cleverer version of oneself could devise such a justification. Perhaps as a limiting case one could accept that an infinitely clever version of oneself could hypothetically justify anything in this manner.

Application of these thoughts to current and past scandals in the profession is left as an exercise for the reader.

Related:

Schwitzgebel & Ellis (forthcoming), Rationalization in Moral and Philosophical Thought.

[image source]

    Tuesday, May 17, 2016

    Whether to Take Peter Singer to McDonalds

    Greetings from Hong Kong!

    I'm highly allergic to shellfish. I'm allergic enough that cross-contamination is an issue: If I'm served something that has been fried on the same surface as shellfish or touched with an implement that has touched shellfish, I will have a minor allergic reaction. Shellfish is so prevalent in the southern coastal Chinese diet that I have minor shellfish reactions at about half of my lunch or evening meals, even if I try to be careful. I've learned that there are only two types of restaurants that are entirely safe: strict Buddhist vegetarian restaurants and McDonalds.

    I was discussing this with my hosts at a university here in Hong Kong. One of the hosts said, "Well, we could go to that Buddhist restaurant that we took [the famous vegetarian philosopher] Peter Singer to". Sounds like a good idea to me! Another host said, "Yes, but that restaurant is so expensive! Too bad there isn't another good Buddhist restaurant around." I suggested that McDonalds would be fine, really. I didn't want to force them to spend a lot of money hosting me.

    It occurred to me that they should have taken Peter Singer to McDonalds, too. Singer is as famous for his argument against luxurious spending as he is for his argument in favor of vegetarianism, and one of his favorite examples of needless luxury spending is high-priced restaurant meals. The idea is that the money you spend on a luxurious restaurant meal could be donated to charity and perhaps save the life of a child living in poverty somewhere.

    So here's my thought. Suppose that the two options are (a) an expensive Buddhist restaurant, maybe $300 Hong Kong dollars per person for 10 people, $3000 Hong Kong dollars total ($400 US dollars), or (b) McDonalds for $500 HKD total ($65 US dollars). The money saved by choosing option b, if donated to an effective charity, is within the ballpark of what could be expected to save one person's life [update: or maybe about a tenth of a life; estimates vary]. On the other hand, the flesh from a steer can generate about 2000 McDonald's hamburgers, so ten people would be eating only 1/200 of a steer. Clearly one [or one tenth of a] human life is more valuable than 1/200 of a steer. Therefore, the university should have taken Peter Singer to McDonalds and donated the savings to an effective charity.
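    Spelled out as a back-of-the-envelope calculation, using the rough figures above (and with the caveat that charity-effectiveness estimates vary widely):

```python
# Back-of-the-envelope version of the restaurant argument.
# All figures are the rough ones from the text; they are illustrative only.

HKD_PER_USD = 7.8

buddhist_total_hkd = 300 * 10            # ~$300 HKD per person, 10 diners
mcdonalds_total_hkd = 500

savings_usd = (buddhist_total_hkd - mcdonalds_total_hkd) / HKD_PER_USD
print(f"Savings: about ${savings_usd:.0f} US")   # roughly $320 US, donatable

# Cost in cattle: ~2000 hamburgers per steer, one burger per diner.
steer_fraction = 10 / 2000
print(f"Steer consumed: {steer_fraction}")       # 0.005, i.e. 1/200 of a steer
```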

    Of course, there are other costs to McDonalds (other wasteful practices, environmental damage in meat production, etc.) and possibly other benefits to eating at the Buddhist restaurant (supporting good farming practices, possibly putting the profits to good use) -- but it seems unlikely that these differences would cumulatively outweigh the central tradeoff of the unsaved human life vs. 1/200 of a steer.

    If I ever have the chance to take Singer to dinner, I'd like to try this argument out on him and see what he thinks. (I wouldn't be surprised if he has already thought all of this through.)

    Our own dinner decision resolved in favor of the cheap student vegetarian cafeteria nearby, which I think maybe they had been hesitating about because it didn't seem fancy enough a venue for a visiting speaker. But it was perfect for me -- a rather "utilitarian" place, I might say -- and probably where they really should have taken Singer.

    [image source]

    Wednesday, May 11, 2016

    The Gender Situation Is Different in Philosophy

    As Carolyn Dicey Jennings and I have documented, academic philosophy in the United States is highly gender skewed, with gender ratios more characteristic of engineering and the physical sciences than of the humanities and social sciences. However, unlike engineering and the physical sciences, philosophy appears to have stalled out in its progress toward gender parity.

    Some of the best data on gender in U.S. academia are from the National Science Foundation's Survey of Earned Doctorates (SED). In an earlier post, I analyzed the philosophy data since 1973, creating this graph:

    The quadratic fit (green) is statistically much better than the linear fit (red; AICc-based Akaike weights .996 vs. .004), meaning that it is highly unlikely that the apparent flattening is chance variation from a linear trend.
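    For readers curious how such a comparison runs, here is a minimal sketch of the general method, assuming an AICc comparison of least-squares polynomial fits with Akaike weights (the placeholder data below are not the real numbers, and the actual analysis may differ in detail).

```python
# Minimal sketch: comparing linear vs. quadratic trends with AICc-based
# Akaike weights. The data below are placeholders, not the SED figures.

import numpy as np

years = np.arange(1973, 2015)
pct_women = np.random.default_rng(0).normal(25, 3, size=years.size)  # placeholder

def aicc(y, y_hat, k):
    n = len(y)
    rss = np.sum((y - y_hat) ** 2)
    aic = n * np.log(rss / n) + 2 * k     # Gaussian least-squares AIC (up to a constant)
    return aic + 2 * k * (k + 1) / (n - k - 1)

scores = {}
for name, degree in [("linear", 1), ("quadratic", 2)]:
    coeffs = np.polyfit(years, pct_women, degree)
    scores[name] = aicc(pct_women, np.polyval(coeffs, years), k=degree + 2)

# Akaike weights: relative support for each model given the data.
best = min(scores.values())
raw = {name: np.exp(-(s - best) / 2) for name, s in scores.items()}
total = sum(raw.values())
for name in scores:
    print(name, round(raw[name] / total, 3))
```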

    Since the 1990s, the gender ratio of U.S. PhDs in philosophy has hovered steadily around 25-30%.

    The SED site contains data on gender by broad field, going back to 1979. It is interesting to juxtapose these data with the philosophy data. (The philosophy data are noisier, as you'd expect, due to smaller numbers relative to the SED's broad fields.)

    The overall trend is clear: Although philosophy's percentages are currently similar to the percentages in engineering and physical sciences, the trend in philosophy has flattened out in the 21st century, while engineering and the physical sciences continue to make progress toward gender parity. All the broad areas show roughly linear upward trends, except for the humanities, which appear to have flattened at approximately parity.

    These data speak against two reactions that I have sometimes heard to Carolyn's and my work on gender disparity in philosophy. One reaction is "well, that just shows that philosophy is sociologically more like engineering and the physical sciences than we might have previously thought". Another is "although philosophy has recently stalled in its progress toward gender parity, that is true in lots of other disciplines as well". Neither claim appears to be true.

    [I am leaving for Hong Kong later today, so comment approval might be delayed, but please feel free to post your thoughts and I'll approve them and respond when I can!]

    New Op-Eds on Ethnic Diversity in Philosophy

    A couple very cool op-eds today on ethnic diversity in philosophy:

    Jay L. Garfield and Bryan W. Van Norden in the New York Times:

    If Philosophy Won't Diversify, Let's Call It What It Really Is

    And John E. Drabinski, on his home page, with a mostly supportive but partly critical read of Garfield and Van Norden:

    Diversity, Neutrality, Philosophy

    -------------------------------------------------

    Related posts:

    Philosophy Is Incredibly White, but This Does Not Make It Unusual Among the Humanities (Sep. 3, 2014)

    What's Missing in Philosophy Classes? Chinese Philosophers (Los Angeles Times, Sep. 11, 2015)

    Monday, May 09, 2016

    I Also Doubt That It Is Contingently So

    Vocals: Nomy Arpaly. Guitar: David Estlund.
    Lyrics by Nomy Arpaly:

    It ain't necessarily so
    It ain't necessarily so
    What ethicists say
    Can sound good in a way
    But it ain't necessarily so

    Morality trumps other oughts
    Morality trumps other oughts
    No rational action
    Can be an infraction
    Morality trumps other oughts

    For eudaimonia --
    You get the idea --
    Be virtuous by day and night
    Departures from virtue
    Are all gonna hurt you
    Sometimes I wanna say yeah right

    We always give laws to ourselves
    We always give laws to ourselves
    We lose our potential
    For being agential
    When we break them laws from ourselves

    I say it ain't necessarily so
    It ain't necessarily so
    I'll say it though, frankly
    They'll stare at me blankly
    It ain't necessarily so

    Wednesday, May 04, 2016

    Possible Architectures of Group Minds: Perception

    My favorite animal is the human. My favorite planet is Earth. But it's interesting to think, once in a while, about other possible advanced psychologies.

    Over the course of a few related posts, I'll consider various possible architectures for superhuman group minds. Such minds regularly appear in science fiction -- e.g., Star Trek's Borg and the starships in Ann Leckie's Ancillary series -- but rarely do these fictions make the architecture entirely clear.

    One cool thing about group minds is that they have the potential to be spatially distributed. The Borg can send an away team in a ship. A starship can send the ancillaries of which it is partly composed down to different parts of the planet's surface. We normally think of social groups as having separate minds in separate places, which communicate with each other. But if mentality (instead or also) happens at the group level, then we should probably think of it as a case of a mind with spatially distributed sensory receptors.

    (Elsewhere, I've argued that ordinary human social groups might actually be spatially distributed group minds. We'll come back to that in a future post, I hope.)

    So how might perception work, in a group mind?

    Central Versus Distributed Perceptual Architecture:

    For concreteness, suppose that the group mind is constituted by twenty groups of ten humanoids each, distributed across a planet's surface, in contact via relays through an orbiting ship. (This is similar to Leckie's scenario.)

    If the architecture is highly centralized, it might work like this: Each humanoid aims its eyes (or other sensory organs) toward a sensory target, communicating its full bandwidth of data back up to the ship for processing by the central cognitive system (call it the "brain"). This central brain synthesizes these data as if it had two hundred pairs of eyes across the planet, using information from each pair to inform its understanding of the input from other pairs. For example if the ten humanoids in Squad B are flying in a sphere around an airplane, each viewing the airplane from a different angle, the central brain forms a fully three-dimensional percept of that airplane from all ten viewing angles at once. The central brain might then direct humanoid B2 to turn its eyes to the left because of some input from B3 that makes that viewpoint especially relevant -- something like how when you hear a surprising sound to your left, you spontaneously turn your eyes that direction, swiftly and naturally coordinating your senses.

    Two disadvantages of this architecture are the bandwidth of information flow from the peripheral humanoids to the central brain and the possible delay of response to new information, as messages are sent to the center, processed in light of the full range of information from all sources, and then sent back to the periphery.

    A more distributed architecture puts more of the information processing in the humanoid periphery. Each humanoid might process its sensory input as best it can, engaging in further sensory exploration (e.g., eye movements) in light of only its own local inputs, and then communicate summary results to the others. The central brain might do no processing at all but be only a relay point, bouncing all 200 streaming messages from each humanoid to the others with no modification. The ten humanoids around the airplane might then each have a single perspectival percept of the plane, with no integrated all-around percept.

    Obviously, a variety of compromises are possible here. Some processing might be peripheral and some might be central. Peripheral sources might send both summary information and also high-bandwidth raw information for central processing. Local sensory exploration might depend partly on information from others in the group of ten, from others in the other 19 groups of ten, or from the central brain.

    At the extreme end of central processing, you arguably have just a single large being with lots of sensory organs. At the extreme end of peripheral processing, you might not want to think about the system as a "group mind" at all. The most interesting group-mind-ish cases have both substantial peripheral processing and substantial control of the periphery either by the center or by other nodes in the periphery, with a wide variety of ways in which this might be done.

    Perceptual Integration and Autonomy:

    I've already suggested one high integration case: having a single spherical percept of an airplane, arising from ten surrounding points of view upon it. The corresponding low integration case is ten different perspectival percepts, one for each of the viewing humanoids. In the first case, there's a single coherent perceptual map that smoothly integrates all the perceptual inputs; in the second case each humanoid has its own distinct map (perhaps influenced by knowledge of the others' maps).

    This difference is especially interesting in cases of perceptual conflict. Consider an olfactory case: The ten humanoids in Squad B step into a meadow of uniform-looking flowers. Eight register olfactory input characteristic of roses. Two register olfactory input characteristic of daffodils. What to do?

    Central dictatorship: All ten send their information to the central brain. The central brain, based on all of the input, plus its background knowledge and other sorts of information, makes a decision. Maybe it decides roses. Maybe it decides daffodils. Maybe it decides that there's a mix of roses and daffodils. Maybe it decides it is uncertain, and the field is 80% likely to be roses and 20% likely to be daffodils. Whatever. It then communicates this result to each of the humanoids, who adopt it as their own local action-guiding representation of the state of the field. For example, if the central brain says "roses", the two humanoids registering daffodil-like input nonetheless represent the field as roses, with no more ambivalence about it than any of the other humanoids.

    Winner-take-all vote: There need be no central dictatorship. Eight humanoids might vote roses versus two voting daffodils. Roses wins, and this result becomes equally the representation of all.

    Compromise vote: Eight versus two. The resulting shared representation is either a mix of the two flowers, with roses dominating, or some feeling of uncertainty about whether the field is roses (probably) or instead daffodils (possible but less likely).

    Retention of local differences: Alternatively, each individual humanoid might retain its own locally formed opinion or representation even after receiving input from the group. A daffodil-smeller might then have a representation something like this: To me it smells like daffodils, even though I know that the group representation is roses. How this informs that humanoid's future action might vary. On a more autonomous structure, that humanoid might behave like a daffodil smeller (maybe saying, "Ah, it's daffodils, you guys! I'm picking this one to take back to the daffodil-loving Queen of Mars") or it might be more deferential to the group (maybe saying, "I know my own input suggests daffodils, but I give that input no more weight than I would give to the input of any other member of the group").

    Finally, no peripheral representation at all: An extremely centralized system might involve no perceptual representations at all in the humanoids, with all behavior issuing directly from the center.
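    To make the contrast among these options concrete, here is an illustrative sketch (names and numbers invented) of the compromise-vote option combined with retention of local differences: each humanoid keeps its own percept, while the shared group representation becomes a weighted mixture of the votes.

```python
# Illustrative sketch: pooling conflicting smell percepts into a group
# representation while each humanoid retains its own local percept.
# Names and numbers are invented for the example.

from collections import Counter

local_percepts = {f"B{i}": "roses" for i in range(1, 9)}      # B1-B8 smell roses
local_percepts.update({"B9": "daffodils", "B10": "daffodils"})

def compromise_vote(percepts):
    counts = Counter(percepts.values())
    total = sum(counts.values())
    return {flower: n / total for flower, n in counts.items()}

group_percept = compromise_vote(local_percepts)
print(group_percept)    # {'roses': 0.8, 'daffodils': 0.2}

# Retention of local differences: B9 still registers daffodils while also
# carrying the group's 80/20 verdict.
b9_state = {"own_percept": local_percepts["B9"], "group_percept": group_percept}
print(b9_state)
```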

    Conceptual Versus Perceptual:

    There's an intuitive distinction between knowing something conceptually or abstractly and having a perceptual experience of that thing. This is especially vivid in cases of known illusion. Looking at the Müller-Lyer illusion you know (conceptually) that the two lines minus the tails are the same length, but that's not how you (perceptually) see it.

    The conceptual/perceptual distinction can cross-cut most of the architectural possibilities. For example, the minority daffodil smeller might perceptually experience the daffodils but conceptually know that the group judgment is roses. Alternatively, the minority daffodil smeller might conceptually know that her own input is daffodils but perceptually experience roses.

    Counting Streams of Experience:

    If the group is literally phenomenally conscious at the group level, then there might be 201 streams of experience (one for each humanoid, plus one for the group); or there might be only one stream of experience (for the group); or streams of experience might not be cleanly individuated, with 200 semi-independent streams; or something else besides.

    The dictatorship, etc., options can apply to the group-level stream, as well as to the humanoid-level streams, perhaps with different results. For example, the group stream of consciousness might be determined by compromise vote (80% roses), while the humanoid streams of experience retain their local differences (some roses, some daffodils).

    To Come:

    Similar issues arise for group level memory, goal-setting, inferential reasoning, and behavior. I'll work through some of these in future posts.

    I also want to think about the moral status of the group and the individuals, under different architectural setups -- that is, what sorts of rights or respect or consideration we owe to the individuals vs. the group, and how that might vary depending on the set-up.

    ------------------------------------------------

    Related:

    Possible Psychology of a Matrioshka Brain (Oct. 9, 2014).

    If Materialism Is True, the United States Is Probably Conscious (Philosophical Studies 2015).

    [image source, image source]

    Wednesday, April 27, 2016

    Orange on the Seder Plate

    ... and celebrating the death of children?

    "Does it matter if the story of the escape from Egypt is historically true?" Rabbi Suzanne Singer asked us, her congregants, on Saturday, at the Passover Seder dinner at Temple Beth El in Riverside.

    We're a liberal Reform Judaism congregation. Everyone except me seemed to be shaking their heads, no, it doesn't matter. I was nodding, however. Yes, it does matter.

    Rabbi Singer walked over to me with the microphone, "Okay, Eric, why does it matter?"

    I say "we" are a Reform Judaism congregation, but let me be clear: I am not Jewish. My wife Pauline is. My teenage son Davy is. Davy even teaches at the religious school. My nine-year-old daughter Kate, adopted from China at age one, recently described herself as "half Jewish". We're members. We volunteer, attend some of the services. Sometimes I try to chant the chants, sometimes I don't. I always feel a little... ambiguous.

    I hadn't been expecting to speak. I came out with some version of the following thought. If the story of Passover is literally true, then there's a miracle-working God. And it would matter if there were such a God. I don't think I would like the moral character of that God, a God who kills so many innocent Egyptians. I'm glad it's not literally true. It matters.

    I find it interesting, I added, that we ("we"?) have this celebratory holiday about the death of children, contrary to the values of most of us now. It's interesting how we struggle to deal with that change in values while keeping the traditions of the holiday.

    Passover, as you probably know, celebrates a story from Exodus. The Jews are slaves in Egypt. Moses and Aaron approach the Pharaoh and demand the release of their people. The Pharaoh refuses and God sends disaster after disaster upon the Egyptians. In the tenth and final plague, God sweeps through Egypt killing the firstborn son in every house, except the houses marked with the lamb's blood of the Jewish "Passover" sacrifice. In the traditional Haggadahs (i.e. scripts of how the ceremony is to be conducted), God's destruction of the Egyptians seems to be enthusiastically relished, the general tone being one of overflowing celebration for all the good things God (or G-d) has bestowed upon us: He didn't need to plague and torment our enemies and kill their firstborns, but he did, hooray!

    (One does remove a bit of wine from one's glass for each of the ten plagues, which has been explained to me as reducing one's joy to recognize the Egyptians' suffering; but not all traditional haggadahs offer that explanation and the overall tone is cheery about the plagues.)

    Temple Beth El uses a Reconstructionist Haggadah which is more reflective about the Egyptians' suffering and emphasizes the plight of the enslaved and oppressed everywhere throughout world history. The holiday is no longer understood as it once was. But still, we sing the happy songs.

    Others in Temple Beth El spoke up in response to my comment: values change, ancient war sadly and necessarily involved the death of children too, we're really celebrating the struggle for freedom.... The rabbi asked if this answered my question, or if I had anything more to say. Davy whispered, "Dad, you don't have anything more to say." I took his cue and shut my trap.

    The caterers arrived late. I was pleased to see that they put oranges upon the Seder plates this year. (It seems to be on and off in our congregation.) The traditional Seder plate has no orange: two bitter herbs (for the bitterness of slavery), charoset (sweet fruit and nuts as mortar for the storehouses of Egypt), parsley (dipped into salt water representing the tears of slavery), a roasted lamb bone (for the Passover sacrifice), and a hard boiled egg.

    The first time I saw an orange on the Seder plate, I was told this story about it: A woman was studying to become a rabbi. An orthodox rabbi told her that a woman belongs on the bima (pulpit) like an orange belongs on the Seder plate! When she became a rabbi, she put an orange on the plate.

    A wonderful story! The orange on the Seder plate is wild, defiant, overturning the rules, the beginning of a new tradition to celebrate gender equality.

    Does it matter if it's true?

    The true story is more complicated. Dartmouth Jewish Studies professor Susannah Heschel was speaking to a Jewish group at Oberlin College. The students had written a story in which a young girl asks a rabbi what room there is for lesbians in Judaism, and the rabbi rises in anger, shouting, "There's as much room for a lesbian in Judaism as there is for a crust of bread on the Seder plate!" Heschel, inspired by the story, but not wanting to put anything as unkosher as leavened bread on the Seder plate, put an orange on her family's Seder plate the next year.

    In the second story, the orange is not a wild act of defiance but already a compromise. The shouting rabbi is not an actual person but only an imagined, simplified foe.

    It matters that it's not true. From the two stories of the orange, we learn what I regard as the central lesson of Reform Judaism, that myths are cultural inventions built to suit the values of their day, idealizations and simplifications, that they change as our values change, but also that there's only so much change that is possible in a tradition-governed institution, which is necessarily a compromise between past and present. An orange can be considered, but not a crust of bread.

    My daughter and I -- active in the temple but not quite Jewish, we too are oranges on the Seder plate, a new sort of thing in a congregation, without a marked place, welcomed this year, unsure how much we belong or want to belong, at risk of rolling off.

    In the car on the way home, my son scolded me: "How could you have said that, Dad? There are people in the congregation who take the Torah literally, very seriously. You should have seen how they were looking at you, with so much anger. If you'd said more, they would practically have been ready to lynch you."

    Due to the seating arrangement, I had been facing away from most of the congregation, while Davy had been facing toward most of the congregation. I didn't see those faces. Was Davy telling me the truth on the way home that night? Or was he creating a simplified myth of me?

    Today I celebrate the orange, that unstable mix of truth and myth, tradition and change.

    Thursday, April 21, 2016

    Possible Cognitive and Cultural Effects of Video Lifelogging

    Last week science fiction writer Ted Chiang came to Riverside to talk about the possible cognitive effects of video lifelogging. He also explores these issues in his 2013 story The Truth of Fact, the Truth of Feeling. It was an interesting talk! Chiang focused, as he also does in the story, on the transformative effects video lifelogging might have on our memories, including the possible decline of our ability to remember life events unaided if we can instead just easily call up results from a lifelog.

    Lifelogging is a movement aimed at recording and monitoring the details of your everyday life. Video lifelogging, which is just starting to become feasible, involves video-recording every moment of your waking day.

    Good search technology will be crucial. Imagine subvocalizing "the name of that book Emmeline[1] recommended at dinner last week" or "that time Taimur cracked a raw egg on his head", then having the relevant audiovisual results show up on your palm or your Google glass. The eventual effects on our minds, Chiang suggests, would be comparable to the transformative effects of literacy. In his talk, Chiang emphasized the decline of unaided memory and the ability to hold ourselves to higher standards of truthfulness about past deeds (what did you really say in that argument last week?).

    Just as early Chinese calligraphers could not have predicted quantitative textual analysis or the internet, I think we can assume that if video lifelogging is integrated deeply into our daily lives, it will change us in ways we can't fully anticipate. I'd like to suggest two possible effects that Chiang didn't mention.

    First: It is much easier to record audio and video than other sensory modalities. The recording and recreation of taste, smell, touch, and somatic sensation are much more speculative and remote. Most people already tend to privilege sight and hearing, but lifelogging could amplify that dominance -- perhaps so much that the other senses almost seem like a forgettable, buzzing distraction. Your memories of sex, for example, might focus much more on the audiovisual parts of the experience, if those are what you can easily revisit and recall (especially with decreases in unaided memory, as Chiang suggests would be likely) -- and that in turn might lead you to focus more on those senses than on others in your future encounters, which in turn might substantially alter the cultural structures and expectations around sex. Similarly, perhaps, for the pleasures of eating.

    Second: Chiang had explicitly set aside privacy issues, and I will also do so (maybe Cory Doctorow will address these when he comes next fall), but intentional sharing raises interesting possibilities, especially if it's possible in real-time. Suppose we can't all afford to go to the concert -- but if we pool our funds, you can go, and we can all watch your lifelog in real-time (perhaps in immersive virtual reality), which will then be saved in our lifelogs. If our cognition and culture have shifted more toward the audiovisual, then it might seem closer to actually being there than it would seem to people now, and if our autobiographical memories have become dominated by lifelog results, then later it might feel more like a real memory of having been there than an analogous experience would seem to people now. Pushed to the extreme, an emphasis on shared real-time and remembered experiences might begin to blur the boundaries of the experienced self, including reducing how much we care about whether it was our own bodies that did something or someone else's.

    Just for starters.

    [image source]

    Friday, April 15, 2016

    New Essay in Draft: Phenomenal Consciousness, Defined and Defended as Innocently as I Can Manage

    Commentary on Keith Frankish (forthcoming), "Illusionism as a Theory of Consciousness".

    Keith's paper doesn't appear to be publicly available yet, but you can get a general sense of his view from his 2012 paper Quining Diet Qualia; and in any case I've written the essay to be comprehensible without prior knowledge of Frankish's work.

    Abstract:
    Phenomenal consciousness can be conceptualized innocently enough that its existence should be accepted even by philosophers who wish to avoid dubious epistemic and metaphysical commitments such as dualism, infallibilism, privacy, inexplicability, or intrinsic simplicity. Definition by example allows us this innocence. Positive examples include sensory experiences, imagery experiences, vivid emotions, and dreams. Negative examples include growth hormone release, dispositional knowledge, standing intentions, and sensory reactivity to masked visual displays. Phenomenal consciousness is the most folk psychologically obvious thing or feature that the positive examples possess and that the negative examples lack, and which preserves our ability to wonder, at least temporarily, about antecedently unclear issues such as consciousness without attention and consciousness in simpler animals. As long as this concept is not empty, or broken, or a hodgepodge, we can be phenomenal realists without committing to dubious philosophical positions.

    This paper further develops ideas from my similarly titled blog post on Feb 18. Many thanks for the helpful comments on that post!

    Full paper here. As always, questions, comments, and objections are welcome, either as comments on this post or by email.

    Thursday, April 14, 2016

    Paraphenomenal Experience: Conscious Experience Uncorrelated with Cognition and Behavior

    My student Alan T. Moore defends his dissertation today. (Good thing we proved he exists!) One striking idea from his dissertation is that much of our consciousness might be, in his terminology, paraphenomenal. A conscious experience is paraphenomenal to the extent it is uncorrelated with cognitive and behavioral processes. (That's my own tweaking of his formulation, not quite how Alan phrases it himself.)

    Complete paraphenomenality is so bizarre a skeptical possibility that I'm unaware of any philosopher who has seriously contemplated it. (It seems likely, though, that someone has, so I welcome references!) Complete paraphenomenality would mean having a stream of experience that was entirely uncorrelated with any functional, cognitive, or sensory input and entirely uncorrelated with any functional, cognitive, or behavioral output (including introspective self-report). Imagine laying the stream of William James's conscious experience atop the behavior and cognitive life of Moctezuma II, or atop a stone -- simply no relationship between the non-phenomenal aspects of one's cognitive life and one's outward behavior (or lack of it) and the stream of lived experience. Or imagine taking the philosophical zombie scenario and, instead of denying the zombies any experience, randomly scrambling which body has which set of experiences, while holding all the physical and behavioral stuff constant in each body.

    Paraphenomenal is not the same as epiphenomenal. Epiphenomenalism about consciousness is the view that conscious experience has no causal influence, the view that consciousness is a causal dead-end. But most epiphenomenalists believe, indeed emphasize, that conscious experience still correlates with causally efficacious brain processes. On paraphenomenalism, in contrast, there aren't even correlations.

    Complete paraphenomenalism is about as implausible a philosophical view as one is likely to find. However, partial paraphenomenalism has some plausibility as an interpretation of recent empirical evidence, from Moore and others. Partial paraphenomenalism is the view that the correlations between conscious experiences and cognitive processes are weaker and more limited than one might otherwise expect -- that, for example, presence or absence of the conscious experience of visual imagery is largely irrelevant to performance on the types of cognitive tasks that are ordinarily thought to be aided by imagery. If so, this would be one way to explain empirical results suggesting that self-reported visual imagery abilities are largely uncorrelated with performance on "imagery" tasks like mental rotation and mental folding. (See my discussion here and in Ch. 3 of my 2011 book.)

    Especially striking to me are the vast differences in the experiences that people report in Russell T. Hurlburt's Descriptive Experience Sampling (e.g., here, here, here, here). Hurlburt "beeps" people at random moments throughout their day. When the beep sounds, their task is to recall their last moment of experience immediately before the beep. Hurlburt then later interviews them about details of the targeted experience. Some of Hurlburt's participants report conscious sensory experiences in almost all of their samples, while others almost never report sensory experiences. Some of Hurlburt's participants report inner speech in almost all of their samples, while others almost never report inner speech. Similarly for emotional feelings, imageless or "unsymbolized" thinking, and visual imagery -- some participants report these things in almost every sample, others never or almost never. Huge, huge differences in the general reported arc of experience! When functional cognitive capacities vary that much between people, it's immediately obvious (e.g., blind people vs. sighted people). But no such radical differences are evident among most of Hurlburt's participants. Participants often even surprise themselves. For example, it's not uncommon for people to initially say, before starting the sampling process, that they experience a stream of constant inner speech, but then report few or no instances of it when actually sampled.

    In his dissertation, Moore finds very large differences in people's reported experiences while reading (some of the preliminary data were reported here), but those reported experiential differences don't seem to predict performance in plausibly related cognitive tasks like recall of visual details from the story (for people reporting high visual imagery), rhyme disambiguation (for people reporting hearing the text in inner speech), or recall of details of the visual layout of the text (for people reporting visually experiencing the words on the page in front of them).

    When faced with radical differences in experiential report that are largely uncorrelated with the expected outward behavior or cognitive skills, we seem to have three interpretative choices:

    1. We could decide that the assumed functional relationship shouldn't have been expected in the first place. For example, in the imagery literature, some researchers decided that it was a mistake to have expected mental rotation ability to correlate with conscious visual imagery. Conscious visual imagery plays an important causal functional role in cognition, just not that role.

    2. We could challenge the accuracy of the subjective reports. This has tended to be my approach in the past. Maybe people who deny having visual sensory experience of the scene before them in Hurlburt's and Moore's data really do have sensory experience but either forget that experience or fail to understand exactly what is being asked.

    3. We could adopt partial paraphenomenalism about the experience. Maybe people really are radically different in their streams of experience while reading, or while going about their daily life, but those differences have little systematic relationship to the remainder of their cognition or behavior (apart from their ability to generate reports). I wouldn't initially have been much attracted to this idea, but I now think it's an important option to keep explicitly on the table. Alan Moore's dissertation builds an interesting case!

    [image source]

    Friday, April 08, 2016

    Awesome New SF Story about the Problem of Consciousness and Scientific Rationalization

    by University of Michigan philosopher and science fiction writer David John Baker, in leading SF podcast Escape Pod:

    The Hunter Captain.

    ------------------

    This is exactly the kind of interaction between philosophy and speculative fiction that I would like to see more of. The philosophical issues drive the story, giving the story depth and interest; at the same time, the vivid character and narrative in the story bring the philosophical issue to life.

    [See also: Philosophical SF: Recommendations from 41 Philosophers]

    Thursday, April 07, 2016

    Some Pragmatic Considerations Against Intellectualism about Belief

    Consider cases in which a person sincerely endorses some proposition ("women are just as smart as men", "family is more important than work", "the working poor deserve as much respect as the financially well off"), but often behaves in ways that fail to fit with that sincerely endorsed proposition (typically treats individual women as dumb, consistently prioritizes work time over family, sees nothing wrong in his or others' disrespectful behavior toward the working poor). Call such cases "dissonant cases" of belief. Intellectualism is the view that in dissonant cases the person genuinely believes the sincerely endorsed proposition, even if she fails to live accordingly. Broad-based views, in contrast, treat belief as a matter of how you steer your way through the world generally.

    Dissonant cases of belief are, I think, "antecedently unclear cases" of the sort I discussed in this post on pragmatic metaphysics. The philosophical concept of belief is sufficiently vague or open-textured that we can choose whether to embrace an account of belief that counts dissonant cases as cases of belief, as intellectualism would do, or whether instead to embrace an account that counts them as cases of failure to believe or as in-between cases that aren't quite classifiable either as believing or as failing to believe.

    I offer the following pragmatic grounds for rejecting intellectualism in favor of a broad-based view. My argument has a trunk and three branches.

    --------------------------------------------

    The trunk argument.

    Belief is one of the most central and important concepts in all of philosophy. It is central to philosophy of mind: Belief is the most commonly discussed of the "propositional attitudes". It is central to philosophy of action, where it's standard to regard actions as arising from the interaction of beliefs, desires, and intentions. It is central to epistemology, much of which concerns the conditions under which beliefs are justified or count as knowledge. A concept this important to philosophical thinking should be reserved for the most important thing in the vicinity that can plausibly answer to it. The most important thing in the vicinity is not our patterns of intellectual endorsement. It is our overall patterns of action and reaction. What we say matters, but what we do in general, how we live our lives through the world -- that matters even more.

    Consider a case of implicit classism. Daniel, for example, sincerely says that the working poor deserve equal respect, but in fact for the most part he treats them disrespectfully and doesn't find it jarring when others do so. If we, as philosophers, choose to describe Daniel as believing what he intellectually endorses, then we implicitly convey the idea that Daniel's patterns of intellectual endorsement are what matter most to philosophy: Daniel has the attitude that stands at the center of so much of epistemology, philosophy of action, and philosophy of mind. If we instead describe Daniel as mixed-up, as in-betweenish, or even as failing to believe what he intellectually endorses, we do not implicitly convey that intellectualist idea.

    Branch 1.

    Too intellectualist a view invites us to adopt noxiously comfortable opinions about ourselves. Suppose our implicit classist Daniel asks himself, "Do I believe that the working poor deserve equal respect?" He notices that he is inclined sincerely to judge that they deserve equal respect. Embracing intellectualism about belief, he concludes that he does believe they deserve equal respect. He can say to himself, then, that he has the attitude that philosophers care about most -- belief. Maybe he lacks something else. He lacks "alief" maybe, or the right habits, or something. But based on how philosophers usually talk, you'd think that's kind of secondary. Daniel can comfortably assume that he has the most important thing straightened out. But of course he doesn't.

    Intellectualist philosophers can deny that Daniel does have the most important thing straightened out. They can say that how Daniel treats people matters more than what he intellectually endorses. But if so, their choice of language mismatches their priorities. If they want to say that the central issue of concern in philosophy is, or should be, how you act in general, then the most effective way to encourage others to join them in that thought is to build the importance of one's general patterns of action right into the foundational terms of the discipline.

    Branch 2.

    Too intellectualist a view hides our splintering dispositions. Here's another, maybe deeper, reason Daniel might find himself too comfortable: He might not even think to look at his overall patterns of behavior in evaluating what his attitude is toward the working poor. In Branch 1, I assumed that Daniel knew that his spontaneous reactions were out of line, and he only devalued those spontaneous reactions, not thinking of them as central to the question of whether he believed. But how would he come to know that his spontaneous reactions are out of line? If he's a somewhat reflective, self-critical person, he might just happen to notice that fact about himself. But an intellectualist view of the attitudes doesn’t encourage him to notice that about himself. It encourages Daniel, instead, to determine what his belief is by introspection of or reflection upon what he is disposed to sincerely say or accept.

    In contrast, a broad-based view of belief encourages Daniel to cast his eye more widely in thinking about what he believes. In doing so, he might learn something important. The broad-based approach brings our non-intellectual side forward into view while the intellectualist approach tends to hide that non-intellectual side. Or at least it does so to the extent we are talking specifically about belief -- which is of course a large part of what philosophers do in fact actually talk about in philosophy of mind, philosophy of action, and epistemology.

    Another way in which intellectualism hides our splintering dispositions is this: Suppose Suleyma has the same intellectual inclinations as Daniel but unlike Daniel her whole dispositional structure is egalitarian. She really does, and quite thoroughly, have as much respect for the custodian as for the wealthy business-owner. An intellectualist approach treats Daniel and Suleyma as the same in any domain where what matters is what one believes. They both count as believers, so now let's talk about how belief couples with desire to beget intentions, let's talk about whether their beliefs are justified, let's talk about what set of worlds makes their beliefs true -- for all these purposes, they are modeled in the same way. The difference between them is obscured, unless additional effort is made to bring it to light.

    You might think Daniel's and Suleyma's differences don't matter too much. They're worth hiding or eliding away or disregarding unless for some reason those differences become important. If that's your view, then an intellectualist approach to belief is for you. If, on the other hand, you think their differences are crucially important in a way that ought to disallow treating them as equivalent in matters of belief, then an intellectualist view is not for you. Of course, the differences matter for some purposes and not so much for other purposes. The question is whether on balance it's better to put those differences in the foreground or to tuck them away as a nuance.

    Branch 3.

    Too intellectualist a view risks downgrading our responsibility. It's a common idea in philosophy that we are responsible for our beliefs. We don't choose our beliefs in any straightforward way, but if our beliefs don't align with the best evidence available to us we are epistemically blameworthy for that failure of alignment. In contrast, our habits, spontaneous reactions, that sort of thing -- those are not in our control, at least not directly, and we are less blameworthy for them. My true self, my "real" attitude, the being I most fundamentally am, the locus of my freedom and responsibility -- that's constituted by the aspects of myself that I consciously endorse upon reflection. You can see how the intellectualist view of belief fits nicely with this.

    I think that view is almost exactly backwards. Our intellectual endorsements, when they don't align with our lived behavior, count for little. They still count for something, but what matters more is how we spontaneously live our way through the world, how we actually treat the people we are with, the actual practical choices we make. That is the "real" us. And if Daniel says, however sincerely, that he is an egalitarian, but he doesn't live that way, I don't want to call him a straight-up egalitarian. I don't want to excuse him by saying that his inegalitarian reactions are mere uncontrollable habit and not the real him. It's easy to talk. It's hard to change your life. I don't want to let you off the hook for it in that way, and I don't want to let myself off the hook. I don't want to say that I really believe and I am somehow kind of alienated from all my unlovely habits and reactions. It's more appropriately condemnatory to say that my attitude, my belief state, is actually pretty mixed up.

    It's hard to live up to all the wonderful values and aspirations we intellectually endorse. I am stunned by the breadth and diversity of our failures. What we sincerely say we believe about ourselves and the people around us and how we actually spontaneously react to people and what we actually choose and do -- so often they are so far out of line with each other! So I think we've got to have quite a lot of forgiveness and sympathy for our failures. My empirical, normative, pragmatic conjecture is this: In an appropriate context of forgiveness and sympathy, the best way to frankly confront our regular failure to live up to our verbally espoused attitudes is to avoid placing intellectual endorsements too close to the center of philosophy.

    [image source]

    Tuesday, March 29, 2016

    Introspective Attention as Perception with a Twist

    Tomorrow I'm off to the Pacific APA in San Francisco. Thursday 4-6 I'm commenting on Wayne Wu's "Introspection as Attention and Action". This post is adapted from those comments.

    (Also Wed 6 pm I'm presenting my paper "A Pragmatic Approach to the Metaphysics of Belief" and Sat 6-9 I'm critic in an author-meets-critics on Jay Garfield's Engaging Buddhism. Feel free to stop by!)

    -------------------------------------------

    What does introspective attention to one's perceptual experiences amount to? As I look at my desk, I can attend to or think about my ongoing sensory experiences, reaching judgments about their quality or character. For example, I'm visually experiencing a brownish, slightly complicated oblong shape in my visual periphery (which I know to be my hat). I'm having auditory experience of my coffee-maker spitting and bubbling as it brews the pot. What exactly am I doing, in the process of reaching these judgments?

    One thing that I'm not doing, according to Wayne Wu, is launching a new process, distinct from the perceptual processes of seeing the hat and hearing the coffee-maker, which turns an "attentional spotlight" upon those perceptual processes. Introspective attention, Wu argues -- and I agree -- is a matter of applying phenomenal concepts in the course of ordinary perceiving, with the goal of arriving at a judgment about your perceptual experience -- doing so in a way that isn't radically different from the way in which, according to your goals, you can aim to categorize the things you perceive along any of several different dimensions.

    I hope the following is a Wu-friendly way of developing the idea. Suppose you're looking at a coffee mug. You can do so with any of a variety of perceptual goals in mind. You can look at it with the aim of noticing details about its color -- its precise shade and how consistent or variable its color is across its face. You can look at it with the aim of noticing details of its shape. You can look at it with the aim of thinking about how it could effectively be used as a weapon. You can look at it with a critical aesthetic eye. Each of these aims is a way of attending differently to the mug, resulting in judgments that employ different conceptual categories.

    You can also look at the mug with an introspective aim, or rather with one of several introspective aims. You can look at the mug with the aim of reaching conclusions about what your visual experience is as you look at it, rather than with the aim of reaching conclusions about the physical mug itself. You might be interested in noticing your experience of the mug's color, possibly distinct from the mug's real color. According to Wu, this is not a process radically different from ordinary acts of perception. Introspecting your visual experience of the color or shape of the mug is not a two-stage process that consists of first perceiving the mug and then, second, introspecting the experiences that arise in the course of that perceptual act.

    The approach Wu and I favor is especially attractive in thinking about what the early introspective psychologists E.B. Titchener and E.G. Boring called "R-error" or "stimulus error". Imagine that you're lying on your stomach and an experimenter is gently poking the bare skin of your back with either one or two toothpicks. You might have one of two different tasks. An objective task would be to guess whether you are being poked by only one toothpick or instead by both toothpicks at once. You answer with either "one" or "two". It's easy to tell that you're being poked by two if the toothpicks are several centimeters apart, but once they are brought close enough together, the task becomes difficult. Two toothpicks placed within a centimeter of each other on your back are likely to feel like only a single toothpick. In the objective task, what you are guessing is how many toothpicks are actually touching you, in reality.

    An introspective task might be very similar but with a crucial difference: You are asked to report whether you have tactile experience of two separate regions of pressure or only one region. Again you might answer "one" or "two". This is of course not the same as the objective task. You're reporting not on facts about the toothpicks but rather on facts about your experience of the toothpicks. The objective task and the introspective task have different truth conditions. For example, if two toothpicks are pressed to your back only 6 millimeters apart and you say "one", you've given the wrong answer if your task is objective but quite possibly the right answer if your task is introspective.

    [Edwin G. Boring in 1961]

    Here's the thing that Titchener and Boring noticed, which they repeatedly warn against in their discussions of introspective methodology: People very easily slide back and forth between the introspective task and the objective task. It's not entirely natural to keep them distinct in your mind over the course of a long series of stimuli. You might be assigned the introspective task, for example, and start saying "one", "one", "two", "one", "two", "two", "two" -- at first your intentions are clearly introspective, but then by the thirtieth trial you have slipped into the more familiar objective way of responding and you're just guessing how many toothpicks there actually are, rather than reporting introspectively. If you've slipped from the introspective to the objective mode of reporting, you've committed what Titchener and Boring call stimulus error.

    For Wu's and my view, the crucial point is this: It's very easy to shift unintentionally between the two ways of deploying your perceptual faculties. In fact I'm inclined to think -- I don't know if Wu would agree with me about this -- that for substantial stretches of the experiment your intentions might be vague enough that there's no determinate fact about the content of your utterances. Is your "one" really a report about your experience or a report about the world outside? It might be kind of muddy, kind of in-between. You're just rolling along, not giving much thought to the distinction. What distinguishes the introspective judgment from the perceptual judgment in this case is a relatively minor matter of your background intentions in making your report.

    Introspection of perceptual experience is perception with a twist, with an aim slightly different from the usual aim of reporting on the outside world. That's the idea. It's not a distinct cognitive process that begins after the perceptual act ends, ontologically distinct from the perceptual act and taking the perceptual act as its target.

    When you know that your experience might be misleading, the difference can matter to your reporting. For example, if you know that you're pretty bad at detecting two toothpicks when they're close together, and you have reason to think that lots of the trials will have toothpicks close together, then if your focus is on objective reporting, you might say: "Well, 50-50 -- might be one, might be two for all I know". For introspective reporting, in contrast, you might say something like "Sure feels like one, though I know you might well actually be touching me with two".

    In visual experience, noticing blurriness is similar. Take off your glasses or cross your eyes. You know enough about the world to know that your coffee mug hasn't become objectively vague-edged and blurry. So you attribute the blurriness to your experience. This is a matter of seeing the world and reaching judgments about your experience in the process. You reach an experiential judgment rather than, or in addition to, an objective judgment just because of certain background facts about your cognition. Imagine someone so naive and benighted as to think that maybe actual real-world coffee mugs do in fact become vague-bordered and blurry-edged when she takes off her glasses. It seems conceivable that we could so bizarrely structure someone's environment that she actually came to adulthood thinking this. That person might then not know whether to attribute the blurriness to the outward object or to her experience of the object. It's a similar perceptual act of looking at the mug, but given different background concepts and assumptions, in one case she reaches an introspective attribution while in the other case she reaches an objective attribution.

    [image source]

    -------------------------------------------

    Related:

  • "Introspection, What?" (2012), in D. Smithies and D. Stoljar, eds., Introspection and Consciousness (Oxford). My own positive account of the nature of introspection.
  • "Introspection". My Stanford Encyclopedia review article on theories of introspection.
  • "The Problem of Known Illusion and the Resemblance of Experience to Reality" (2014), Philosophy of Science 81, 954-960. Puzzlement and possible metaphysical juice, arising from reflections on the weird relation between objective and introspective reporting.