Tuesday, December 27, 2016

A few days ago, Skye Cleary interviewed me for the Blog of the APA. I love her direct and sometimes whimsical questions.

--------------------------

SC: What excites you about philosophy?

ES: I love philosophy’s power to undercut dogmatism and certainty, to challenge what you thought you knew about yourself and the world, to induce wonder, and to open up new vistas of possibility.

SC: What are you working on right now?

ES: About 15 things. Foremost in my mind at this instant: “Settling for Moral Mediocrity” and a series of essays on “crazy” metaphysical possibilities that we aren’t in a good epistemic position to confidently reject....

[It's a brief interview -- only six more short questions.]

Read the rest here.

Wednesday, December 21, 2016

Is Most of the Intelligence in the Universe Non-Conscious AI?

In a series of fascinating recent articles, philosopher Susan Schneider argues that

(1.) Most of the intelligent beings in the universe might be Artificial Intelligences rather than biological life forms.

(2.) These AIs might entirely lack conscious experiences.

Schneider's argument for (1) is simple and plausible: Once a species develops sufficient intelligence to create Artificial General Intelligence (as human beings appear to be on the cusp of doing), biological life forms are likely to be outcompeted, due to AGI's probable advantages in processing speed, durability, repairability, and environmental tolerance (including deep space). I'm inclined to agree. For a catastrophic perspective on this issue, see Nick Bostrom. For a Pollyannaish perspective, see Ray Kurzweil.

The argument for (2) is trickier, partly because we don't yet have a consensus theory of consciousness. Here's how Schneider expresses the central argument in her recent Nautilus article:

Further, it may be more efficient for a self-improving superintelligence to eliminate consciousness. Think about how consciousness works in the human case. Only a small percentage of human mental processing is accessible to the conscious mind. Consciousness is correlated with novel learning tasks that require attention and focus. A superintelligence would possess expert-level knowledge in every domain, with rapid-fire computations ranging over vast databases that could include the entire Internet and ultimately encompass an entire galaxy. What would be novel to it? What would require slow, deliberative focus? Wouldn’t it have mastered everything already? Like an experienced driver on a familiar road, it could rely on nonconscious processing.

On this issue, I'm more optimistic than Schneider. Two reasons:

First, Schneider probably underestimates the capacity of the universe to create problems that require novel solutions. Mathematical problems, for example, can be arbitrarily difficult (including problems that are neither finitely solvable nor provably unsolvable). Of course AGI might not care about such problems, so that alone is a thin thread on which to hang hope for consciousness. More importantly, if we assume Darwinian mechanisms, including the existence of other AGIs that present competitive and cooperative opportunities, then there ought to be advantages for AGIs that can outthink the other AGIs around them. And here, as in the mathematical case, I see no reason to expect an upper bound of difficulty. If your Darwinian opponent is a superintelligent AGI, you'd probably love to be an AGI with superintelligence + 1. (Of course, there are other paths to evolutionary success than intelligent creativity. But it's plausible that once superintelligent AGI emerges, there will be evolutionary niches that reward high levels of creative intelligence.)

Second, unity of organization in a complex system plausibly requires some high-level self-representation or broad systemic information sharing. Schneider is right that many current scientific approaches to consciousness correlate consciousness with novel learning and slow, deliberative focus. But most current scientific approaches to consciousness also associate consciousness with some sort of broad information sharing -- a "global workspace" or "fame in the brain" or "availability to working memory" or "higher-order" self-representation. On such views, we would expect a state of an intelligent system to be conscious if its content is available to the entity's other subsystems and/or reportable in some sort of "introspective" summary. For example, if a large AI knew, about its own processing of lightwave input, that it was representing huge amounts of light in the visible spectrum from direction alpha, and if the AI could report that fact to other AIs, and if the AI could accordingly modulate the processing of some of its non-visual subsystems (its long-term goal processing, its processing of sound wave information, its processing of linguistic input), then on theories of this general sort, its representation "lots of visible light from that direction!" would be conscious. And we ought probably to expect that large general AI systems would have the capacity to monitor their own states and distribute selected information widely. Otherwise, it's unlikely that such a system could act coherently over the long term. Its left hand wouldn't know what its right hand is doing.
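To make the flavor of such views concrete, here is a deliberately tiny toy sketch in Python. It is my own illustrative construction with invented names, not an implementation of any particular theory; the point is just that the "broadcast" states are the ones the rest of the system can use and report.

```python
# Toy "global workspace": a representation counts as globally available
# when its content is broadcast to the system's other subsystems and is
# thereby poised for "introspective" report.
from dataclasses import dataclass, field

@dataclass
class Subsystem:
    name: str
    inbox: list = field(default_factory=list)

    def receive(self, source, content):
        # The receiving subsystem can now modulate its own processing.
        self.inbox.append((source, content))

class GlobalWorkspace:
    def __init__(self, subsystems):
        self.subsystems = {s.name: s for s in subsystems}

    def broadcast(self, source, content):
        # Make one subsystem's local state available to all the others.
        for name, sub in self.subsystems.items():
            if name != source:
                sub.receive(source, content)

ai = GlobalWorkspace([Subsystem(n) for n in
                      ("vision", "audition", "goal_planning", "verbal_report")])
ai.broadcast("vision", "lots of visible light from direction alpha!")
print(ai.subsystems["verbal_report"].inbox)
# [('vision', 'lots of visible light from direction alpha!')]
```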

I share with Schneider a high degree of uncertainty about what the best theory of consciousness is. Perhaps it will turn out that consciousness depends crucially on some biological facts about us that aren't likely to be replicated in systems made of very different materials (see John Searle and Ned Block for concerns). But to the extent there's any general consensus or best guess about the science of consciousness, I believe it suggests hope rather than pessimism about the consciousness of large superintelligent AI systems.

Related:

Possible Psychology of a Matrioshka Brain (Oct 9, 2014)

If Materialism Is True, the United States Is Probably Conscious (Philosophical Studies 2015).

Susan Schneider on How to Prevent a Zombie Dictatorship (Jun 27, 2016)

[image source]

Friday, December 16, 2016

Extraterrestrial Microbes and Being Alone in the Universe

A couple of weeks ago I posted some thoughts that I intended to give after a cosmology talk here at UCR. As it happens, I gave an entirely different set of comments! So I figured I might as well also share the comments I actually gave.

Although the cosmology talk made no or almost no mention of extraterrestrial life, it had been advertised as the first in a series of talks on the question "Are We Alone?" The moderator then talked about astrobiologists being excited about the possibility of discovering extraterrestrial microbial life. So I figured I'd expand a bit on the idea of being "alone", or not, in the universe.

Okay, suppose that we find microbial life on another planet. Tiny micro-organisms. How excited should we be?

The title of this series of talks -- written in big letters on the posters -- is "Are We Alone?" What does it mean to be alone?

Think of Robinson Crusoe. He was stranded on an island, all by himself (or so he thought). He is kind of our paradigm example of someone who is totally alone. But of course he was surrounded by life on that island -- trees, fish, snails, microbes on his face. This suggests that on one way of thinking about being "alone", a person can be entirely alone despite being surrounded by life. Discovering microbes on another planet would not make us any less alone.

To be not alone, I’m thinking, means having some sort of companion. Someone who will recognize you socially. Intelligent life. Or at least a dog.

We might be excited to discover microbes because hey, it's life! But what’s so exciting about life per se?

Life -- something that maintains homeostasis, has some sort of stable organization, draws energy from its environment to maintain that homeostatic organization, reproduces itself, is complex. Okay, that's neat. But the Great Red Spot on Jupiter, which is a giant weather pattern, has maintained its organization for a long time in a complex environment. Flames jumping across treetops in some sense reproduce themselves. Galaxies are complex. Homeostasis, reproduction, complexity -- these are cool. Tie them together in a little package of microbial life; that’s maybe even cooler. But in a way we do kind of already know that all the elements are out there.

Now suppose that instead of finding life we found a robot -- an intelligent, social robot, like C3P0 from Star Wars or Data from Star Trek. Not alive, by standard biological definitions, if it doesn’t belong to a reproducing species.

Finding life would be cool.

But finding C3P0 would be a better cure for loneliness.

(Apologies to my student Will Swanson, who has recently written a terrific paper on why we should think of robots as "alive" despite not meeting standard biological criteria for life.)

Related post: "Why Do We Care About Discovering Life, Exactly?" (Jun 18, 2015)

Recorded video of the Dec 8 session.

Thanks to Nalo Hopkinson for the dog example.

[image source]

Monday, December 12, 2016

Is Consciousness an Illusion?

In the current issue of the Journal of Consciousness Studies, Keith Frankish argues that consciousness is an illusion -- or at least that "phenomenal consciousness" is an illusion. It doesn't exist.

Now I think there are basically two different things that one could mean in saying "consciousness doesn't exist".

(A.) One is something that seems to be patently absurd and decisively refuted by every moment of lived experience: that there is no such thing as lived experience. If it sounds preposterous to deny that anyone ever has conscious experience, then you're probably understanding the claim correctly. It is a radically strange claim. Of course philosophers do sometimes defend radically strange, preposterous-sounding positions. Among them, this would be a doozy.

(B.) Alternatively, you might think that when a philosopher says that consciousness exists (or "phenomenal consciousness" or "lived, subjective experience" or whatever) she's usually not just saying the almost undeniably obvious thing. You might think that she's probably also regarding certain disputable properties as definitionally essential to consciousness. You might hear her as saying not only that there is lived experience in the almost undeniable sense but also that the target phenomenon is irreducible to the merely physical, or is infallibly knowable through introspection, or is constantly accompanied by a self-representational element, or something like that. Someone who hears the claim that "consciousness exists" in this stronger, more commissive sense might then deny that consciousness does exist, if they think that nothing exists that has those disputable properties. This might be an unintuitive claim, if it's intuitively plausible that consciousness does have those properties. But it's not a jaw dropper.

Admittedly, there has been some unclarity in how philosophers define "consciousness". It's not entirely clear on the face of it what Frankish means to deny the existence of in the article linked above. Is he going for the totally absurd sounding claim, or only the more moderate claim? (Or maybe something somehow in between or slightly to the side of either of these?)

In my view, the best and most helpful definitions of "consciousness" are the less commissive ones. The usual approach is to point to some examples of conscious experiences, while also mentioning some synonyms or evocative phrases. Examples include sensory experiences, dreams, vivid surges of emotion, and sentences spoken silently to oneself. Near synonyms or evocative phrases include "subjective quality", "stream of experience", "that in virtue of which it's like something to be a person". While you might quibble about any particular example or phrase, it is in this sense of "consciousness" that it seems to be undeniable or absurd to deny that consciousness exists. It is in this sense that the existence of consciousness is, as David Chalmers says, a "datum" that philosophers and psychologists need to accept.

Still, we might be dissatisfied with evocative phrases and pointing to examples. For one thing, such a definition doesn't seem very rigorous, compared to an analytic definition. For another thing, you can't do very much a priori with such a thin definition, if you want to build an argument from the existence of consciousness to some bold philosophical conclusion (like the incompleteness of physical science or the existence of an immaterial soul). So philosophers are understandably tempted to add more to the definition -- whatever further claims about consciousness seem plausible to them. But then, of course, they risk adding too much and losing the undeniability of the claim that consciousness exists.

When I read Frankish's article in preprint, I wasn't sure how radical a claim he meant to defend, in denying the existence of phenomenal consciousness. Was he going for the seemingly absurd claim? Or only for the possibly-unintuitive-but-much-less-radical claim?

So I wrote a commentary in which I tried to define "phenomenal consciousness" as innocently as possible, simply by appealing to what I hoped would be uncontroversial examples of it, while explicitly disavowing any definitional commitment to immateriality, introspective infallibility, irreducibility, etc. (final MS version). Did Frankish mean to deny the existence of phenomenal consciousness in that sense?

In one important respect, I should say, definition by example is necessarily substantive or commissive: Definition by example cannot succeed if the examples are a mere hodgepodge without any important commonalities. Even if there isn't a single unifying essence among the examples, there must at least be some sort of "family resemblance" that ordinary people can latch on to, more or less.

For instance, the following would fail as an attempted definition: By "blickets" I mean things like: this cup on my desk, my right shoe, the Eiffel Tower, Mickey Mouse, and other things like those; but not this stapler on my desk, my left shoe, the Taj Mahal, Donald Duck, or other things like those. What property could the first group possibly possess, that the second group lacks, which ordinary people could latch onto by means of contemplating these examples? None, presumably (even if a clever philosopher or AI could find some such property). Defining "consciousness" by example requires there to be some shared property or family resemblance among the examples, which is not present in things we normally regard as "nonconscious" (early visual processing, memories stored but not presently considered, and growth hormone release). The putative examples cannot be a mere hodgepodge.

Definition by example can be silent about what descriptive features all these conscious experiences share, just as a definition by example of "furniture" or "games" might be silent about what ties those concepts together. Maybe all conscious experiences are in principle introspectively reportable, or nonphysical, or instantiated by 40 hertz neuronal oscillations. Grant first that consciousness exists. Argue about these other things later.

In his reply to my commentary, Frankish accepts the existence of "phenomenal consciousness" as I have defined it -- which is really (I think) more or less how it is already defined and ought to be defined in the recent Anglophone "phenomenal realist" tradition. (The "phenomenal" in "phenomenal consciousness", I think, serves as a usually unnecessary disambiguator, to prevent interpreting "consciousness" as some other less obvious but related thing like explicit self-consciousness or functional accessibility to cognition.) If so, then Frankish is saying something less radical than it might at first seem when he rejects the existence of "phenomenal consciousness".

So is consciousness an illusion? No, not if you define "consciousness" as you ought to.

Maybe my dispute with Frankish is mainly terminological. But it's a pretty important piece of terminology!

[image source, Pinna et al 2002, The Pinna Illusion]

Tuesday, December 06, 2016

A Philosophical Critique of the Big Bang Theory, in Four Minutes

I've been invited to be one of four humanities panelists after a public lecture on the early history of the universe. (Come by if you're in the UCR area. ETA: Or watch it live-streamed.) The speaker, Bahram Mobasher, has told me he likes to keep it tightly scientific -- no far-out speculations about the multiverse, no discussion of possible alien intelligences. Instead, we'll hear about H/He ratios, galactic formation, that sort of stuff. I have nothing to say about H/He ratios.

So here's what I'll say instead:

Alternatively, here’s a different way our universe might have begun: Someone might have designed a computer program. They might have put simulated agents in that computer program, and those simulated agents might be us. That is, we might be artificial intelligences inside an artificial environment created by some being who exists outside of our visible world. And this computer program that we are living in might have started ten years ago or ten million years ago or ten minutes ago.

This is called the Simulation Hypothesis. Maybe you’ve heard that Elon Musk, the famous tycoon behind PayPal, Tesla, and SpaceX, believes that the Simulation Hypothesis is probably true.

Most of you probably think that Musk is wrong. Probably you think it vastly more likely that Professor Mobasher’s story is correct than that the Simulation Hypothesis is correct. Or maybe you think it’s somewhat more likely that Mobasher is correct.

My question is: What grounds this sense of relative likelihood? It’s doubtful that we can get definite scientific proof that we are not in a simulation. But does that mean that there are no rational constraints on what it’s more or less reasonable to guess about such matters? Are we left only with hard science on the one hand and rationally groundless faith on the other?

No, I think we can at least try to be rational about such things and let ourselves be moved to some extent by indirect or partial scientific evidence or plausibility considerations.

For example, we can study artificial intelligence. How easy or difficult is it to create artificial consciousness in simulated environments, at least in our universe? If it’s easy, that might tend to nudge up the reasonableness of the Simulation Hypothesis. If it’s hard, that might nudge it down.

Or we can look for direct evidence that we are in a designed computer program. For example, we can look for software glitches or programming notes from the designer. So far, this hasn’t panned out.

Here’s my bigger point. We all start with framework assumptions. Science starts with framework assumptions. Those assumptions might be reasonable, but they can also be questioned. And one place where cosmology intersects with philosophy and the other humanities and sciences is in trying to assess those framework assumptions, rather than simply leaving them unexamined or taking them on faith.

[image source]

Related:

"1% Skepticism" (Nous, forthcoming)

"Reinstalling Eden" (with R. Scott Bakker; Nature, 2013)

Tuesday, November 29, 2016

How Everything You Do Might Have Huge Cosmic Significance

Infinitude is a strange and wonderful thing. It transforms the ridiculously improbable into the inevitable.

Now hang on to your hat and glasses. Today's line of reasoning is going to make mere Boltzmann continuants seem boring and mundane.

First, let's suppose that the universe is infinite. This is widely viewed as plausible (see Brian Greene and Max Tegmark).

Second, let's suppose that the Copernican Principle holds: We are not in any special position in the universe. This principle is also widely accepted.

Third, let's assume cosmic diversity: We aren't stuck in an infinitely looping variant of a mere (proper) subset of the possibilities. Across infinite spacetime, there's enough variety to run through every finitely specifiable possibility infinitely often.

These assumptions are somewhat orthodox. To get my argument going, we also need a few assumptions that are less orthodox, but I hope not wildly implausible.

Fourth, let's assume that complexity scales up infinitely. In other words, as you zoom out on the infinite cosmos, you don't find that things eventually look simpler as the scale of measurement gets bigger.

Fifth, let's assume that local actions on Earth have chaotic effects of an arbitrarily large magnitude. You know the Butterfly Effect from chaos theory -- the idea that a small perturbation in a complex, "chaotic" system can make a large-scale difference in the later evolution of the system. A butterfly flapping its wings in China could cause the weather in the U.S. weeks later to be different than it would have been if the butterfly hadn't flapped its wings. Small perturbations amplify. This fifth assumption is that there are cosmic-scale butterfly effects: far-distant, arbitrarily large future events that arise with chaotic sensitivity to events on Earth. Maybe new Big Bangs are triggered, or maybe (as envisioned by Boltzmann) given infinite time, arbitrarily large systems will emerge by chance from low-entropy "heat death" states, and however these Big Bangs or Boltzmannian eruptions arise, they are chaotically sensitive to initial conditions -- including the downstream effects of light reflected from Earth's surface.
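The amplification half of this assumption is at least easy to illustrate. Here's a minimal sketch using the logistic map, the textbook toy example of chaos (not, of course, a model of cosmology): two trajectories that start a trillionth apart soon bear no resemblance to each other.

```python
# Butterfly effect in miniature: the gap between two trajectories of the
# chaotic logistic map roughly doubles each step until it saturates.
def logistic(x, r=4.0):
    return r * x * (1.0 - x)

a, b = 0.3, 0.3 + 1e-12
for step in range(1, 61):
    a, b = logistic(a), logistic(b)
    if step % 10 == 0:
        print(step, abs(a - b))
# By around step 40 the two histories have completely diverged.
```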

Okay, that's a big assumption to swallow. But I don't think it's absurd. Let's just see where it takes us.

Sixth, given the right kind of complexity, evolutionary processes will transpire that favor intelligence. We would not expect such evolutionary processes at most spatiotemporal scales. However, given that complexity scales up infinitely (our fourth assumption) we should expect that at some finite proportion of spatiotemporal scales there are complex systems structured in a way that enables the evolution of intelligence.

From all this it seems to follow that what happens here on Earth -- including the specific choices you make, chaotically amplified as you flap your wings -- can have effects on a cosmic scale that influence the cognition of very large minds.

(Let me be clear that I mean very large minds. I don't mean galaxy-sized minds or visible-universe-sized minds. Galaxy-sized and visible-universe-sized structures in our region don't seem to be of the right sort to support the evolution of intelligence at those scales. I mean way, way up. We have infinitude to play with, after all. And presumably way, way slow if the speed of light is a constraint. Also, I am assuming that time and causation make sense at arbitrarily large scales, but maybe that can be weakened if necessary to something like contingency.)

Now at such scales anything little old you personally does would very likely be experienced as chance. Suppose for example that a cosmic mind utilizes the inflation of Big Bangs. Even if your butterfly effects cause a future Big Bang to happen this way rather than that way, probably a mind at that scale wouldn't have evolved to notice tiny-scale causes like you.

Far-fetched. Cool, perhaps, depending on your taste in cool. Maybe not quite cosmic significance, though, if your decisions only feed a pseudo-random mega-process whose outcome has no meaningful relationship to the content of your decisions.

But we do have infinitude to play with, so we can add one more twist.

Here it is: If the odds of influencing the behavior of an arbitrarily large intelligent system are finite, and if we're letting ourselves scale up arbitrarily high, then (granting all the rest of the argument) your decisions will affect the behavior of an infinite number of huge, intelligent systems. Among them there will be some -- a tiny but finite proportion! -- such that the following counterfactual is true: If you hadn't made that upbeat, life-affirming choice you in fact just made, that huge, intelligent system would have decided that life wasn't worth living. But fortunately, partly as a result of that thing you just did, that giant intelligence -- let's call it Emily -- will discover happiness and learn to celebrate its existence. Emily might not know about you. Emily might think it's random or find some other aspect of the causal chain to point toward. But still, if you hadn't done that thing, Emily's life would have been much worse.

So, whew! I hope it won't seem presumptuous of me to thank you on Emily's behalf.

[image source]

Sunday, November 27, 2016

The Odds of Getting Three Consecutive Wars in the Card Game of War

What better way to spend the Sunday after Thanksgiving than playing card games with your family and then arguing about the odds?

As pictured, my daughter and I just got three consecutive "wars" in the card game of war. (I lost with a 3 at the end!)

What are the odds of that?

Well, the odds of getting just one war are 3/51, right? Here's why. It doesn't matter whether my or my daughter's card is turned first. That card can be anything. The second card needs to match it. With the first card out of the deck, 51 cards remain. Three of them match the first-turned card. So 3/51 = .058824 = about a 5.9% chance.

Then you each play three face down "soldier" cards. Those could be any cards, and we don't know anything about them, so they can be ignored for purposes of calculation. What's relevant are the next upturned cards, the "generals". Here there are two possibilities. First possibility: The first general is the same value as the original war cards. Since there are 50 unplayed cards and two that match the original two war cards, the odds of that are 2/50 = .040000 = 4.0%. The other possibility is that the value of the first general differs from that of the war cards: 48/50 = .960000 = 96.0%.

(As I write this, my son is sleeping late and my wife and daughter are playing with Musical.ly -- other excellent ways to spend a lazy Sunday!)

In the first case, the odds of the second general matching are only one in 49 (.020408, about 2.0%), since three of the four cards of that value have already been played and there are 49 cards left in the deck (disregarding the soldiers). In the second case, the odds are three in 49 (.061224, about 6.1%).

So the odds of two wars consecutively are: .058824 * .04 * .020408 (first war, followed by matching generals, i.e. all four up cards the same) + .058824 * .96 * .061224 (first war, followed by a different pair of matching generals) = .000048 + .003457 = .003505. In other words, there's about a 0.35% chance, or about a one in 300 chance, of two consecutive wars.
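If you'd like to confirm that arithmetic without the rounding in the intermediate decimals, exact fractions give the same answer. A quick check in Python:

```python
# Exact two-consecutive-wars probability, with no decimal rounding.
from fractions import Fraction as F

two_wars = F(3, 51) * F(2, 50) * F(1, 49) + F(3, 51) * F(48, 50) * F(3, 49)
print(two_wars, float(two_wars))  # 73/20825, approximately 0.003505
```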

If the second war had generals that matched the original war cards, then there's only one way for the third war to happen. Player one draws any new general. The odds of player two's new general matching are 3/47 (.063830).

If the second war had generals that did not match the original war cards, then there are two possibilities.

First possibility: The first new general is the same value as one of the original war cards or previous generals. There's a 4 in 48 (.083333) chance of that happening (two remaining cards of each of those two values). Finally, there's a 1/47 (.021277) chance that the last general matches this one (last remaining card of that value).

Second possibility: The first new general is a different value from either the original war cards or the previous generals. The odds of that are 44/48 (.916667), followed by a 3/47 (.063830) chance of match.

Okay, now we can total up the possibilities. There are three relevantly different ways to get three consecutive wars.

A: First war, followed by second war with same values, followed by third war with different values: .058824 (first war) * .040000 (first general matches war cards) * .020408 (second general matches first general) * .063830 (odds of third war with fresh card values) = .000003 (.0003% or about 1 in 330,000).

B: First war, followed by second war with different values, followed by third war with same values as one of the previous wars: .058824 (first war) * .960000 (first general doesn't match war cards) * .061224 (second general matches first general) * .083333 (first new general matches either war cards or previous generals) * .021277 (second new general matches first new general) = .000006 (.0006% or about 1 in 160,000).

C: First war, followed by second and third wars, each with different values: .058824 (first war) * .960000 (first general doesn't match war cards) * .061224 (second general matches first general) * .916667 (first new general doesn't match either war cards or previous generals) * .063830 (second new general matches first new general) = .000202 (.02% or about 1 in 5000).

Summing up these three paths: .000003 + .000006 + .000202 = .000211. In other words, the chance of three wars in a row is 0.0211% or 1 in 4739.
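And if you'd rather trust brute force than my bookkeeping, here's a small Monte Carlo check (a sketch of my own; the alternating deal is an assumption, though by symmetry the dealing convention doesn't affect the odds):

```python
# Shuffle a 52-card deck, deal alternately, and count how often the very
# first flip launches three consecutive wars, with three face-down
# "soldiers" per player buried before each new face-up "general".
import random

def opening_wars(p1, p2, cap=3):
    wars, i = 0, 0
    while wars < cap and p1[i] == p2[i]:
        wars += 1
        i += 4  # skip 3 soldiers; the next face-up card is the general
    return wars

deck = [value for value in range(13) for _ in range(4)]
trials, hits = 1_000_000, 0
for _ in range(trials):
    random.shuffle(deck)
    if opening_wars(deck[0::2], deck[1::2]) == 3:
        hits += 1
print(hits / trials)  # typically lands near 0.00021 -- about 1 in 4700
```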

Now for some leftover turkey.

-----------------------------------------------

As it happens we were playing the variant game Modern War -- which is much less tedious than the traditional card game of war! But since it was only the first campaign the odds are the same. (In later campaigns the odds of war increase, because smaller cards fall disproportionately out of the deck.)

Wednesday, November 23, 2016

The Moral Compass and the Liberal Ideal in Moral Education

Here are two very different approaches to moral education:

The outward-in approach. Inform the child what the rules are. Do not expect the child to like the rules or regard them as wise. Instead, enforce compliance through punishment and reward. Secondarily, explain the rules, with the hope that eventually the child will come to appreciate their wisdom, internalize them, and be willing to abide by them without threat of punishment.

The inward-out approach. When the child does something wrong, help the child see for herself what makes it wrong. Invite the child to reflect on what constitutes a good system of rules and what are good and bad ways to treat people, and collaborate in developing guidelines and ideals that make sense to the child. Trust that even young children can come to see the wisdom of moral guidelines and ideals. Punish only as a fallback when more collaborative approaches fail.

Though there need be no neat mapping, I conjecture that preference for the outward-in approach correlates with what we ordinarily regard as political conservatism, and preference for the inward-out approach with what we ordinarily regard as political liberalism. The crucial difference between the two approaches is this: The outward-in approach trusts children's judgment less. On the outward-in approach, children should be taught to defer to established rules, even if those rules don't make sense to them. This resembles Burkean political conservatism among adults, which prioritizes respect for the functioning of our historically established traditions and institutions, mistrusting our current judgments about how those institutions might be improved or replaced.

In contrast, the liberal ideal in moral education depends on the thought that most or all people -- including most or all children -- have something like an inner moral compass, which can be relied on as at least a partial, imperfect guide toward what's morally good. If you take four-year-old Pooja aside after she has punched Lauren (names randomly chosen) and patiently ask her to explain herself and to think about the ethics of punching, you will get something sensible in reply. For the liberal ideal to work, it must be true that Pooja can be brought to understand the importance of treating others kindly and fairly. It must be true that after reflection, she will usually find that she wants to be kind and fair to others, even without outer reward.

This is a lot to expect from children. And yet I do think that most children, when approached patiently, can find their moral compass. In my experience watching parents and educators, it strikes me that when they are at their best -- not overloaded with stress or too many students -- they can successfully use the inward-out approach. Empirical psychology also suggests that the (imperfect, undeveloped) seeds of morality are present early in development and shared among primates.

It is, I think, foundational to the liberal conception of the human condition -- "liberal" in rejecting the top-down imposition of values and celebrating instead people's discovery of their own values -- that when people are given a chance to reflect, in conditions of peace, with broad access to relevant information, they will tend to find themselves revolted by evil and attracted to good. Hatred and evil wither under thoughtful critical examination. So we liberals must believe. Despite complexities, bumps, regressions, and contrary forces, reflection and broad exposure to facts and arguments will bend us toward freedom, egalitarianism, and respect.

If this is so, here's something you can always do: Invite people to think alongside you. Share the knowledge you have. If there is light and insight in your thinking, people will slowly walk toward it.

Related essay: Human Nature and Moral Education in Mencius, Xunzi, Hobbes, and Rousseau (History of Philosophy Quarterly, 2007)

[image source]

Tuesday, November 15, 2016

Three Ways to Be Not Quite Free of Racism

Suppose that you can say, with a feeling of sincerity, "All races and colors of people deserve equal respect". Suppose also that when you think about American Blacks or South Asians or Middle Eastern Muslims you don't detect any feelings of antipathy, or at least any feelings of antipathy that you believe arise merely from consideration of their race. This is good! You are not an all-out racist in the 19th-century sense of that term.

Still, you might not be entirely free of racial prejudice, if we took a close look at your choices, emotions, passing thoughts, and swift intuitive judgments about people.

Imagine then the following ideal: Being free of all unjustified racial prejudice. We can imagine similar ideals for classism, ableism, sexism, ethnicity, subculture, physical appearance, etc.

It would be a rare person who met all of these ideals. Yet not all falling short is the same. The recent election has made vivid for me three importantly distinct ways in which one can fall short. I use racism as my example, but other failures of egalitarianism can be analyzed similarly.

Racism is an attitude. Attitudes can be thought of as postures of the mind. To have an attitude is to be disposed to act and react in attitude-typical ways. (The nature of attitudes is a central part of my philosophical research. For a fuller account of my view, see here.) Among the dispositions constitutive of all-out racism are: making racist claims, purposely avoiding people of that race, uttering racist epithets in inner speech, feeling negative emotions when interacting with that race, leaping quickly to negative conclusions about individual members of that race, preferring social policies that privilege your preferred race, etc.

An all-out racist would have most or all of these dispositions (barring "excusing conditions"). Someone completely free of racism would have none of these dispositions. Likely, the majority of people in our culture inhabit the middle.

But "the middle" isn't all the same. Here are three very different ways of occupying it.

(1.) Implicit racism. Some of the relevant dispositions are explicitly or overtly racist -- for example, asserting that people of the target race are inherently inferior. Other dispositions are only implicitly or covertly racist, for example, being prone without realizing it to evaluate job applications more negatively if the applicant is of the target race, or being likely to experience negative emotion upon being assigned a cooperative task with a person of the target race. Recent psychological research suggests that many people in our culture, even if they reject explicitly racist statements, are disposed to have some implicitly racist reactions, at least occasionally or in some situations. We can thus construct a portrait of the "implicit racist": Someone who sincerely disavows all racial prejudice, but who nonetheless has a wide-ranging and persistent tendency toward implicitly racist reactions and evaluations. Probably no one is a perfect exemplar of this portrait, with all and only implicitly racist reactions, but it is probably common for people to match it to a certain extent. To that extent, whatever it is, that person is not quite free of implicit racism.

Implicit racism has received so much attention in the recent psychological and philosophical literature that one might think that it is the only way to be not quite free of racism while disavowing racism in the 19th-century sense of the term. Not so!

(2.) Situational racism. Dispositions manifest only under certain conditions. Priscilla (name randomly chosen) is disposed sincerely to say, if asked, that people of all races deserve equal respect. Of course, she doesn't actually spend the entire day saying this. She is disposed to say it only under certain conditions -- conditions, perhaps, that assume the continued social disapproval of racism. It might also be the case that under other conditions she would say the opposite. A person might be disposed sincerely to reject racist statements in some contexts and sincerely to endorse them in other contexts. This is not the implicit/explicit division. I am assuming both sides are explicit. Nor am I imagining a change in opinion over time. I am imagining a person like this: If situation X arose she would be explicitly racist, while if situation Y arose she would be explicitly anti-racist, maybe even passionately, self-sacrificingly so. This is not as incoherent as it might seem. Or if it is incoherent, it is a commonly human type of incoherence. The history of racism suggests that perfectly nice, non-racist-seeming people can change on a dime with a change in situation, and then change back when the situation shifts again. For some people, all it might take is the election of a racist politician. For others, it might take a more toxically immersive racist environment, or a personal economic crisis, or a demanding authority, or a recent personal clash with someone of the target race.

(3.) Racism of indifference. Part of what prompted this post was an interview I heard with someone who denied being racist on the grounds that he didn't care what happened to Black people. This deprioritization of concern is in principle separable from both implicit racism and situational racism. For example: I don't think much about Iceland. My concerns, voting habits, thoughts, and interests instead mostly involve what I think will be good for me, my family, my community, my country, or the world in general. But I'm probably not much biased against Iceland. I have mostly positive associations with it (beautiful landscapes, high literacy, geothermal power). Assuming (contra Mozi) that we have much greater obligations to family and compatriots than to people in far-off lands, my habit of not highly prioritizing the welfare of people in Iceland probably doesn't deserve to be labeled pejoratively with an "-ism". But a similar disregard or deprioritization of people in your own community or country, on grounds of their race, does deserve a pejorative label, independent of any implicit or explicit hostility.

These three ways of being not quite free of racism are conceptually separable. Empirically, though, things are likely to be messy and cross-cutting. Probably the majority of people don't map neatly onto these categories, but have a complex set of mixed-up dispositions. Furthermore, this mixed-up set probably often includes both racist dispositions and, right alongside, dispositions to admire, love, and even make special sacrifices for people who are racialized in culturally disvalued ways.

It's probably difficult to know the extent to which you yourself fail, in one or more of these three ways, to be entirely free of racism (sexism, ableism, etc.). Implicitly racist dispositions are by their nature elusive. So also is knowledge of how you would react to substantial changes in circumstance. So also are the real grounds of our choices. One of the great lessons of the past several decades of social and cognitive psychology is that we know far less than we think we know about what drives our preferences and about the situational influences on our behavior.

I am particularly struck by the potentially huge reach of the bigotry of indifference. Action is always a package deal. There are always pros and cons, which need to be weighed. You can't act toward one goal without simultaneously deprioritizing many other possible goals. Since it's difficult to know the basis of your prioritization of one thing over another, it is possible that the bigotry of indifference permeates a surprising number of your personal and political choices. Though you don't realize it, it might be the case that you would have felt more call to action had the welfare of a different group of people been at stake.

[image source Prabhu B Doss, creative commons]

Wednesday, November 09, 2016

Thought for the Day

What you believe is not what you say you believe. It is how you act.

What you desire is not what you say you desire. It is what you choose.

Who you are is how you live.

You know this about other people, but it is very difficult to know this about yourself.

--------------------------------------

Acting Contrary to Our Professed Beliefs (Pacific Philosophical Quarterly, 2010).

Knowing Your Own Beliefs (Canadian Journal of Philosophy, 2011).

A Dispositional Approach to the Attitudes (New Essays on Belief, 2013).

The Pragmatic Metaphysics of Belief (in draft)

Friday, November 04, 2016

Use of "Genius", "Strict", and "Sexy" in Teaching Evaluations, by Discipline and Gender of Professor

Interesting tool here, where you can search for terms in professors' teaching reviews, by discipline and gender.

The gender associations of "genius" with male professors are already fairly well known. Here's how they show up in this database:

Apologies for the blurry picture. Click on it to make it clearer!

On the other hand, terms like "mean", "strict", and "unfair" tend to occur more commonly in reviews of female professors. Here's "strict":

How about "sexy"? You might imagine that going either way: Maybe female professors are more frequently rated by their looks. On the other hand, maybe it's "sexier" to be a professor if you're a man. Here's how it turns out:

Update, 10:45.

I can't resist adding one more. "Favorite":

Wednesday, November 02, 2016

Introspecting an Attitude by Introspecting Its Conscious Face

In some of my published work, I have argued that:

(1.) Attitudes, such as belief and desire, are best understood as clusters of dispositions. For example, to believe that there is beer in the fridge is nothing more or less than to be disposed (all else being equal or normal) to go to the fridge if one wants a beer, to feel surprised if one were to open the fridge and find no beer, to conclude that the fridge isn't empty if that question becomes relevant, etc., etc. (See my essays here and here.)

And

(2.) Only conscious experiences are introspectible. I characterize introspection as "the dedication of central cognitive resources, or attention, to the task of arriving at a judgment about one's current, or very recently past, conscious experience, using or attempting to use some capacities that are unique to the first-person case... with the aim or intention that one's judgment reflect some relatively direct sensitivity to the target state" (2012, p. 42-43).

Now it also seems correct that (3.) dispositions, or clusters of dispositions, are not the same as conscious experiences. One can be disposed to have a certain conscious experience (e.g., disposed to experience a feeling of surprise if one were to see no beer), but dispositions and their manifestations are not metaphysically identical. Oscar can be disposed to experience surprise if he were to see an empty fridge, even if he never actually sees an empty fridge and so never actually experiences surprise.

From these three claims it follows that we cannot introspect attitudes such as belief and desire.

But it seems we can introspect them! Right now, I'm craving a sip of coffee. It seems like I am currently experiencing that desire in a directly introspectible way. Or suppose I'm thinking aloud, in inner speech, "X would be such a horrible president!" It seems like I can introspectively detect that belief, in all its passionate intensity, as it occurs in my mind right now.

I don't want to deny this, exactly. Instead, let me define relatively strict versus permissive conceptions of the targets of introspection.

To warm up, consider a visual analogy: seeing an orange. There the orange is, on the table. You see it. But do you really see the whole orange? Speaking strictly, it might be better to say that you see the orange rind, or the part of the orange rind that is facing you, rather than the whole orange. Arguably, you infer or assume that it's not just an empty rind, that it has a backside, that it has a juicy interior -- and usually that's a safe enough assumption. It's reasonable to just say that you see the orange. In a relatively permissive sense, you see the whole orange; in a relatively strict sense you see only the facing part of the orange rind.

Another example: From my office window I see the fire burning downtown. Of course, I only see the smoke. Even if I were to see the flames, in the strictest sense perhaps the visible light emitted from flames is only a contingent manifestation of the combustion process that truly constitutes a fire. (Consider invisible methanol fires.) More permissively, I see the fire when I see the smoke. More strictly, I need to see the flames or maybe even (impossibly?) the combustion process itself.

Now consider psychological cases: In a relatively permissive sense, you see Sandra's anger. In a stricter sense, you see her scowling face. In a relatively permissive sense, you hear the shyness and social awkwardness in Shivani's voice. In a stricter sense you hear only her words and prosody.

To be clear: I do not mean to imply that a stricter understanding of the targets of perception is more accurate or better than a more permissive understanding. (Indeed, excessive strictness can collapse into absurdity: "No, officer, I didn't see the stop sign. Really, all I saw were patterns of light streaming through my vitreous humour!")

As anger can manifest in a scowl and as fire can manifest in smoke and visible flames, so also can attitudes manifest in conscious experience. The desire for coffee can manifest in a conscious experience that I would describe as an urge to take a sip; my attitude about X's candidacy can manifest in a momentary experience of inner speech. In such cases, we can say that the attitudes present a conscious face. If the conscious experience is distinctive enough to serve as an excellent sign of the real presence of the relevant dispositional structure constituting that attitude, then we can say that the attitude is (occurrently) conscious.

It is important to my view that the conscious face of an attitude is not tantamount to the attitude itself, even if they normally co-occur. If you have the conscious experience but not the underlying suite of relevant dispositions, you do not actually have the attitude. (Let's bracket the question of whether such cases are realistically psychologically possible.) Similarly, a scowl is not anger, smoke is not a fire, a rind is not an orange.

Speaking relatively permissively, then, one can introspect an attitude by introspecting its conscious face, much as I can see a whole orange by seeing the facing part of its rind and I can see a fire by seeing its smoke. I rely upon the fact that the conscious experience wouldn't be there unless the whole dispositional structure were there. If that reliance is justified and the attitude is really there, distinctively manifesting in that conscious experience, then I have successfully introspected it. The exact metaphysical relationship between the strictly conceived target and the permissively conceived target is different among the various cases -- part-whole for the orange, cause-effect for the fire, and disposition-manifestation for the attitude -- but the general strategy is the same.

[image source]

Thursday, October 27, 2016

Dispositionalism vs Representationalism about Belief

The Monday before last, Ned Block and Eric Mandelbaum brought me into their philosophy grad seminar at New York University to talk about belief. Our views are pretty far apart, and I got pushback during class (and before class, and after class!) from a variety of directions. But the issue that stuck with me most was the big-picture issue of dispositionalism vs representationalism about belief.

I'm a dispositionalist. By this I mean that to believe some particular proposition, such as that your daughter is at school, is nothing more or less than to be disposed toward certain patterns of behavior, conscious experience, and cognition, under a range of hypothetical conditions -- for example, to be disposed to go to your daughter's school if you decide you want to meet her, to be disposed to feel surprise should you head home for lunch and find her waiting there, and to be disposed, if the question arises, to infer that her favorite backpack is also probably at the school (since she usually takes it with her). All of these dispositions hold only "ceteris paribus" or "all else being equal" and one needn't have all of them to count as believing. (For more details about my version of dispositionalism in particular, see here.) Crucial to the dispositionalist approach (but not unique to it) is the idea that the implementational details don't matter -- or rather, they matter only derivatively. It doesn't matter if you've got a connectionist net in your head, or representations in the language of thought, or a billion little homunculi whispering in thieves' cant, or an immaterial soul. As long as you have the right clusters of behavioral, experiential, and cognitive dispositions, robustly, across a suitably broad range of hypothetical circumstances, you believe.

On a representationalist view, implementation does matter. On a suitably modest view of what a "representation" is (I like Dretske's account), the human mind uses representations. For example, it's very plausible that neural activity in primary visual cortex is representational, if representations are states of a system that function to track or convey information about something else. (In primary visual cortex, patterns of excitation in groups of neurons function to indicate geometrical features in various parts of the visual field.) The representationalist about belief commits to a general picture of the mind as a manipulator of representations, and then characterizes believing as a matter of having the right sort of representations (e.g., one with the content "my daughter is at school") stored or activated in the right type of functional role in the mind (for example, stored in memory and poised (if all goes well) to be activated in cognitive processing when you are asked, "where is your daughter now?").

I interpreted some of the pushback from Block, Mandelbaum, and their students as follows: "Look, the best cognitive science employs a representational model of the mind. So representations are real. Even you don't deny that. So if you want a truly scientific model of the mind instead of some vague dispositionalism that looks only at the effects or manifestations of real cognitive states, you should be a representationalist."

How is a dispositionalist to reply to this concern? I have three broad responses.

The Implementational Response. The most concessive response (short of saying, "oops, you're right!") is to deny that there is any serious conflict between the two positions by allowing that the way one gets to have the dispositional profile constitutive of belief might be by manipulating representations in just the manner that the representationalist supposes. The views can be happily married! You don't get to have the dispositional profile of a believer unless you already have the right sort of representational architecture underneath; and once you have the right sort of representational architecture underneath, you thereby acquire the relevant dispositional profile. The views only diverge in marginal or hypothetical cases where representational architecture and dispositional profile come apart -- but maybe those cases don't matter too much.

However, I think that answer is too concessive, for a couple of reasons.

The Messiness Response. Here's a too-simple hypothetical representationalist architecture for belief. To believe that P (e.g., that my daughter is at school today) is just to have a representation with the content P ("my daughter is at school today") stored somewhere in the mind, ready to be activated when it becomes relevant whether P is the case (e.g., I'm asked "where is your daughter now?"). One problem with this view is the problem of specifying the exact content. I believe that my daughter is at school today. I also believe that my daughter is at JFK Elementary today. I also believe that my daughter is at JFK Elementary now. I also believe that Kate is at JFK Elementary now. I also believe that Kate is in Ms. Salinas' class today. This list could obviously be expanded considerably. Do I literally have all of these representations stored separately? Or is there only one representation stored, from which the others are swiftly derivable? If so, which one? How could we know? This puzzle invites us to reject the simplistic picture that believing P is a matter of having a stored representation with exactly the content P. But once we make this move, we open ourselves up to a certain kind of implementational messiness -- which is plausible anyway. As we have seen in the two best-developed areas of cognitive science -- the cognitive science of memory and the cognitive science of vision -- the underlying architectural stories tend to be highly complex and tend not to map neatly onto our folk psychological categories. Furthermore, viewed from an appropriately broad temporal perspective, scientific fashions come and go: We have this many memory systems, no we have this many; early visual processing is not much influenced by later processing, wait yes it is influenced, wait no it's not after all. Dynamical systems, connectionist networks, patterns of looping activation can all be understood in terms of language-like representations, or no they can't, or maybe map-like representations or sensorimotor representations are better. Given the messiness and uncertainty of cognitive science, it is premature to commit to a thoroughly representationalist picture. Maybe someday we'll have all this figured out well enough so that we can say "this architectural structure, this one, is what you have if you believe that your daughter is at school, we found it!" That would be exciting! That day, I abandon dispositionalism. Until then, I prefer to think of belief dispositionally rather than relying upon any particular architectural story, even as general an architectural story as representationalism.

The What-We-Care-About Response. Why, as philosophers, do we want an account of belief? Presumably, it's because we care about predicting and explaining our behavior and our patterns of experience. So let's suppose as much divergence as it's reasonable to suppose between patterns of experience and behavior and patterns of internal architecture. Maybe we discover an alien species that has outward behavior and inner experiences virtually identical to our own but implemented very differently in the underlying architecture. Or maybe we can imagine a human being whose actions and experiences, not only in her actual circumstances but also in a wide range of hypothetical circumstances, are just like those of someone who believes that P, but who lacks the usual underlying architecture. On an architecture-driven account, it seems that we have to deny that these aliens or this person believes what they seem to believe; on a dispositional account, we get to say that they do believe what they seem to believe. The latter seems preferable: If what we care about in an account of belief is patterns of behavior and experience, then it makes sense to build an account of belief that prioritizes those patterns of behavior and experience as the primary thing, and treats purely architectural considerations as secondary.

----------------------------------------------

Some related posts and papers:

A Phenomenal, Dispositional Account of Belief (Nous 2002).

Belief (Stanford Encyclopedia of Philosophy, 2006 revised 2015).

Mad Belief? (blog post, Nov. 5, 2008).

A Dispositional Approach to Attitudes: Thinking Outside of the Belief Box (in Nottelmann, ed., New Essays on Belief, 2013).

Against Intellectualism About Belief (blog post, July 31, 2015).

The Pragmatic Metaphysics of Belief (essay in draft, October 2016).

Friday, October 21, 2016

Storytelling in Philosophy Class

One of my regular TAs, Chris McVey, uses a lot of storytelling in his teaching. About once a week, he'll spend ten minutes sharing a personal story from his life, relevant to the class material. He'll talk about a family crisis or about his time in the U.S. Navy, connecting it back to the readings from the class.

At last weekend's meeting of the Minorities and Philosophy group at Princeton, I was thinking about what teaching techniques philosophers might use to appeal to a broader diversity of students, and "storytime with Chris" came to mind. The more I think about it, the more I find to like about it.

Here are some thoughts.

* Students are hungry for stories, and rightly so. Philosophy class is usually abstract and impersonal -- or, when not abstract, focused on toy examples or remote issues of public policy. A good story, especially one that is personally meaningful to the teacher, leaps out and captures attention. People in general love stories and are especially ready for them after long, dry abstractions and policy discussions. So why not harness that? Furthermore, storytelling gives shape and flesh to the stick figures of philosophical abstraction. Most abstract principles only get their full meaning when we see how they play out in real cases. Kant might say "act on that maxim that you can will to be a universal law" or Mengzi might say "human nature is good" -- but what do such claims really amount to? Students rightly feel at sea unless they are pulled away from toy examples and into the complexity of real life. Although it's tempting to think that the real philosophical force is in the abstract principles and that storytelling is just needless frill and packaging, I think the reverse might be closer to the truth: The heart of philosophy is in how we engage our minds when given real, messy examples, and the abstractions we derive from cases always partly miss the point.

* Personal stories vividly display the relevance of philosophy. Many -- maybe most -- students are understandably turned off by philosophy because it seems so remote from anything of practical value. What's the point, they wonder, in discussing Locke's view of primary and secondary qualities, or semi-comical far-fetched problems about runaway trolleys, or under what conditions you "know" something is a barn in Fake Barn Country? It takes a certain kind of beautiful, nerdy, impractical mind to love these questions for their own sake. Too much focus on such issues can mislead students into thinking that philosophy is irrelevant to their lives. However (I hope you'll agree), nothing is more relevant to our lives than philosophy. Every choice we make expresses our values. Every controversial opinion we form depends upon our general worldview and our implicit or explicit sense of what people or institutions or methods deserve our trust. Most students will understandably fail to see the connection between academic philosophy and the philosophy they personally live through their choices and opinions unless we vividly show how these are connected. Through storytelling, you model your struggle with Kant's hard line against lying, or with how far to trust purported scientific experts, or with your fading faith in an immaterial soul -- and students can see that philosophy is not just a Glass Bead Game.

* Personal stories shift the locus of academic capital. We might think of "academic capital" as the resources students bring to class that help them succeed. In philosophy class, important capital includes skill at reading and evaluating abstract arguments and, in class discussion, skill at working up passable pro and con arguments on the spot. Academic capital of this sort also includes knowledge of the philosophical tradition, comfort in a classroom environment, and confidence that one knows how this game is played. These are terrific skills to have, of course, and some students have more of them than others, or at least believe they do. Those students tend to dominate class discussion. If you tell a personally meaningful story, however, you can make a different set of skills and experiences suddenly important. Students who have similar stories from their own lives now have something unique to contribute. Students who are good at storytelling, students who have the social and emotional intelligence to evaluate what might have really happened in your family fight, students with cultural knowledge of the kind of situation you describe -- they now have some of the capital. And they might be a very different group from the ones who are so good at the argumentative pro-and-con. In my experience, good philosophical storytelling engages and draws out discussion from a larger and more diverse group of students than does abstract argument and toy example.

If philosophers were more serious about engaged, personal storytelling in class, we would, I think, have a different and broader range of students who loved our courses and appreciated the importance and interest of our discipline.

[image source]

Tuesday, October 11, 2016

My Daughter's Rented Eyes

(inspired by a conversation with Cory Doctorow about how a kid's high-tech rented eyes might be turned toward favored products in the cereal aisle)

At two million dollars outright, of course I couldn't afford to buy eyes for my four-year-old daughter Eva. So, like everyone else whose kids had been blinded by the GuGuBoo Toy Company's defective dolls (may its executives rot in bankruptcy Hell), I rented the eyes. What else could I possibly do?

Unlike some parents, I actually read the Eye & Ear Company's contract. So I knew part of what we were in for. If we didn't make the monthly payments, her eyes would shut off. We agreed to binding arbitration. We agreed to a debt-priority clause, to financial liability for eye extraction, to automatic updates. We agreed that from time to time the saccade patterns of her eyes would be subtly adjusted so that her gaze would linger over advertisements from companies that partnered with Eye & Ear Co. We agreed that in the supermarket, Eva's eyes would be gently maneuvered toward the Froot Loops and the M&Ms.

When the updates came in, we always had the legal right to refuse them. We could, hypothetically, turn off Eva's eyes, then have them surgically removed and returned to Eye & Ear Co. Each new rental contract was thus technically voluntary.

When Eva was seven, the new updater threatened shutoff unless we transferred $1000 into a debit account. Her updated eyes contained new software to detect any copyrighted text or images she might see. Instead of buying copyrighted works in the usual way, we agreed to have a small fee deducted from our account for each work Eva viewed. Instead of paying $4.99 for the digital copy of a Dr. Seuss book, Eye & Ear would deduct $0.50 each time she read the book. Video games might be free with ads, or $0.05 per play, or $0.10, or even $1.00. Since our finances were tight, we set up parental controls: Eva's eyes required parental permission for any charge over $0.99 or any cumulative charges over $5.00 in a day -- and of course they also blocked any "adult" material. Until we granted approval, blocked or unpaid material was blurred and indecipherable, even if she was just peeking over someone's shoulder at a book or walking past a television in a dentist's lobby.

When Eva was ten, the updater overlaid advertisements in her visual field. It helped keep the rental costs down. (We could have bought out of the ads for an extra $6,000 a year.) The ads never interfered much with Eva's vision -- they just kind of scrolled across the top of her visual field sometimes, Eva told us, or printed themselves onto clouds and the sides of buildings.

By the time Eva was thirteen, I'd finally risen to a managerial position at work, and we could afford the new luxury eyes for her. By adjusting the settings, Eva could see infrared at night. She could zoom in on distant objects. She could bug out her eyes and point them in different directions like some sort of weird animal, to take in a broader field of view. She could also take snapshots and later retrieve them with a subvocalization -- which gave her a great advantage at school over her normal-eyed and cheaper-eyed peers. Installed software could text-search through stored snapshots, solve mathematical equations, and pull relevant information from the internet. When teachers tried to ban such enhancements from the classroom, Eye & Ear fought back, arguing that the technology had become so integral to the children's lives that it couldn't be removed without disabling them. Eye & Ear refused to develop the technology to turn off the enhancement features, and no teacher could realistically prevent a kid from blinking and subvocalizing.

By the time Eva was seventeen it looked like she and the two other kids at her high school with luxury eye rentals would more or less have their choice among elite universities. I refused to believe the rumors about parents intentionally blinding their children so that they too could rent eyes.

When Eva turned twenty, all the updates -- not just the cheap ones -- required that you accept the "acceleration" technology. Companies contracted with Eye & Ear to privilege their messages and materials for faster visual processing. Pepsi paid a hundred million dollars a year so that users' eyes would prioritize resolving Pepsi cans and Pepsi symbols in the visual scene. Coca Cola cans and symbols were "deprioritized" and stayed blurry unless you focused on them for a few seconds. Loading stored images worked similarly. A remembered scene with a Pepsi bottle in it would load almost instantly. One with a Coke bottle would take longer and might start out fuzzy or fragmented.

Eye & Ear started to make glasses for the rest of us, which imitated some of the functions of the implants. Of course they were incredibly useful. Who wouldn't want to take snapshots, see in the dark, zoom into the distance, get internet search and tagging? We all rented whatever versions we could afford, signed the annual terms and conditions, received the updates. We wore them pretty much all day, even in the shower. The glasses beeped alarmingly whenever you took them off, unless you went through a complex shutdown sequence.

When the "Johnson for President" campaign bought an acceleration, the issue went all the way to the Supreme Court. Johnson's campaign had paid Eye & Ear to prioritize the perception of his face and deprioritize the perception of his opponent's face, prioritize the visual resolution and recall of his ads, deprioritize the resolution and recall of his opponent's ads. Eva was now a high-powered lawyer in a New York firm, on the fast track toward partner. She worked for the Johnson campaign, though I wouldn't have thought it was like her. Johnson was so authoritarian, shrill, and right-wing -- or at least it seemed so to me, when I took my glasses off.

Johnson favored immigration restrictions, and his opponent claimed (but never proved) that Eye & Ear implemented an algorithm that highlighted people's differences in skin tone -- making the lights a little lighter, the darks a little darker, the East Asians a bit yellow. Johnson won narrowly, before his opponent's suit about the acceleration had made it through the appeals process. It didn't hit the high court until a month after Johnson's inauguration. Eva helped prepare Johnson's defense. Eight of the nine justices were over eighty years old. They lived stretched lives with enhanced longevity and of course all the best implants. They heard the case through the very best ears.

---------------------------------

Related post:

Possible Cognitive and Cultural Effects of Video Lifelogging (Apr 21, 2016)

---------------------------------

Image source:

Photo: HAL 9000 resurrected by Ram Yoga, doubled.

Tuesday, October 04, 2016

French, German, Greek, Latin, but Not Arabic, Chinese, or Sanskrit?

[cross-posted at the Blog of the APA]

When I was a graduate student at Berkeley in the 1990s, philosophy PhD students were required to pass exams in two of the following four languages: French, German, Greek, or Latin. I already knew German. I argued that Spanish should count (I had read Unamuno in the original as an undergrad), but my petition was denied since I didn’t plan to do further work in Spanish. I argued that a psychological methods course would be more useful than a second foreign language, given that my dissertation was in philosophy of psychology, but that was not treated as a serious suggestion. I'd learned some classical Chinese, but I thought it would be pointless to attempt 600 characters in two hours as required (much more daunting than 600 words in a European language). So I crammed French for a few weeks and passed the exam.

I have recently become interested in mainstream Anglophone philosophers’ tendency to privilege certain languages and traditions in the history of philosophy. If we think globally, considering large, robust traditions of written work treating recognizably philosophical topics with argumentative sophistication and scholarly detail, it seems clear that at least Arabic, classical Chinese, and Sanskrit merit inclusion alongside French, German, Greek, and Latin as languages of major philosophical importance.

The exclusion of Arabic, Chinese, and Sanskrit from Berkeley’s standard language requirements could not, I think, have been mere ignorance. Rather, the focus on French, German, Greek, and Latin appeared to express a value judgment: that these four languages are more central to philosophy as it ought to be studied.

The language requirements of philosophy PhD programs have loosened over the years, but French, German, Latin, and Greek still form the core in departments that retain language requirements. Students therefore continue to receive the message that these languages are the most important ones for philosophers to know.

I examined the language requirements of a sample of PhD programs in the United States. Because of their sociological importance in the discipline, I started with the top twelve ranked programs in the Philosophical Gourmet Report. I then expanded the sample by considering a group of strong PhD programs that are not as sociologically central to the discipline -- the programs ranked 40-50 in the U.S.

Among the top twelve programs (corrections welcome):

* Four appeared to have no foreign language requirement (Michigan, NYU, Rutgers, Stanford).

* Seven (Berkeley, Columbia, Harvard, Pitt, UCLA, USC, Yale) had some version of a language requirement, requiring one of French, German, Greek, or Latin -- always exactly that list. Some programs explicitly allowed another language and/or another relevant research skill by petition or consultation.

* Only Princeton had a language requirement that did not appear to privilege French, German, Greek, and Latin. Princeton requires only a language “relevant to the student’s proposed course of study” (or alternatively “a unit of advanced work in another department” or “completion of an additional unit of work in any area of philosophy”).

You might think that, practically speaking, Arabic or classical Chinese would be a fine language to choose. Students can always petition; maybe such petitions are almost always granted. This response, however, ignores the fact that something is communicated by other languages’ non-inclusion on the privileged list. For a tendentious comparison -- maybe too tendentious! -- consider an admissions form that said “we admit men, but also women by petition”. One thing is treated as a norm and the other as an exception.

Interestingly, the PhD programs ranked less highly by the Philosophical Gourmet had more relaxed language requirements overall. In the 40-50 group, only two of the eleven mentioned a language requirement or list of languages. Still, the privileged languages were from the same set: “French, German, or other” at Saint Louis University, and optional certification in French, German, Greek, or Latin at Rochester.

I do not believe that we should be sending students the message that French, German, Greek, and Latin are more important than other languages in which there is a body of interesting philosophical work. It is too Eurocentric a vision of the history of philosophy. Let’s change this.

------------------------------------

Related Op-Eds:

What’s missing in college philosophy classes? Chinese philosophers (Schwitzgebel, Los Angeles Times, Sep 11, 2015)

If philosophy won’t diversify, let’s call it what it really is (Garfield and Van Norden, New York Times, May 11, 2016)

And on the opposite side:

Not all things wise and good are philosophy (Tampio, Aeon, Sep 13, 2016)

The image is, of course, from the Epic Rap Battle, Eastern vs Western Philosophers!

Wednesday, September 28, 2016

New Essay in Draft: The Pragmatic Metaphysics of Belief

Available here.

As always, comments and criticisms welcome, either by email to my address or in the comments section on this post.

Abstract:

Suppose someone intellectually assents to a proposition but fails to act and react generally as though that proposition is true. Does she believe the proposition? Intellectualist approaches will say she does believe it. They align belief with sincere, reflective judgment, downplaying the importance of habitual, spontaneous reaction and unreflective assumption. Broad-based approaches, which do not privilege the intellectual and reflective over the spontaneous and habitual in matters of belief, will refrain from ascribing belief or treat it as an intermediate case. Both views are viable, so it is open to us to choose which view to prefer on pragmatic grounds. I argue that since “belief” is a term of central importance in philosophy of mind, philosophy of action, and epistemology, we should use it to label the most important phenomenon in the vicinity that can plausibly answer to it. The most important phenomenon in the vicinity is not our patterns of intellectual endorsement but rather our overall lived patterns of action and reaction. Too intellectualist a view risks hiding the importance of lived behavior, especially when that behavior does not match our ideals and self-conception, inviting us to noxiously comfortable views of ourselves.

The Pragmatic Metaphysics of Belief (in draft)

(I'll be giving a version of this paper as a talk at USC on Friday, by the way.)

Related Posts:

On Being Blameworthy for Unwelcome Thoughts, Reactions, and Biases (Mar 19, 2015)

Against Intellectualism about Belief (Jul 31, 2015)

Pragmatic Metaphysics (Feb 11, 2016)

Monday, September 26, 2016

Cory Doctorow Speaking at UC Riverside: "1998 Called, and It Wants Its Stupid Internet Laws Back"

Come one, come all! (Well, for certain smallish values of "all".)

Cory Doctorow

"1998 Called, and It Wants Its Stupid Internet Laws Back"

Wednesday, September 28, 2016
3:10-5:00
INTS 1113

The topic will be digital rights management and companies' increasing tendency not to give us full control over the devices that matter to us, so that the devices can "legitimately" (?) thwart us when we give them orders contrary to the manufacturers' interests.

The Jerk Quiz: New York City Edition

Now that my Jerk Quiz has been picked up by The Sun and The Daily Mail, I've finally hit the big time! I'm definitely listing these as "reprints" on my c.v.

Philosopher James DiGiovanna suggested to me that the existing Jerk Quiz might not be valid in New York City, so I suggested he draw up a NYC version. Here's the result!

New York City Jerk Test

by James DiGiovanna

1. You have a fifteen-minute break from work, a desperate need for a cigarette, and a seven-minute-each-way walk to the bank on a very crowded sidewalk. Do you:
(a) Calmly walk the 14-minute round-trip, handling the cigarette cravings by reminding yourself that you only have a scant 7 more hours of work, a 49-minute commute on the crowded and probably non-functional F train, and then a brief walk through throngs of NYU students before you can reach your undersized apartment for a pleasant 4 minutes of smoking.
(b) Curse the existence of each probably mindless drone who stands between you and your goal.
(c) Find a narrow space just off the main thoroughfare and enjoy 5 quick drags meant to burn your entire cigarette down to the filter in under 30 seconds.
(d) Light up a cigarette as you walk, unconsciously assuming that others can dodge the flaming end and/or enjoy the smoking effluvia as they see fit, if indeed they have minds that can see anything at all.

2. You are waiting at the bodega to buy one measly cup of coffee, one of the few pleasures allowed to you in a world where the last tree is dying somewhere in what was probably a forest before Reagan was elected. However, there is a long line, including someone directly in front of you who is preparing to write a check in spite of the fact that this is the 21st century. You accidentally step on this person’s toe, causing him or her to move to the side, yelping in pain. Do you:
(a) Apologize profusely.
(b) Offer the standard “pardon me!” while wondering why check-writers were allowed to reproduce and create check-writing offspring at this late point in history.
(c) Say nothing, holding your precious place in line against the unhygienic swarm of lower lifeforms.
(d) Consider this foe vanquished and proceed to take his or her place as you march relentlessly towards the cashier.

3. You are in hell (midtown near Times Square) where an Eastern Hemisphere tourist unknowingly drops a wallet, and an elderly woman wanders out in front of a runaway hot dog stand, risking severe cholesterol and death. Do you:
(a) Shout to the Foreign Person while rushing to rescue the elderly woman.
(b) Ignore the neocolonialist tourist and his or her justifiable loss of money earned by exploiting the third world and attempt to save the woman because, my God, that could be you and/or your non-gender-specific life partner someday.
(c) Continue on your way because you have things to do.
(d) Yell so that others will see that there is a woman about to be hotdog-carted, assuming this will distract the crowd from the dropped wallet, making it easier for you to take it and run.

4. You have been waiting for the A train for 300 New York Minutes (i.e., five minutes in flyover-state time). Finally, it arrives, far too crowded to accept even a single additional passenger. Do you:
(a) Step out of the way so others can exit, allow those on the platform in front of you to enter the train, and then, if and only if there is ample room to enter without compressing other persons, board the train.
(b) Wait calmly, because when this happens, 9 times out of 10 an empty train is 1 minute behind.
(c) Mindlessly join the throngs of demi-humans desperately hoping to push their way into the car.
(d) Slide along the outside of the car to the spot just adjacent to the door, then slip into the narrow space made when a person who is clearly intending to get back in the car steps off to let a disembarking passenger pass.

5. It is a typical winter day in New York, meaning at the end of each sidewalk is a semi-frozen slush puddle of indeterminate depth. Perhaps it is barely deep enough to wet your boots; perhaps it drains directly into a C.H.U.D. settlement. You see a family: the father carrying a map and wearing a fanny pack, the mother holding a guide which says “Fodor’s New York för Nordmen,” the blindingly white children staring for the first time at buildings that are not part of a system of social welfare and frost. They absently march towards the end of the sidewalk, eyes raised towards New York’s imposing architecture, about to step into what could be their final ice bath. Do you:
(a) Yell at them to stop while you check the depth of the puddle for them.
(b) Block their passage and point to a shallower point of egress.
(c) Watch in amusement as they test the puddle depth for you.
(d) Push them into the puddle and use their frozen bodies as a bridge to freedom.

-----------------------------------------

(I interpret James's quiz as a commentary on how difficult it is, even for characterological non-jerks, to avoid jerk-like behaviors or thoughts in that kind of urban context.)

For more on Jerks see:

A Theory of Jerks

How to Tell If You're A Jerk