Wednesday, February 28, 2018

It's Not Just One Thing, to Believe There's a Gas Station on the Corner

A couple of weekends ago, at the fabulous conference on the nature of belief at UC Santa Barbara, Elisabeth Camp said something that kind of bugged me. It wasn't her main point, and lots of philosophers say similar things, so I don't mean to criticize Camp specifically. Philosophers who say this sort of thing usually treat it as an obvious background assumption -- and in so doing, they reveal that they have a fundamentally different picture than I do of how the mind works.

I'm not going to be able to quote Camp exactly, but the idea was roughly this: If you and I both believe that there's a gas station on the corner, then we believe exactly the same thing. That is, we believe that there is a gas station on the corner. Furthermore, this thing that we both believe is exactly the same thing that I convey to Nicholle when I then turn to her and say, while you nod, "There is a gas station on the corner."

Of course, we might disagree about some related propositions. Maybe I think it's a Shell and you think it's a Chevron. Maybe I think that the corner is next to the freeway onramp and you think it's two blocks from the onramp. But still, we are in perfect agreement about the particular proposition that there is a gas station on the corner.

Suppose that Nicholle is new to town and almost out of gas, and that's why the topic arose. Now, plausibly, I might almost as easily have said "There's a gas station over by the onramp" or "There's a Shell on the corner" or "There's a station -- maybe a Shell? -- just over that way [pointing]" or "I think you can get gas just down that road a bit" or.... Quite a few things, some of which you would have agreed with, some you would have disagreed with ("There's a Shell on the corner"), and still others that would have been awkward in your mouth but with which you might not have exactly disagreed (... "maybe a Shell"...).

On one view of the mind and of communication -- the view that seems to me implicit in Camp's remark -- there's a fact of the matter exactly which propositions about this topic I believe and which I don't (and maybe which I have some intermediate credence in). All of the propositions that I might naturally have said (barring exaggeration, metaphor, misstep, etc.) are among the things I believe. Quite a large number! Somewhere on this large list is exactly the belief "there is a gas station on the corner"; and since that's what I ended up saying, that's the content I conveyed to Nicholle; and you, Nicholle, and I now share precisely this belief.

One challenge for this picture, and maybe the first source of trouble, is determining which propositions should go on the "yes I believe it" list. It might have been inept of me, but not beyond the pale, to say "Probably there's either a Shell or a 76, like, near that big blue building and the colorful sign with the flowers or is it cactus, you know, after that underpass we took on the way into campus, I think, if you were with us then?" Is that among the propositions I believe? It gets even worse if we forget conversational pragmatics and focus simply on cognition or philosophy of mind. Do I believe that there's a gas station less than 1 mile away, and less than 1.1 miles away, and less than 1.2 miles away, and... closer than Los Angeles and closer than where Saul Kripke probably lives, and farther than the parking lot, and farther than where Brianna (or was it Jordy?) accidentally left her laptop.... It looks like the list of propositions that I believe might be infinite and difficult to determine the boundaries of.

A second challenge is specifying the content of this proposition "there's a gas station on the corner" which you, Nicholle, and I supposedly all believe. Which states of affairs are such that this belief is true and which are such that it's false? What counts as a "gas station"? (What if there's a 7-11 with a single pump, or a single broken pump, or a single pump that is closed for construction, or a guy who illegally sells gas from a parked tanker truck?) Which possible layouts of nearby streets are such that "the corner" determinately refers to the corner on which the gas station is located? (What if one of us is wrong about the direction of the corner relative to us and the freeway? What if there are two gas stations and three corners?) If you, Nicholle, and I disagree about what would count as there being a gas station on the corner, do we not in fact have the same belief? Or do we still have exactly the same belief but we're somehow wrong in our understanding of its satisfaction conditions?

A fine mess! Now I don't really mind messes. It's a messy world. But our messy world demands a messy philosophy of mind and a messy philosophy of language -- and the fundamental problem with the view I'm criticizing is that it demands cleanliness: You and I believe exactly these things and exactly not these others; belief contents are precise, and shared, and delivered in sentence-sized packages.

So here's the alternative picture I prefer. You and I both have a wide range of dispositions to act, and reflect, and say things aloud to each other and silently to ourselves, concerning the presence or absence or location of that gas station or that Shell or whatever it is. For example, I'm disposed not to worry about whether it's possible to get gas nearby if the question arises; I feel like I know there's gas nearby. Also, I'm disposed to say "yes" if someone asks if there's a Shell around. If I wanted gas, I would drive a certain direction, and I'd feel surprise if it turned out badly. I'm disposed to form certain visual imagery if asked about nearby gas stations, etc., etc. You have a partially overlapping, partially divergent set of dispositions. Like me, you are disposed to drive roughly that direction if you want to get gas; unlike me, you'd be surprised by the Shell sign.

Of course, I can't convey to Nicholle in three seconds this whole chaotic tangle which constitutes my belief state concerning that Shell station -- so I choose some simplification of it to put into words, pragmatically focused on what I think is the most relevant thing: "There's a gas station on the corner". This isn't *exactly* what I believe. Exactly what I believe is immensely complex, maybe infinitely so, and certainly beyond my ability to shape into a shareable linguistic package.

You and I "share" the belief about the gas station in roughly the same way you and I might both share a personality trait. Maybe we're both extraverted. But of course the exact shape of our extraversion differs in detail: We'd say yes to a somewhat different range of party invitations, we'd be gregarious and talkative in somewhat different ranges of situations. To say we're both "extraverted" is blunt tool to get at a much more complex phenomenon beneath; but good enough for practical purposes, if we don't care about a high degree of precision.

ETA 1:55 PM, Feb. 28:

Elisabeth Camp and I had an email exchange about this post, which with her permission I've added to The Underblog here.

----------------------------------------

Related:

"A Dispositional Approach to Attitudes: Thinking Outside of the Belief Box", in N. Nottelmann, ed., New Essays on Belief (Palgrave, 2013).

"A Phenomenal, Dispositional Account of Belief", Nous, 36 (2002) 249-275.

[image source]

Thursday, February 22, 2018

Why Moral and Philosophical Disagreements Are Especially Fertile Grounds for Rationalization (with Jon Ellis)

(with Jonathan E. Ellis; originally appeared at the Imperfect Cognitions blog)

Last week we argued that your intelligence, vigilance, and academic expertise very likely don't do much to protect you from the normal human tendency towards rationalization – that is, from the tendency to engage in biased patterns of reasoning aimed at justifying conclusions to which you are attracted for selfish or other epistemically irrelevant reasons – and that, in fact, you may be more susceptible to rationalization than the rest of the population. This week we’ll argue that moral and philosophical topics are especially fertile grounds for rationalization.

Here’s one way of thinking about it: Rationalization, like crime, requires a motive and an opportunity. Ethics and philosophy provide plenty of both.

Regarding motive: Not everyone cares about every moral and philosophical issue, of course. But we all have some moral and philosophical issues that are near to our hearts – for reasons of cultural or religious identity, or personal self-conception, or for self-serving reasons, or because it’s comfortable, exciting, or otherwise appealing to see the world in a certain way.

On day one of their philosophy classes, students are often already attracted to certain types of views and repulsed by others. They like the traditional and conservative, or they prefer the rebellious and exploratory; they like confirmations of certainty and order, or they prefer the chaotic and skeptical; they like moderation and common sense, or they prefer the excitement of the radical and unintuitive. Some positions fit with their pre-existing cultural and political identities better than others. Some positions are favored by their teachers and elders – and that’s attractive to some, and provokes rebellious contrarianism in others. Some moral conclusions may be attractively convenient, while others might require unpleasant contrition or behavior change.

The motive is there. So is the opportunity. Philosophical and moral questions rarely admit of straightforward proof or refutation, or a clear standard of correctness. Instead, they open into a complexity of considerations, which themselves do not admit of straightforward proof and which offer many loci for rationalization.

These loci are so plentiful and diverse! Moral and philosophical arguments, for instance, often turn crucially on a “sense of plausibility” (Kornblith, 1999); or on one’s judgment of the force of a particular reason, or the significance of a consideration. Methodological judgments are likewise fundamental in philosophical and moral thinking: What argumentative tacks should you first explore? How much critical attention should you pay to your pre-theoretic beliefs, and their sources, and which ones, in which respects? How much should you trust your intuitive judgments versus more explicitly reasoned responses? Which other philosophers, and which scientists (if any), should you regard as authorities whose judgments carry weight with you, and on which topics, and how much?

These questions are usually answered only implicitly, revealed in your choices about what to believe and what to doubt, what to read, what to take seriously and what to set aside. Even where they are answered explicitly, they lack a clear set of criteria by which to answer them definitively. And so, if people’s preferences can influence their perceptual judgments (including possibly of size, color, and distance: Balcetis and Dunning 2006, 2007, 2010), what is remembered (Kunda 1990; Mele 2001), what hypotheses are envisioned (Trope and Liberman 1997), and what one attends to and for how long (Lord et al. 1979; Nickerson 1998), it is no leap to assume that they can influence the myriad implicit judgments, intuitions, and choices involved in moral and philosophical reasoning.

Furthermore, patterns of bias can compound across several questions, so that with many loci for bias to enter, the person who is only slightly biased in each of a variety of junctures in a line of reasoning can ultimately come to a very different conclusion than would someone who was not biased in the same way. Rationalization can operate by way of a series or network of “micro-instances” of motivated reasoning that together have a major amplificatory effect (synchronically, diachronically, or both), or by influencing you mightily at a crucial step (Ellis, manuscript).

We believe that these considerations, taken together with the considerations we advanced last week about the likely inability of intelligence, vigilance, and expertise to effectively protect us against rationalization, support the following conclusion: Few if any of us should confidently maintain that our moral and philosophical reasoning is not substantially tainted by significant, epistemically troubling degrees of rationalization. This is of course one possible explanation of the seeming intractability of philosophical disagreement.

Or perhaps we the authors of the post are the ones rationalizing; perhaps we are, for some reason, drawn toward a certain type of pessimism about the rationality of philosophers, and we have sought and evaluated evidence and arguments toward this conclusion in a badly biased manner? Um…. No way. We have reviewed our reasoning and are sure that we were not affected by our preferences....

For our full-length paper on this topic, see here.

Wednesday, February 21, 2018

Rationalization: Why Your Intelligence, Vigilance and Expertise Probably Don't Protect You (with Jon Ellis)

(with Jonathan E. Ellis; originally appeared at the Imperfect Cognitions blog)

We’ve all been there. You’re arguing with someone – about politics, or a policy at work, or about whose turn it is to do the dishes – and they keep finding all kinds of self-serving justifications for their view. When one of their arguments is defeated, rather than rethinking their position they just leap to another argument, then maybe another. They’re rationalizing – coming up with convenient defenses for what they want to believe, rather than responding even-handedly to the points you're making. You try to point it out, but they deny it, and dig in more.

More formally, in recent work we have defined rationalization as what occurs when a person favors a particular view as a result of some factor (such as self-interest) that is of little justificatory epistemic relevance, and then engages in a biased search for and evaluation of justifications that would seem to support that favored view.

You, of course, never rationalize in this way! Or, rather, it doesn’t usually feel like you do. Stepping back, you’ll probably admit you do it sometimes. But maybe less than average? After all, you’re a philosopher, a psychologist, an expert in reasoning – or at least someone who reads blog posts about philosophy, psychology, and reasoning. You're especially committed to the promotion of critical thinking and fair-minded reasoning. You know about all sorts of common fallacies, and especially rationalization, and are on guard for them in your own thinking. Don't these facts about you make you less susceptible to rationalization than people with less academic intelligence, vigilance, and expertise?

We argue that the answer is no. You’re probably just as susceptible to post-hoc rationalization as the rest of the population, and maybe even more so, though the ways it manifests in your reasoning may be different. Vigilance, academic intelligence, and disciplinary expertise are not overall protective against rationalization. In some cases, they might even enhance one’s tendency to rationalize, or make rationalizations more severe when they occur.

While some biases are less prevalent among those who score high on standard measures of academic intelligence, others appear to be no less frequent or powerful. Stanovich, West and Toplak (2013), reviewing several studies, find that the degree of myside bias is largely independent of measures of intelligence and cognitive ability. Dan Kahan finds that on several measures people who use more “System 2” type explicit reasoning show higher rates of motivated cognition rather than lower rates (Kahan 2011, 2013; Kahan et al. 2011). Thinkers who are more knowledgeable have more facts to choose from when constructing a line of motivated reasoning (Taber and Lodge 2006; Braman 2009).

Nor does disciplinary expertise appear to be protective. For instance, Schwitzgebel and Cushman (2012, 2015) presented moral dilemma scenarios to professional philosophers and comparison groups of non-philosophers, followed by the opportunity to endorse or reject various moral principles. Professional philosophers were just as prone to irrational order effects and framing effects as were the other groups, and were also at least as likely to “rationalize” their manipulated scenario judgments by appealing to principles post-hoc in a way that would render those manipulated judgments rational.

Furthermore, since the mechanisms responsible for rationalization are largely non-conscious, vigilant introspection is not liable to reveal to the introspector that rationalization has occurred. This may be one reason for the “bias blind spot”: People tend to regard themselves as less biased than others, sometimes even exhibiting more bias by objective measures the less biased they believe themselves to be (Pronin, Gilovich and Ross 2004; Uhlmann and Cohen 2005). Indeed, efforts to reduce bias and be vigilant can amplify bias. You examine your reasoning for bias, find no bias because of your bias blind spot, and then inflate your confidence that your reasoning is not biased: “I really am being completely objective and reasonable!” (as suggested in Ehrlinger, Gilovich and Ross 2005). People with high estimates of their objectivity might also be less likely to take protective measures against bias (Scopelliti et al. 2015).

Partisan reasoning can be invisible to vigilant introspection for another reason: it need not occur in one fell swoop, at a sole moment or a particular inference. Rather, it can be the result of a series or network of “micro-instances” of motivated reasoning (Ellis, manuscript). Celebrated cases of motivated reasoning typically involve a person whose evidence clearly points to one thing (that it’s their turn, not yours, to do the dishes) but who believes the very opposite (that it’s your turn). But motives can have much subtler consequences.

Many judgments admit of degrees, and motives can have impacts of small degree. They can affect the likelihood you assign to an outcome, or the confidence you place in a belief, or the reliability you attribute to a source of information, or the threshold for cognitive action (e.g., what would trigger your pursuit of an objection). They can affect these things in large or very small ways.

Such micro-instances (you might call them motivated reasoning lite) can have significant amplificatory effects. This can happen over time, in a linear fashion. Or it can happen synchronically, spread over lots of assumptions, presuppositions, and dispositions. Or both. If introspection doesn't reveal motivated reasoning that happens in one fell swoop, micro-instances are liable to be even more elusive.

This is another reason for the sobering fact that well-meaning epistemic vigilance cannot be trusted to preempt or detect rationalization. Indeed, people who care most about reasoning, or who have a high “need for cognition”, or who attend to their cognitions most responsibly, may be the most impacted of all. Their learned ability to avoid the more obvious types of reasoning errors may naturally come with cognitive tools that enable more sophisticated, but still unnoticed, rationalization.

Coming tomorrow: Why Moral and Philosophical Disagreements Are Especially Fertile Grounds for Rationalization.

Full length article on the topic here.

Wednesday, February 14, 2018

Philosophy Relies on Those Double Majors

Happy Valentine's Day! Because I love you, here are some statistics.

Following a suggestion by Eric Winsberg, I decided to look at the IPEDS database from the National Center for Education Statistics to see how commonly Philosophy is chosen as a second major, compared to as a first major, among Bachelor's degree recipients in the United States.

I examined data from all U.S. institutions in the database, over the three most recent available academic years (2013-2014, 2014-2015, and 2015-2016), sorting completed majors by the IPEDS top-level major categories (2010 CIP two-digit classification), except replacing 38 "Philosophy and Religious Studies" with 38.01 "Philosophy". Separately I downloaded the grand totals of completed first and second majors across the U.S.

In all, across all institutions, 5,680,665 students completed a first major. Of these, 289,639 (5.1%) also completed a second major. Not many students earn second majors!

However, the ratios vary greatly by discipline. In all, 24,542 students earned a Philosophy major, of which 5,015 (20.4%) earned it as a second major. At a minimum, then, 20% of Philosophy majors are double majors. If half of those double majors choose to list Philosophy as their first major, then 40% of Philosophy majors in the U.S. are double majors. Unfortunately, the NCES database doesn't allow us to see how many of the people with first majors in Philosophy also had second majors in something else. Forty percent might be too high an estimate, if double majors who have Philosophy as one of their majors disproportionately list Philosophy as their second major. But even if 30%, rather than 40%, of Philosophy majors carry Philosophy along with some other major, that is still a substantial proportion.

Here's another way of looking at the data: 0.3% of students choose Philosophy as a first major, while among those who decide to take a second major, 1.7% choose Philosophy.
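If you want to check the arithmetic, here's a minimal Python sketch using only the totals reported above. (The 50/50 listing-order split in the middle is the same speculative assumption as two paragraphs back, not something the data establish.)

    # Totals reported above: all U.S. institutions, 2013-14 through 2015-16
    first_majors_total = 5_680_665
    second_majors_total = 289_639
    phil_total, phil_second = 24_542, 5_015
    phil_first = phil_total - phil_second

    print(f"{second_majors_total / first_majors_total:.1%} of students completed a second major")
    print(f"{phil_second / phil_total:.1%} of Philosophy majors earned it as a second major")

    # Speculative 50/50 assumption: if as many double majors list Philosophy
    # first as list it second, the 5,015 observed second-major listings are
    # only half of all Philosophy double majors (~41%, rounded to 40% above).
    print(f"Estimated double-major share: {2 * phil_second / phil_total:.0%}")

    print(f"{phil_first / first_majors_total:.1%} of first majors are in Philosophy")
    print(f"{phil_second / second_majors_total:.1%} of second majors are in Philosophy")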

Across all 37 top-level categories of major (excluding from analysis the top-level category Philosophy & Religious Studies), only two had a higher percentage of students completing the major as a second major: Foreign Languages, Literatures, and Linguistics (28.5%), and Area, Ethnic, Cultural, Gender and Group Studies (26.2%). Let's call this the Second Major Percentage. In this respect, Philosophy is quite different from two of the other big humanities majors with which Philosophy is often compared: English Language and Literature (7.2%) and History (9.3%). Interestingly, Mathematics, which is often regarded as a very different type of major, ranks fourth, with a Second Major Percentage of 14.4%.

Here's a breakdown among all top-level majors with at least 10,000 completed degrees in the three-year period:


[Chart: Second Major Percentage by top-level major category. Correction: the second "Engineering Technologies" in the chart should read simply "Engineering".]

Last fall, I presented data showing the precipitous decline in Philosophy majors in the past few years -- from 0.58% of all graduates to 0.39% of all graduates. One hypothesis, which I thought worth considering, is that Philosophy tends disproportionately to rely on double majors, and it is increasingly difficult for students to earn a second major. However, I now think that a decline in second majors can't be the primary explanation for the decline of the Philosophy major. First, although the percentage of students earning a second major has declined somewhat in the period -- from 5.4% in 2009-2010 to 5.1% in 2015-2016 -- that is small compared to the magnitude of the decline in Philosophy, where completions are down by 19% in absolute numbers and about a third in relative percentages. Second, we saw similar declines in English and History, and those disciplines don't appear to be as reliant on students' declaring a second major.
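To make the absolute-vs-relative comparison concrete, here's a quick back-of-the-envelope sketch. The 0.58% and 0.39% shares come from the earlier post; the implied growth in total graduates is an inference from the reported figures, not a number from the database.

    # Philosophy's share of all graduates, from the earlier post
    share_2010, share_2016 = 0.58, 0.39
    print(f"Relative decline in share: {1 - share_2016 / share_2010:.0%}")  # 33%, "about a third"

    # A 19% drop in absolute completions (factor 0.81) alongside a ~33% drop
    # in share implies total Bachelor's degrees grew by roughly 20%:
    print(f"Implied growth in total graduates: {0.81 / (share_2016 / share_2010) - 1:.0%}")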

That said, Philosophy does appear to rely heavily on double majors, so we might expect policies that reduce the likelihood of double majoring to disproportionately harm Philosophy programs. Such policies might include increasing the requirements for other popular majors, increasing general education requirements (apart from G.E. requirements in Philosophy, of course), and increasing pressure to complete the degree quickly.

To further explore the question, I divided the colleges and universities into two categories: those with a high rate of double majoring (>= 10% of graduating students have two majors) vs. those with a low-to-medium rate of double majoring (< 10%), excluding institutions with < 300 Bachelor's degree recipients over the three-year period. As expected, at institutions where students commonly complete second majors, about three times as high a percentage of students completed Philosophy degrees as at institutions where students less commonly complete second majors: 9240/935419 (1.0%) vs. 14876/4656563 (0.3%; p << .001).
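There are several tests one could use here; a pooled two-proportion z-test is one natural choice, and, as a minimal sketch using only the counts above, it comfortably agrees with that p-value:

    from math import sqrt
    from scipy.stats import norm

    # Philosophy completions at high- vs. low/medium-double-major institutions
    k1, n1 = 9240, 935419      # high-double-major schools (~1.0%)
    k2, n2 = 14876, 4656563    # low/medium-double-major schools (~0.3%)

    p1, p2 = k1 / n1, k2 / n2
    p_pool = (k1 + k2) / (n1 + n2)                        # pooled proportion under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))  # standard error of the difference
    z = (p1 - p2) / se                                    # z is roughly 90 here
    print(f"{p1:.2%} vs. {p2:.2%}, z = {z:.0f}, p = {2 * norm.sf(abs(z)):.1g}")  # p underflows to 0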

Direction of causation is of course hard to know. These groups of institutions will differ in many other respects too, and that is very likely to be part of the causal story, maybe most of the causal story. And yet, the following fact is suggestive: The four majors that differ most in percentage between the two groups of universities -- as measured by the ratio of the percentage of students completing the major at the high-double-major schools to the percentage completing it at the low-double-major schools -- are exactly the same four majors that have the highest rate of second majors in the overall dataset.

That's a little abstractly put, so let me give you the breakdown for the highest-ratio majors, so you can see where this is coming from:

Area, Ethnic, Cultural, Gender and Group Studies: 1.5% of majors at high-double-major schools vs. 0.4% at low-double-major schools, ratio 4.1
Philosophy: 1.0% vs. 0.3%, ratio 3.1
Foreign Languages and Literatures: 3.2% vs. 1.1%, ratio 3.0
Mathematics: 2.8% vs. 1.1%, ratio 2.7

I need a good name for this ratio, but I can't think of one, so let's just call it The Ratio. Of course, the median Ratio is about 1.0. History and English both have Ratios of 1.9, which is substantially above 1.0, but not as high as these other four.

In other words, perhaps unsurprisingly, the four majors that students are disproportionately most likely to declare as second majors are exactly the same four majors that show the greatest difference in completions between schools where lots of students have double majors and schools where few do.

With a few exceptions, most notably Biology (a Second Major Percentage of only 2.7%, but a Ratio of 1.7), the relationship is reasonably tidy. The correlation between the natural log of each major's Ratio and each major's Second Major Percentage is r = .76 (p < .001; excluding majors with < 1000 completions in either group of universities; natural log to improve spread near zero). Here it is as a scatterplot:


[Scatterplot: each major's Second Major Percentage vs. the natural log of its Ratio]
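For illustration, here's how such a correlation can be computed. This sketch is seeded only with the handful of (Second Major Percentage, Ratio) pairs reported in this post, so the r from these seven rows won't match the full 37-category r = .76:

    import numpy as np
    from scipy.stats import pearsonr

    # (Second Major Percentage, The Ratio) -- just the pairs reported above
    majors = {
        "Area, Ethnic, Cultural, Gender and Group Studies": (26.2, 4.1),
        "Philosophy": (20.4, 3.1),
        "Foreign Languages and Literatures": (28.5, 3.0),
        "Mathematics": (14.4, 2.7),
        "History": (9.3, 1.9),
        "English": (7.2, 1.9),
        "Biology": (2.7, 1.7),
    }
    smp = np.array([v[0] for v in majors.values()])
    log_ratio = np.log([v[1] for v in majors.values()])  # natural log, as in the post

    r, p = pearsonr(log_ratio, smp)
    print(f"r = {r:.2f} (n = {len(majors)} majors)")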

So although it seems unlikely that the recent sharp decline in Philosophy majors is due primarily to a decline in the overall proportion of students declaring two majors, it remains plausible that conditions that are good for double-majoring in general may be good for the Philosophy major in particular.

Related posts:

Sharp Declines in Philosophy, History, and Language Majors Since 2010 (Dec 14, 2017)

Philosophy Undergraduate Majors Aren't Very Black, but Neither Are They as White as You Might Have Thought (Dec 21, 2017)

Women Have Been Earning 30-34% of Philosophy BAs in the U.S. Since Approximately Forever* (Dec 8, 2017).

Thursday, February 08, 2018

Is Consciousness Sparse or Abundant? Five Dimensions of Analysis

Consciousness -- that is, "phenomenal" consciousness, the stream of experience, subjective experience, "what-it's-like"-ness -- might be sparse, or it might be abundant. There might be lots of it, or there might be very little. This might be so along at least five different dimensions of analysis.

Entity sparseness vs. entity abundance. Consciousness might be sparse in the sense that few entities in the universe possess it, or it might be abundant in the sense that many entities in the universe possess it. Someone who thinks that consciousness is entity-abundant might think that insects are conscious, and snails, maybe earthworms -- all kinds of entities with sensory systems. They might also think that computers could soon have conscious experiences, if designed in the right way. At the extreme, they might accept panpsychism -- the view that every entity in the universe is conscious, even isolated hydrogen ions in outer space. In contrast, someone who thinks that consciousness is entity-sparse is much more conservative, seeing consciousness on Earth as limited, for example, only to cognitively sophisticated mammals and birds, or in the extreme only to adult human beings with sophisticated self-reflective powers.

State sparseness vs. state abundance. An entity who is conscious might be conscious all the time or only once in a while. Someone who thinks that consciousness is state abundant might think that even when we aren't in REM sleep we have dreams or dreamlike experiences or at least experiences of some sort. They might think that when we're driving absent-mindedly and can't remember a thing, we don't really blank out completely. In contrast, someone who thinks that consciousness is state sparse would hold that we are often not conscious at all. Consciousness might disappear entirely during long periods of dreamless sleep, or during habitual activity, or maybe during "flow" states of skillful activity. Someone who holds to extreme state sparseness might say that we are almost never actually phenomenally conscious, except in rare moments of explicit self-reflection.

Modality sparseness vs. modality abundance. An entity who is currently state conscious might be conscious in lots of modalities at once or in only one or few modalities at a time. For example, a minute ago, as you were reading the previous paragraph, did you have visual experience of the computer screen? Did you also have auditory experience of the hum of the fan in your room (to which I assume you weren't attending)? Did you have tactile experience of your feet in your shoes? Did you hear the words of that paragraph in inner speech? Did you have relevant visual imagery? Were you also simultaneously having emotional experience, a lingering feeling of hunger, etc., etc.? Someone who accepts modality abundance thinks that we have many types or modalities of experience ongoing most of the time when we are conscious. In contrast, someone who accepts modality sparseness thinks that we have only one or a few modalities of experience at any one time -- for example, no experience at all of the feeling of your feet in your shoes when your attention is elsewhere.

Modality width. Within a modality that is currently conscious in an entity at a time, the stream of experience might be broad or it might be narrow. Consider visual experience as you are reading. Do you have visual experience only of a few degrees of visual arc, whatever is near the center of your attention? Or do you have visual experience all the way out to the edge, including the rim of your computer screen, the rims of your glasses, the tip of your nose, almost 180 degrees of arc? Or somewhere in between -- maybe the whole computer screen and its rim, but little beyond that? Someone with a sparse view about visual modal width thinks that visual experience is usually (maybe until you think to attend to the periphery) only of a few degrees of arc. Someone with an abundant view might think that we basically experience the full 180 degrees of the visual field all the time when we have any visual experience at all. Analogous issues arise for other modalities. Do you have auditory experience only of the conversation to which you are attending, or also of the background noise? Is your visual imagery sketchily minimal or full of detail? Is your emotional experience a minimal valence and label or is it a very specifically embodied feeling?

Property sparseness vs. property abundance. Consider visual experience again. You are looking at a tree. According to one view, all you visually experience are low-level properties like color, shape, and orientation. You know it's a tree, but the "treeness" of it isn't part of your visual experience. (Maybe you have a cognitive experience of thinking "tree!" but that's a different modality.) According to another view, your visual experience is not just of simple low-level properties but instead of a wealth of properties. It's part of your visual experience, for example, that it's a tree, and that it looks ancient, and that it invites climbing, and that the leaves look about ready to fall.

Philosophers and consciousness scientists have recently been arguing in all kinds of interesting ways about various of these issues in isolation, but I can't recall seeing a good structuring of the landscape of options -- one that captures the many different mix-and-match possibilities while also recognizing that all of the "abundant" views have something in common (consciousness is widespread and multifarious; it's easy to generate lots of types of it) and all of the "sparse" views have something in common (consciousness is rare and limited in its forms).

[image source]

Friday, February 02, 2018

Designing AI with Rights, Consciousness, Self-Respect, and Freedom (with Mara Garza)

New paper in draft!

Abstract:

We propose four policies of ethical design of human-grade Artificial Intelligence. Two of our policies are precautionary. Given substantial uncertainty both about ethical theory and about the conditions under which AI would have conscious experiences, we should be cautious in our handling of cases where different moral theories or different theories of consciousness would produce very different ethical recommendations. Two of our policies concern respect and freedom. If we design AI that deserves moral consideration equivalent to that of human beings, that AI should be designed with self-respect and with the freedom to explore values other than those we might selfishly impose. We are especially concerned about the temptation to create human-grade AI pre-installed with the desire to cheerfully sacrifice itself for its creators’ benefit.

Full version here. As always, comments and critical discussion welcome, either by email to me or as comments on this post.

[image source]