Thursday, February 25, 2016

Genuine Philosophical Dialogues

There's a possible format for written philosophy that, despite great potential, is almost non-existent. I'll call it genuine philosophical dialogue.

Here's how I imagine it.

It's a dialogue between two (maybe three or four) philosophers who respect each other as equals and who are fully committed to the process. Not, for example, Socrates and some passersby. Not a sage answering questions from his students. Not Berkeley's Philonous or Hume's Philo speaking for the author against manufactured opponents.

The philosophers enter the dialogue with a substantial disagreement. They aim to actually learn from each other -- exposing themselves to the possibility of changing their minds, or at least figuring out what common ground they share, aiming to find the root sources of their disagreement.

There are many conversational turns. Not just essay, objection, reply. At least 25 conversational turns. Time enough to go somewhere, shift, make progress, get past the well-rehearsed objection and response, interrupt each other for clarifications, then push back against the attempted clarifications, then push back against the pushback, then return to the main thread....

Both philosophers' positions are genuinely their own -- something they are invested in and expert about -- real, sophisticated philosophical positions, worth taking seriously. Not a novice's first impressions. Not a straw man. Not a cartoon.

The participants aim to present their real reasons for thinking what they do -- not the lawyerly or debate-team approach of reaching for any argument that might seem convincing. The participants expose to critique not only their positions but their best understanding of their own underlying reasoning and motivations.

Each philosopher takes responsibility for revising the whole dialogue, including the words of the other. This is one advantage of written format over live dialogue. If A and B agree that some thread was a false start, they can cut it away. If B sees a striking presupposition in what A says, B can point that out and then either trim it away in favor of more agreeable shared language if both agree that the presupposition is inessential, or bring the presupposition more explicitly forward if it looks like the disagreement might partly turn upon it. If A thinks B's point could be more clearly made in a different way, A could revise B's words and see if B agrees. The aim of these mutual revisions would be twofold: (1) for each to aid the other in presenting each position in its best light; (2) to avoid wasting the reader's time with distractions and confusions.


Russ Hurlburt and I, in our 2007 book, aimed to have a dialogue of this type, as we did also in some subsequent exchanges (though Hurlburt's academic affiliation is psychology, not philosophy). I am unaware of other examples of philosophical dialogues that meet all of the above criteria, though surely such dialogues must exist. Examples welcomed!

Genuine philosophical dialogues of this sort would, I think, tend to decrease misunderstanding, strawmanning, distraction into side issues that aren't the heart of the matter, boxing each other into unrepresentative and unflattering "isms", talking past each other, the general use (intentional or not) of unhelpful rhetorical moves, and excessive reliance on idiosyncratic jargon.

What if Kant and Hume had tried to build one of these dialogues together? It's almost unimaginable that they would try (even bracketing their language differences), but fascinating to contemplate what might have arisen, could they have pulled it off. Or consider 21st-century philosophers who prominently disagree -- it could be fascinating to see what common ground they would find, and where they would finally locate the heart of their disagreement after an extended exchange of this sort.

[image source]

Thursday, February 18, 2016

Phenomenal Consciousness, Conceptualized as Innocently as I Can Manage

I've been reading Keith Frankish recently. For example, this. Frankish appears to deny the reality of phenomenal consciousness, a.k.a. "qualia" or "what-it's-like-ness".

"Phenomenal consciousness" does sound like a bit of a suspicious concept. The terminology is technical and recent, for one thing. That invites the idea that phenomenal consciousness is a new concept invented by philosophers, and culturally specific. And that in turn invites the idea that in talking about it, we're talking about some odd sort of theoretical concoction, not a foundationally important aspect of human life.

Furthermore, philosophers wax oddly mysterious when they talk about it. Sometimes they say that it can't be defined, only gestured at or expressed via synonyms. Ned Block, borrowing a phrase about jazz from Louis Armstrong, tells us ("only half in jest"): "If you got to ask, you ain't never gonna get to know" (1978/1983, p. 241).

Despite all this air of mystery, I think the idea of phenomenal consciousness is simple and obvious, once you stop to think about it -- and probably would be so across cultures (though I'll admit that's speculative).

You have sensory experiences. Of course you do. Consider some vivid and obvious ones. Maybe you have the experience of seeing a computer screen. Maybe you have the experience of hearing someone making noise down the hall. Maybe you have the experience of the taste of coffee lingering in your mouth.

You have imagery experiences too. Imagine your house, as viewed from the street, if you can. Or think of the melody of Beethoven's Fifth ("da da da DA, da da da DA"). Or say this very sentence silently to yourself in inner speech.

You've had emotional experience also, I'm sure -- panics of fear, thrills of joy, the quiet pleasure of mellow contentment.

This isn't necessarily an exhaustive list of types of experience, but I think this is enough to give you the idea. There's something that sensory experiences, imagery experiences, and emotional experiences have in common. They're experiences. Dream experiences have this too (even though in some sense you're not "conscious"). At the same time, there are other things going on inside of you that you don't in the same way experience -- immune system response, for example, or the processes by which your fingernails grow. There are also other facts about your mind which you don't normally experience, such as your propensity to say "blue" when asked your favorite color (and let's assume that you aren't being asked right now), or your general, not-currently-relevant preference for vanilla over chocolate. Sensory, imagery, and emotional experiences have something obvious in common, which makes them different from all these other things. The term "phenomenally conscious" refers to that obvious feature they have in common. When you try to describe these experiences to someone else, there are facts about what those experiences are, facts that you are trying to get right -- facts that you might feel like you are struggling to put into words.

Neologism is helpful here not, I think, because the concept is new or strange but rather because folk usage is messy. "Experience" can mean that you've acquired some skills or undergone some events in the past, as in the phrase "I have some teaching experience". The words "consciousness" and "awareness" invite trouble if we hear too much of an epistemic dimension in them: To say that one is conscious of something or aware of something seems to imply that you know something about it, but we might not want to build any suggestions of transitive knowledge into our concept of consciousness. Personally, I like the phrases "conscious experience" and "stream of experience". I think those phrases convey the idea well enough. But for extra clarity, among those philosophers who like to make fine distinctions, the phrase "phenomenal consciousness" also serves.

Frankish suggests, in his 2012 article linked above (I'm not sure he's still committed to this), that demonstrative definitions of phenomenal consciousness by means of example -- that is, definitions of the sort I've just tried -- must fail because philosophers have different accounts of the underlying nature of such things (for example, some philosophers think we are directly aware of properties of our experiences, while others think that at least in sensory experience we are instead directly aware only of the properties of outward objects). But as Frankish acknowledges, disagreement about the fundamental nature of things is compatible with referring to them in common -- as, for example, ancient people and modern astronomers were both able to refer to "stars". Pushing further, Frankish endorses Gareth Evans's view that identification of spatio-temporal particulars requires being able to track them in egocentric space; but that's quite a dubious commitment to take on board, especially in this context.

Frankish suggests -- and also Jay Garfield, in his recent book on Buddhist philosophy -- that the modern philosophical concept of phenomenal consciousness imports dubious notions, such as ineffability or infallible knowability or immateriality, which make it fail as a concept. But I don't see how a bare demonstrative account of the sort I've given, in terms of positive and negative examples, involves any such dubious philosophical commitments. Of course, one could define "qualia" or "phenomenal properties" in a way that commits to such matters, and sometimes philosophers do so, but my own aim is to avoid such commitments, keeping open as much as possible, while still pointing at the obvious thing that all conscious experiences have in common that makes us want to classify them together as "conscious experiences".

There are, I think, two main issues that I do not keep open. One is that there is some obvious common feature that most or all of the intended positive examples (the various sensory experiences, imagery experiences, and emotional experiences) share and which the negative examples lack, which we can take to be the target of the phrase "phenomenal consciousness". The other is that there are facts about what these experiences are, which in our introspective reports we are trying to get at. These assumptions are not entirely philosophically innocent; but I hope they are plausible enough. I can't make do with fewer.

There is one further thing that I will commit to here, of some philosophical significance. It's kind of the flip side of openness. If I have succeeded in conceiving of phenomenal consciousness with a high degree of metaphysical innocence, then there ought to be nothing built into the notion of phenomenal consciousness that rules out various wild metaphysical views such as idealism (the view that only mental things exist, and not anything material) or even radical solipsism (the view that the only thing that exists in the universe is my own mind). And indeed, I do think that there's nothing incoherent in the supposition that there might only be phenomenal consciousness and nothing else. Some philosophers -- maybe Frankish and Garfield among them? -- would like to rule out idealism from square one, would like to say that the very idea of mentality already presupposes the idea of something beyond the mental, so that idealism, and maybe also less radical views like dualism, are conceptually incoherent. If my minimalist conception of phenomenal consciousness is correct, however, there is no such easy path to the rejection of those metaphysical positions.

[image source]

Friday, February 12, 2016

New Essay in Draft: Women in Philosophy: Quantitative Analyses of Specialization, Prevalence, Visibility, and Generational Change

co-authored with Carolyn Dicey Jennings.

This article brings together lots of data that Carolyn and I have been gathering and posting about over the past several years, here and on New APPS. Considered jointly, these data tell a very interesting story about the continuing gender disparity in the discipline.

Here's the abstract:

We present several quantitative analyses of the prevalence and visibility of women in moral, political, and social philosophy, compared to other areas of philosophy, and how the situation has changed over time. Measures include faculty lists from the Philosophical Gourmet Report, PhD job placement data from the Academic Placement Data and Analysis project, the National Science Foundation’s Survey of Earned Doctorates, conference programs of the American Philosophical Association, authorship in elite philosophy journals, citation in the Stanford Encyclopedia of Philosophy, and extended discussion in abstracts from the Philosopher’s Index. Our data strongly support three conclusions: (1) Gender disparity remains large in mainstream Anglophone philosophy; (2) ethics, construed broadly to include social and political philosophy, is closer to gender parity than are other fields in philosophy; and (3) women’s involvement in philosophy has increased since the 1970s. However, by most measures, women’s involvement and visibility in mainstream Anglophone philosophy has increased only slowly; and by some measures there has been virtually no gain since the 1990s. We find mixed evidence on the question of whether gender disparity is even more pronounced at the highest level of visibility or prestige than at more moderate levels of visibility or prestige.

Full paper here.

As always, comments, corrections, and objections welcome, either on this post or to my email address.

Thursday, February 11, 2016

Pragmatic Metaphysics

I'm working again on the nature of belief. Increasingly, I find myself drawn to be explicit about my pragmatist approach to the metaphysics of attitudes.

Sometimes the world divides into neat types -- neat enough that you can more or less just point your science at it and straightforwardly sort the As from the Bs. Sometimes instead the world is fuzzy-bordered, full of intermediate cases and cases where plausible criteria conflict. When the world is the latter way, we face antecedently unclear cases. Antecedently unclear cases are, or can be, decision points. Do you want to classify this thing as an A or a B? Would there be some advantage in thinking of the category of "A" so that it sweeps in that case? Or is it better to think of "A" in a way that excludes that case or leaves it intermediate? Such decisions can reflect, often do at least implicitly reflect, our interests and values. Such decisions can also shape, often do at least implicitly shape, future outcomes and values, influencing both how we think about that particular type of case and how we think about As in general.

Pragmatic metaphysics is metaphysics done with these thoughts explicitly in mind. For instance: There are lots of ways of thinking about what a person is. Usually, the cases are antecedently clear: You are a person, I am a person, this coffee mug is not a person. But some interesting cases are intermediate or break in different directions depending on what criteria are emphasized: a fetus, a human without much cortex, a hypothetical conscious robot, a hypothetical enhanced chimpanzee. There is no settled fact about what exactly the boundaries of personhood are. We can choose to think of personhood in a way that includes or excludes such cases or leaves them intermediate -- and in doing so we both express and buttress certain values, for example, about what sorts of being deserve the highest level of moral consideration.

The human mind is a complex and fuzzy-bordered thing, right at the center of our values. Because it is complex and fuzzy-bordered, there will be lots of antecedently unclear cases. Because it is central to our values, how we classify such cases matters. Does being happy require feeling happy? Is compassion that doesn't privilege its object as irreplaceably special still love? Our classification decisions here aren't compelled by the phenomena. Instead, we can decide. What range of phenomena deserve such important labels as "happiness" and "love"? We might think of metaphysical battles over the definitions of those terms as political battles between philosophers with different visions and priorities, for control of our common disciplinary language.

At the center of my interest in belief is a set of antecedently unclear cases in which one intellectually assents to a proposition (e.g., "death is not bad", "women are just as intelligent as men") but fails to act and react generally as though that proposition is true (e.g., quakes with fear on the battlefield, treats most women as stupid). The pragmatic metaphysical question is: How should we classify such cases? What values are expressed in saying, about such cases, that we really do or really do not believe what we say we believe? What vision of the world manifests in these different ways of speaking, what projects are supported, what phenomena are rendered more and less visible?


Related post:

Against Intellectualism About Belief (July 31, 2015)

Wednesday, February 10, 2016

[Updated] APA Membership Goes from 2.4% to 5.4% Self-Reported Black or African-American in a Single Year?

Update, Feb. 11: After I posted the below, Amy Ferrer at the APA looked into it and discovered that it was a spreadsheet error. The corrected data are here. In the corrected data, the 2014 percentage is 2.4% and 2015 is 2.6%, well within chance variation.


I'm looking at this table of demographic statistics from the American Philosophical Association, comparing the number of APA survey respondents self-describing as "Black/African-American", among regular APA members (excluding emeritus, K-12, colleague, international, and student members). In 2014, I see 56 out of 2730 respondents (2.1%; 2.4% if we exclude those in the "prefer not to answer" category) in the "Black/African-American" category. In 2015, it's 146 out of 2874 (5.1%; 5.4% excluding "prefer not to answer").
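The raw percentages above can be checked directly from the counts quoted in this post (the `pct` helper is just mine for illustration; the "excluding prefer not to answer" figures of 2.4% and 5.4% would additionally require the non-response counts from the APA table, which I haven't reproduced here):

```python
def pct(count: int, total: int) -> float:
    """Percentage of respondents in a category, rounded to one decimal."""
    return round(100 * count / total, 1)

# Counts quoted above, from the APA demographic table
print(pct(56, 2730))   # 2014: 2.1
print(pct(146, 2874))  # 2015: 5.1
```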

It's not possible that the percentage of Black philosophers in the U.S. doubled in a single year. Since only about half of the APA membership responded to the survey, it could be a non-response effect (i.e., Black philosophers much more likely to respond in 2015 than in 2014), but if so it's an amazingly huge one. Another possibility is a change in the format of the question or in the willingness of members to describe themselves as belonging to this racial category -- but if so, again it's quite large for an effect of that sort in such a short time frame.


Thursday, February 04, 2016

Cheerfully Suicidal A.I. Slaves

Suppose that we someday create genuinely conscious Artificial Intelligence, with all the intellectual and emotional capacity of human beings. Never mind how we do it -- possibly via computers, possibly via biotech ("uplifted" animals).

Here are two things we humans might want, which appear to conflict:

(1.) We might want them to subordinately serve us and die for us.

(2.) We might want to treat them ethically, as beings who deserve rights equal to the rights of natural human beings.

A possible fix suggests itself: Design the A.I.s so that they want to serve us and die for us. In other words, make a race of cheerfully suicidal A.I. slaves. This was Asimov's solution with the Three Laws of Robotics -- a solution that slowly falls apart across the arc of his robot stories (finally collapsing in "The Bicentennial Man").

What to make of this idea?

Douglas Adams parodies the cheerily suicidal A.I. with an animal uplift case in The Restaurant at the End of the Universe:

A large dairy animal approached Zaphod Beeblebrox's table, a large fat meaty quadruped of the bovine type with large watery eyes, small horns and what might almost have been an ingratiating smile on its lips.

"Good evening," it lowed and sat back heavily on its haunches. "I am the main Dish of the Day. May I interest you in parts of my body?" It harrumphed and gurgled a bit, wriggled its hind quarters into a comfortable position and gazed peacefully at them.

Zaphod's naive Earthling companion, Arthur Dent, is predictably shocked and disgusted, and when he suggests a green salad instead, the suggestion is brushed off. Zaphod and the animal argue that it's better to eat an animal that wants to be eaten, and can say so clearly and explicitly, than one that does not want to be eaten. Zaphod orders four rare steaks.

"A very wise choice, sir, if I may say so. Very good," it said. "I'll just nip off and shoot myself."

He turned and gave a friendly wink to Arthur.

"Don't worry, sir," he said. "I'll be very humane."

Adams, I think, nails the peculiarity of the idea. There's something ethically jarring about creating an entity with human-like intelligence and emotion, which will completely subject its own interests to ours, even to the point of suiciding at our whim. This appears to be so even if the being wants to be subjected in that way.

The three major classes of ethical theory -- consequentialism, deontology, and virtue ethics -- can each be read in a way that delivers this result. The consequentialist can object that the good of a small pleasure for a human does not outweigh the potential of the lifetime of pleasure for an uplifted steer, even if the steer doesn't appreciate that fact. The Kantian deontologist can object that the steer is treating itself as a "mere means" rather than as an agent whose life should not be sacrificed by itself or others to achieve others' goals. The Aristotelian virtue ethicist can say that the steer is cutting its life short rather than flourishing into its full potential of creativity, joy, friendship, and thought.

If we can use Adams' steer as an anchoring point of moral absurdity at one end of the ethical continuum, the question then arises to what extent such reasoning transfers to less obvious intermediate cases, such as Asimov's robots who don't sacrifice themselves as foodstuffs (though presumably they would do so if commanded to, by the Second Law) but who do, in the stories, appear perfectly willing to sacrifice themselves to save human lives.

When a human sacrifices her life to save someone else's, it can be, at least sometimes, a morally beautiful thing. But a robot designed that way from the start, to always subordinate its interests to those of humans -- I'm inclined to think that ought to be ruled out, in the general case, by any reasonable egalitarian principle that treats A.I.s as deserving equal moral status with humans if they have broadly human-like cognitive and emotional capacities. Such a principle would be a natural extension of the types of consequentialist, deontologist, and virtue ethicist reasoning that rules out Adams' steer.

Thus, I think we can't use the "cheerfully suicidal slave" fix to escape the dilemma posed by (1) and (2) above. If we somehow create genuinely conscious, general-intelligence A.I. with a range of emotional capacities like our own, then we must create it morally equal, not subordinate.

[image source]


Related article: A Defense of the Rights of Artificial Intelligences (Schwitzgebel & Garza, Midwest Studies in Philosophy 2015).