... on bloggingheads.tv. Two of my favorite scholars!
Josh says that ordinary reasoning about mental states is unlike scientific reasoning because our reasoning about mental states is influenced by our moral judgments (as his work suggests) while scientific reasoning is not so influenced. Alison, in contrast, is a leading proponent of the view that scientific reasoning and ordinary reasoning have much in common, especially in children. Josh plays the role of interviewer and lets Alison do most of the talking.
Near the end, Alison touches briefly on what I think is the key flaw in Josh's argument: the unwarranted assumption that scientific reasoning is not much influenced by moral judgments. In my view -- and I think this is now the majority view in philosophy of science -- scientific thinking is, and should be, thoroughly permeated with emotion and morality. The old model of the impartial, objective scientific observer cannot be sustained. So there's no reason Josh's findings about the effects of moral judgments on ordinary reasoning have to stand in conflict with Alison's view of the continuity of scientific and everyday reasoning.
Hi Eric,
The idea that the "problem of other minds" is in part an ethical problem or has a profound ethical dimension -- and is not purely or even centrally an epistemological problem -- has been central to Stanley Cavell's work on the later Wittgenstein for several decades. It is also central to Stephen Mulhall's Cavellian interpretation of Wittgenstein (see his influential essay on the problem of other minds -- and the Cavellian distinction between knowing and acknowledging -- in the film Blade Runner). Given that Cavell was also probably the first philosopher seriously to confront the role of appeals to linguistic intuitions about "what we would/should say" in analytic philosophy (by what right does a Wittgenstein or an Austin speak on my or our behalf?), it has always mystified me a bit that experimental philosophers have passed over Cavell's contributions in silence. (At least as far as I know.)

I might also mention that, outside of academic philosophy, Sherry Turkle in STS at MIT has long been interested in the moral dimension of the psychological interpretation of computational artifacts. Work that we did together at the MIT Initiative on Technology and Self (2001-2004) -- which involved bringing people of different ages into sustained and repeated contact with Kismet and Cog at the MIT AI Lab and then interviewing them about their encounters -- suggested that a wide variety of ethical issues were bound up with human responses to robots and other artifacts designed to invite psychological interpretation and/or to engage with human beings in a "social" manner.

It also suggested (or suggests to me now) that there's work to be done in x-phi on the central idea of "intuition." Since we conducted interviews with people over multiple, extended encounters with robots and robotic pets, we were able to observe how their intuitions changed (sometimes slowly, sometimes rapidly). Age was obviously an important factor. Anyway, when we talk about the intuitions people have about robot minds and/or the role moral factors play in the formation of those intuitions, which of their intuitions are relevant? Their initial gut reactions when confronted by someone holding a clipboard questionnaire on the street? Their reactions after a little actual interaction with robots? Or their stable intuitions after multiple, extended encounters? Or all of them?
Sorry if I managed to change the subject somewhat.
-Robert Briscoe
Thanks for the comment, Robert! The couple of times I've tried to look at Cavell, I've found it a morass, and you're right that there isn't much discussion of him in contemporary x-phi / phil psych circles. What would you recommend as a good, brief starting point?
That's a nice thought at the end about the change of intuitions over time. Indeed, it seems inevitable that once we've interacted with them enough, we'd treat robots like Data or C3P0 as conscious and loci of moral concern, regardless of any theories beforehand. (But whether this shows anything about whether they are *really* conscious is another question....)
Hi Eric,
As a brief starting point, there's the first half of Mulhall's essay on Blade Runner, with which you're probably familiar. A less brief starting point would be Cavell's The Claim of Reason. Flipping through: pages 18-20 address linguistic intuitions and argue that appeals to linguistic intuition have a kind of moral dimension -- they are "claims to community." Pages 145-154 argue that appeals to linguistic intuition are invitations to "projective imagination." Much of part four of the book argues for a connection between "the problem of other minds," philosophical skepticism, and issues of intersubjective moral acknowledgment. Part four, unfortunately, is a bit of a morass. But it's a morass full of good ideas. There's also a lot of stuff on these topics in Must We Mean What We Say?
I agree with you that intuitions about consciousness are unreliable guides as to whether a creature or robot is conscious. But they're a good place to begin.
Following up on some points Robert raises...
I have zero familiarity with Cavell, but I am reminded here of a remark of John Locke's that "person...is a forensic term, appropriating actions and their merit; and so belongs only to intelligent agents capable of a law, and happiness and misery."
It seems insufficiently appreciated in contemporary philosophy of mind that there's something morally abhorrent about entertaining the (dualist) possibility that your neighbor is a zombie or the (panpsychist) possibility that electrons have feelings.
I think Pete hit the mark.
Our linguistic intuitions have a moral component, and it would be a quirk of our moral sense if we ascribed intuitions (with a moral component) to "things" with no moral reciprocation.
But I also think that if we go further in that way of thinking, it would be better for our environmental causes, our interaction with technology... to treat them as if they were moral counterparts.
Pete's right to mention Locke. My friend Aaron Garrett at BU always emphasizes the extent to which we tend to project our contemporary concerns in epistemology and philosophy of mind (and sometimes philosophy of language) on people in the 17th and 18th centuries who were first and foremost moral, theological, and political philosophers.
The idea that there's something morally abhorrent about the zombie hypothesis comes out nicely in Mulhall's anti-dualist discussion of the replicants in Blade Runner (which borrows, if I remember correctly, from Cavell's discussion of slavery in CoR).
When you suggest the zombie hypothesis is morally abhorrent, which of the following do you mean?
(1.) entertaining it
(2.) entertaining it seriously
(3.) feeling that the evidence goes against it but doesn't definitively prove it to be false
(4.) feeling that the evidence leaves a toss-up
(5.) believing it
Also: Does it matter what the state of the evidence actually is, or is it abhorrent to be uncertain about the matter regardless of what evidence is out there one way or the other?
Nice questions, Eric.
I'd pick 2-5 and perhaps even 1, though I don't have a clear view on what the attitude of entertaining P amounts to.
I'm not seeing a whole lot of *relevant* difference between 1-5. Compare how they stack up on a much clearer case of a morally abhorrent P: Black people aren't really alive and so can be neither tortured nor murdered. Now, which of 1 through 5 are abhorrent?
As for your disjunctive question, I'm not sure which disjunct suits me best. As I view matters, there are certain constitutive relations between mental/moral concepts and evidence, something along the lines of the concepts not having evidence-transcendent application conditions. I'd want to claim, for example, that whether the concept of being due moral consideration is correctly applied to some entity can't outstrip the available evidence.
Somehow, general zombie-contemplation doesn't seem as abhorrent as the racial case you describe. Maybe because the motivations and psychology behind it are presumably so different.
Your racial case brings out really vividly how some skeptical possibilities might be morally abhorrent to seriously entertain (whatever that means!) -- which raises the question of how far that generalizes. That the world was created 5 minutes ago? That I'm a brain in a vat? That the future will be radically unlike the past? In a certain mood, I could see the contemplation of all of these as morally abhorrent, though I think my general inclination is to be fairly morally permissive about what we contemplate.