Monday, September 28, 2015

Microaggression and the Culture of Solidarity

A guest post by Regina Rini

If you are on a college campus or read anxious thinkpieces, you’ve probably heard about ‘microaggression’. A microaggression is a relatively minor (hence ‘micro’) insult to a member of a marginalized group, perceived as damaging to that person’s full standing as a social equal. Examples include acting especially suspicious toward people of color or saying to a Jewish student, "Since Hitler is dead, you don’t have to worry about being killed by him any more." A microaggression is not necessarily a deliberate insult, and any one instance might be an honest mistake. But over time a pattern of microaggression can cause macro harm, by continuously reminding members of marginalized groups of their precarious social position.

A recent paper by sociologists Bradley Campbell and Jason Manning claims that talk of microaggression signals the appearance of a new moral culture: a ‘culture of victimhood’. In the paper Campbell and Manning present a potted history of western morality. First there was a ‘culture of honor’, which prized physical bravery and took insults to demand an aggressive reply. Picture two medieval knights glowering at one another, swords drawn. Then, as legal institutions grew stronger, the culture of honor was displaced by a ‘culture of dignity’, in which individuals let minor insults slide, and reported more serious offenses to impartial authorities. Picture a 1950s businessman calmly telling the constable about a neighbor peeking in windows. Finally, there is now an emerging ‘culture of victimhood’, in which an individual publicly calls attention to having been insulted, in hopes of rallying support from others and inducing the authorities to act. Picture a queer Latina student tweeting about comments from her professor that she perceives as homophobic and racist.

There is a serious problem with Campbell and Manning’s moral history, and exposing this problem helps us to see that the ‘culture of victimhood’ label is misleading. The history they provide is a history of the dominant moral culture: it describes the mores of those social groups with greatest access to power. Think about the culture of honor, and notice how limited it must have been. If you were a woman in medieval Europe, you were not expected or permitted to respond to insults with aggression. Even if you were a man, but of low social class, you certainly would not draw your sword in response to insult from a social superior. The ‘culture of honor’ governed relations among a small part of society: white men of equally high social status.

Now think about the culture of dignity, which Campbell and Manning claim “existed perhaps in its purest form among respectable people in the homogenous town of mid-twentieth century America.” Another thing that existed among the ‘respectable people’ in those towns was approval of racial segregation; ‘homogenous towns’ did not arise by accident. People of color, women, queer people, immigrants – none could rely upon the authorities to respond fairly to reports of mistreatment by the dominant group. The culture of dignity embraced more people than had the culture of honor, but it certainly did not protect everyone.

The cultures of honor and dignity left many types of people formally powerless, with no recognized way of responding to moral mistreatment. But those people did not stay quiet. What they did instead was whisper to one another and call one another to witness. They offered mutual recognition amid injustices they could not overcome. And sometimes, when the circumstances were right, they made sure that their mistreatment would be seen by everyone, even by the powerful. They sat in at lunch counters that refused to serve them. They went on hunger strike to demand the right to vote. They rose up and were beaten down at Stonewall when the police, agents of dignity, moved in.

The new so-called ‘culture of victimhood’ is not new, and it is not about victimhood. It is a culture of solidarity, and it has always been with us, an underground moral culture of the disempowered. In the culture of solidarity, individuals who cannot enforce their honor or dignity instead make claim on recognition of their simple humanity. They publicize mistreatment not because they enjoy the status of victim, but because they need the support of others to stand strong, and because ultimately public discomfort is the only route to redress possible. What is sought by a peaceful activist who allows herself to be beaten by a police officer in front of a television camera, other than our recognition? What is nonviolent civil disobedience, other than an expression of the culture of solidarity?

If the culture of solidarity is ancient, then what explains the very current fretting over its manifestation? One answer must be social media. Until very recently, marginalized people were reliant on word of mouth or the rare sympathetic journalist to document their suffering. Yet each microaggression is a single small act that might be brushed aside in isolation; its oppressive power is only visible in aggregate. No journalist could document all of the little pieces that add up to an oppressive whole. But Facebook and Twitter allow documentation to be crowdsourced. They have suddenly and decisively amplified the age-old tools of the culture of solidarity.

This is a development that we should welcome, not fear. It is good that disempowered people have new means of registering how they are mistreated, even when mistreatment is measured in micro-units. Some of the worries raised about ‘microaggression’ are misplaced. Campbell and Manning return repeatedly to false reporting of incidents that did not actually happen. Of course it is bad when people lie about mistreatment – but this is nothing special about the culture of solidarity. People have always abused the court of moral opinion, however it operated. An honor-focused feudal warlord could fabricate an insult to justify annexing his brother’s territory. A 1950s dignitarian might file a false police report to get revenge on a rival.

There are some more serious worries about the recent emergence of the culture of solidarity. Greg Lukianoff and Jonathan Haidt suggest that talk of microaggression is corrosive of public discourse; it encourages accusations and counter-accusations of bad faith, rather than critical thinking. This is a reasonable thing to worry about, but their solution, that “students should also be taught how to live in a world full of potential offenses”, is not reasonable. The world is not static: what is taught to students now will help create the culture of the future. For instance, it is not an accident that popular support for marriage equality was achieved about 15 years after gay-straight alliances became commonplace in American high schools and colleges. Teaching students that they must quietly accept racist and sexist abuse, even in micro units, is simply a recipe for allowing racist and sexist abuse to continue. A much more thoughtful solution, one that acknowledges the ongoing reality of oppression as more than an excuse for over-sensitive fussing, will be required if we are to integrate recognition of microaggression into productive public discourse.

There is also a genuine question about the moral blameworthiness of microaggressors. Some microaggressions are genuine accidents, with no ill intent on the part of the one who errs. Others are more complex psychological happenings, as with implicit bias. Still others are acts of full-blooded bigotry, hiding behind claims of misunderstanding. The problem is that outsiders often cannot tell which is which – nor, in many cases, can victims. And people rarely receive accusations of micro-racism or micro-sexism without becoming defensive; it is painful to be accused of hurting others. We need a better way of understanding what sort of responsibility people have for their small, ambiguous contributions to oppression. And we need better ways of calling out mistakes, and of responding to being called out. These are all live problems for ethicists and public policy experts. Nothing is accomplished by ignoring the phenomenon or demanding its dismissal from polite conversation.

The culture of solidarity has always been with us – with some of us longer than others. It is a valuable form of moral community, and its recent amplification through social media is something we should welcome. The phenomena it brings to light – microaggression among them – are real problems, bringing with them all the difficulties of finding real solutions. But if we want our future moral culture to be just and equal, not merely quietly dignified, then we will have to struggle for those solutions.

Thanks to Kate Manne, Meena Krishnamurthy, and others for helping me think through the ideas of this post. Of course, they do not necessarily endorse everything I say.

image credit: ‘Hands in Solidarity, Hands of Freedom’ mural, Chicago IL. Photo by Terence Faircloth

Friday, September 25, 2015

Some Video Interviews of Me

... on topics related to consciousness and belief, about ten minutes each, here.

This interview is a decent intro to my main ideas about group consciousness. (Full paper: "If Materialism Is True, the United States Is Probably Conscious".)

This interview is a decent intro to my skepticism about the metaphysics of consciousness. (Full paper: "The Crazyist Metaphysics of Mind".)

Monday, September 21, 2015

A Theory of Hypocrisy

Hypocrisy, let's say, is when someone conspicuously advocates some particular moral rule while also secretly, or at least much less conspicuously, violating that moral rule (and doing so at least as much as does the average member of her audience).

It's hard to know exactly how common hypocrisy is, because people tend to hide their embarrassing behavior and because the psychology of moral advocacy is itself a complex and understudied issue. But it seems likely that hypocrisy is more common than a purely strategic analysis of its advantages would predict. I think of the "family values" and anti-homosexuality politicians and preachers who seem disproportionately likely to be caught in gay affairs, of the angry, judgmental people I know who emphasize how important it is to peacefully control one's emotions, of police officers who break the laws they enforce on others, of Al Gore's (formerly?) environmentally-unfriendly personal habits, and of the staff member here at UCR who was in charge of prosecuting academic misconduct and who was later dismissed for having grossly falsified his resume.

Now, anti-homosexuality preachers might or might not be more likely than their parishioners to have homosexual affairs, etc. But it's striking to me that the rates even come close, as it seems to me they do. A purely strategic analysis of hypocrisy suggests that, in general, people who conspicuously condemn X should have low rates of X, since the costs of advocating one thing and doing another are typically high. Among those costs: creating a climate in which X-ish behavior, which you engage in, is generally more condemned; attracting friends and allies who are especially likely to condemn the types of behavior you secretly engage in; attracting extra scrutiny of whether you in fact do X or not; and attracting the charge of hypocrisy, in addition to the charge of X-ing itself, if your X-ing is discovered, substantially reducing the chance that you will be forgiven. It seems strategically foolish for a preacher with a secret homosexual lover to choose anti-homosexuality to be a central platform of his preaching!

Here's what I suspect is going on.

People do not aim to be saints, nor even to be much morally better than their neighbors. They aim instead for moral mediocrity. If I see a bunch of people profiting from doing something that I regard as morally wrong, I want to do that thing too. No fair that (say) 15% of people cheat on the test and get A's, or regularly get away with underreporting their self-employment income. I want to benefit too, if they do! This reasoning is tempting even if the cheaters are a minority and honest people are the majority.

Now consider the preacher tempted by homosexuality or the environmentalist who wants to eat steaks in her large air-conditioned house. They might be entirely sincere in their moral opinions. Hypocrisy needn't involve insincere commitment to the moral ideas one espouses (though of course it can be insincere). Still, they see so many others getting away with what they condemn that they (not aiming to be a lot better than their neighbors) might well feel licensed to indulge themselves a bit too.

Furthermore, if they are especially interested in the issue, violations of those norms might be more salient and visible to them than to the average person. The person who works in the IRS office sees how common and how easy it is to cheat on one's taxes. The anti-homosexual preacher sees himself in a world full of gays. The environmentalist grumpily notices all the giant SUVs rolling down the road. Due to an increased salience of violations of the norms they most care about, people might tend to overestimate the frequency of such violations -- and then when they calibrate toward mediocrity, their scale might be skewed toward estimating high rates of violation. This combination of increased salience of unpunished violations plus calibration toward mediocrity might partly explain why hypocritical norm violations are more common than a purely strategic account might suggest.

But I don't think that's enough by itself to explain the phenomenon, since one might still expect people to tend to avoid conspicuous moral advocacy on issues where they know they are average-to-weak; and even if their calibration scale is skewed a bit high, they might hope to pitch their own behavior especially toward the good side on that particular issue -- maybe compensating by allowing themselves more laxity on other issues.

So here's the final piece of the puzzle:

Suppose that there's a norm that you find yourself especially tempted to violate, though you succeed for a while, at substantial personal cost, in not violating it. You love cheeseburgers but go vegetarian; you have intense homosexual desires but avoid acting on them. Envy might lead you to be especially condemnatory of other people who still do such things. If you've worked so hard, they should too! It's an issue you've struggled with personally, so now you have wisdom about it, you think. You want to try to make sure that others don't get away with that sin you've worked so hard to avoid. Moreover, optimistic self-illusions might lead you to overestimate the likelihood that you will stay strong and not lapse. These envious, self-confident moments are the moments when you are most likely to conspicuously condemn those behaviors to which you are tempted. But once you're on the hook, if you've been sufficiently conspicuous in your condemnations, it becomes hard to change your tune, even after you have lapsed.

[image source; more on Rekers]

Thursday, September 17, 2015

Philosophical Conversations

a guest post by Regina Rini

You’re at a cocktail reception and find yourself talking to a stranger. She mentions a story she heard today on NPR, something about whether humans are naturally good or evil. Something like that. So far she’s just described the story; she hasn’t indicated her own view. There are a few ways you might respond. You might say, "Oh, that’s interesting. I wonder why this question is so important to people." Or you might say, "Here’s my view on the topic… What do you think?" Or maybe you could say "Here's the correct view on the topic… Anyone who thinks otherwise is confused."

It’s obvious that the last response is boorish cocktail party behavior. Saying that seems to be aimed at foreclosing any possible conversation. You’re claiming to have the definitive, correct view, and if you’re right then there’s no point in discussing it further. If this is how you act, you shouldn’t be surprised when the stranger appears disconcerted and politely avoids talking to you anymore. So why is it that most philosophy books and papers are written in exactly this way?

If we think about works of philosophy as contributing to a conversation, we can divide them up like this. There are conversation-starters: works that present a newish topic or question, perhaps with a suggestive limning of the possible answers, but without trying to come to a firm conclusion. There are conversation-extenders: works that react to an existing topic by explaining the author’s view, without claiming that it is the only possibly correct view, and that clearly invite response from those who disagree. And there are conversation-enders: works that try to resolve or settle an existing debate, by showing that one view is the correct view, or at least that an existing view is definitively wrong and must be abandoned.

Contemporary analytic philosophy seems to think that conversation-enders are the best type of work. Conversation-starters do get some attention, but usually trying to raise a new topic leads to dismissal by editors and referees. "This isn’t sufficiently rigorous", they will say. Or: "What’s the upshot? Which famous –ism does this support or destroy? It isn’t clear what the author is trying to accomplish." Opening a conversation, with no particular declared outcome, is generally regarded as something a dilettante might do, not what a professional philosopher does.

Conversation-extenders also have very little place in contemporary philosophy. If you merely describe your view, but don’t try to show that it is the only correct view, you will be asked "where is your argument?" Editors and referees expect to see muscularity and blood. A good paper is one that has "argumentative force". It shows that other views "fail" - that they are "inadequate", or "implausible", or are "fatally flawed". A good paper, by this standard, is not content to sit companionably alongside opposed views. It must aim to end the conversation: if its aspirations are fully met, there will be no need to say anything more about the topic.

You might object here. You might say: the language of philosophy papers is brutal, yes, but this is misleading. Philosophers don’t really try to end conversations. They know their opponents will keep on holding their "untenable" views, that there will soon be a response paper in which the opponent says again the thing that they’ve just been shown they "cannot coherently say". Conversation-enders are really conversation-extenders in grandiose disguise. Boxers aren’t really trying to kill their opponents, and philosophers aren’t really trying to kill conversations.

But I think this objection misses something. It’s not just the surface language of philosophy that suggests a conversation-ending goal. That language is driven by an underlying conception of what philosophy is. Many contemporary analytic philosophers aspire to place philosophy among the ‘normal sciences’. Philosophy, on this view, aims at revealing the Truth – the objective and eternal Truth about Reality, Knowledge, Beauty, and Justice. There can be only one such Truth, so the aim of philosophy really must be to end conversations. If philosophical inquiry ever achieves what it aims at, then there is the Truth, and why bother saying any more?

For my part, I don’t know if there is an objective and eternal Truth about Reality, Knowledge, Beauty, and Justice. But if there is, I doubt we have much chance of finding it. We are locally clever primates, very good at thinking about some things and terrible at thinking about others. The expectation that we might uncover objective Truth strikes me as hubristic. And the ‘normal science’ conception of philosophy leads to written work that is plodding, narrow, and uncompanionably barbed. Because philosophy aims to end conversations, and because that is hard to do, most philosophy papers take on only tiny questions. They spin epicycles in long-established arguments; they smash familiar –isms together in hopes that one will display publishably novel cracks. If philosophy is a normal science, this makes perfect sense: to end the big conversation, many tiny sub-conversations must be ended first.

There is another model for philosophical inquiry, one which accepts the elusiveness of objective Truth. Philosophy might instead aim at interpretation and meaningfulness. We might aspire not to know the Truth with certainty, but instead to know ourselves and others a little bit better. Our views on Reality, Knowledge, Beauty, and Justice are the modes of our own self-understanding, and the means by which we make our selves understood to others. They have a purpose, but it is not to bring conversation to a halt. In fact, on this model, the ideal philosophical form is the conversation-opener: the work that shows the possibility of a new way of thinking, that casts fresh light down unfamiliar corridors. Conversation-openers are most valuable precisely because they don’t assume an end will ever be reached. But conversation-extenders are good too. What do you think?

The spirit of this post owes a lot to Robert Nozick, especially the introduction to his book Philosophical Explanations. Thanks to Eden Lin and Tim Waligore for helping me track down Nozick’s thoughts, and to several Facebook philosophers for conversing about these ideas.

image credit: Not getting Involved by Tarik Browne

Monday, September 14, 2015

Chinese Philosophy & Intellectualism about Belief

Two separate announcements, not one. Though now that I think about it, joining the topics might make for an interesting future post....

Last weekend the LA Times published my piece "What's Missing in College Philosophy Classes? Chinese Philosophers". (This is a revision of a Splintered Mind post from about a year ago.)

And The Minds Online Conference at Brains Blog is now in its third week. This week's topic: Belief and Reasoning. I haven't yet had a chance to read the other papers, but Jack Marley-Payne's "Against Intellectualist Theories of Belief" is nicely done, as I say in my own commentary on the paper.

Friday, September 11, 2015

Ethics, Metaethics, and the Future of Morality

a guest post by Regina Rini

Moral attitudes change over generations. A century ago in America, temperance was a moral crusade, but you’d be hard-pressed to find any mark of it now among the Labor Day beer coolers. Majority views on interracial and same-sex relationships have swung from one pole to the other within the lifetimes of many people reading this. So given what we know of the past, we can say this about the people of the future: their moral attitudes will not be the same as ours. Some ethical efforts that now belong to a minority – vegetarianism, perhaps – will become as ubiquitously upheld as tolerance for interracial partnerships. Other moral matters that seem urgent now will fade from thought just as surely as the temperance movement. We can’t know which attitudes will change and how, but we do know that moral change will happen. Should we try to stop it – or even to control it?

Every generation exercises some control over the moral attitudes of its children, through the natural indoctrination of parenting and the socialization of school. But emerging technologies now give us unprecedented scope to tailor what the future will care about. From social psychology and behavioral economics we increasingly grasp how to design institutions so that the ‘easy’ or ‘default’ choices people tend to adopt coincide with the ones that are socially valuable. And as gene-editing eventually becomes a normal part of reproduction, we will be able to influence the moral attitudes of generations far beyond our own children. Of course, it is not as simple as ‘programming’ a particular moral belief; genetics does not work that way. But we might genetically tinker with brain receptivity for neurotransmitters that affect a person’s readiness to trust, or her preference for members of her own ethnic group. We won’t get to decide precisely how our descendants come to find their moral balance – but we could certainly put our thumb on the scale.

On one way of looking at it, it’s obvious that if we can do this, we should. We are talking about morality here, the stuff made out of ‘should’s. If we can make future generations more caring, more disposed to virtue, more respectful of rational agency, more attentive to achieving the best outcomes – however morality works, we should help future people to do what they should do. That is what ‘should’ means. This thought is especially compelling when we realize that some of our moral goals, like ending racism or addressing the injustices climate change will bring, are necessarily intergenerational projects. The people of the future are the ones who will have to complete the moral journey we have begun. Why not give them a head start?

But there are also reasons to think we should not interfere with whatever course the future of morality might take. For one thing, we ought to be extremely confident that our moral attitudes are the right ones before we risk crimping the possibilities for radical moral change. Perhaps we are that confident about some issues, but it would be unreasonable to be so sure across the board. Think, for example, about moral attitudes toward the idea of ownership of digital media: whether sampling, remixing, and curating count as forms of intellectual theft. Already there appear to be generational splits on this topic, driven by technology that emerged only in the last 30 years. Would you feel confident trying to preordain moral attitudes about forms of media that won’t be invented for a century?

More insidiously, there is the possibility that our existing moral attitudes already reflect the influence of problematic political ideologies and economic forces. If we decide to impose a shape on the moral attitudes of the future, then the technology that facilitates this patterning will likely be in the hands of those who benefit from existing power structures. We may end up creating generations of people less willing to question harmful or oppressive social norms. Finally, we should consider whether any attempt to direct the development of morality is disrespectful to the free agency of future people. They will see our thumbprint on their moral scale, and they may rightly resent our influence even as they cannot escape it.

One thing these reflections bring out is that philosophers are mistaken when they attempt to cleanly separate metaethical questions (what morality is) from normative ethical questions (what we should do). If this clean separation was ever tenable, our technologically expanded control over the future makes it implausible now. We face imminent questions about what we should do – which policies and technologies to employ, to which ends – that depend upon our answers to questions about what morality is. Are there objective moral facts? Can we know them? What is the relationship between morality and human freedom? The idea that metaethical inquiry is for dusty scholars, disconnected from our ordinary social and political lives, is an idea that fades entirely from view when we look to the moral future.

--------------------------------------

I’ve drawn on several philosophers’ work for some of the arguments above. For arguments in favor of using technology to direct the morality of future generations, see Thomas Douglas, “Moral Enhancement” in the Journal of Practical Ethics; and Ingmar Persson and Julian Savulescu, Unfit for the Future. For arguments against doing so, see Bernard Williams, Ethics and the Limits of Philosophy (chapter 9); and Jürgen Habermas, The Future of Human Nature.

image credit: ‘Hope in a better future’ by Massimo Valiani

Wednesday, September 09, 2015

The Invisible Portion of the Experiment

Maybe you've heard about the huge psychological replication study that was released on August 28th. The headline finding is this: 270 psychologists jointly attempted to replicate the results of 100 recent studies in three top psychology journals. In fewer than 40% of cases were the researchers able to replicate the originally reported effect.

But here's perhaps an even more telling finding: Only 47% of the originally reported effect sizes were within the 95% confidence interval of the replication effect size. In other words, if you just used the replication study as your basis for guessing the real effect size, you would not expect the real effect size to be as large as the effect size originally reported. [Note 1] This reported result inspired me to look at the raw data, to try a related analysis that the original replication study did not appear to report: What percentage of the replications find a significantly lower effect size than the original study? By my calculations: 36/95, or 38%. [See Note 2 for what I did.]

(A rather surprising additional ten studies showed statistically marginal trends toward lower effect sizes, which it is tempting to interpret as a combination of a non-effect with a poorly powered original study or replication. A representative case is one study with an effect size of r = .22 on 96 participants and a replication effect size of r = .02 on 108 participants (one-tailed p value for difference between the r's = .07). Thus, it seems likely that 38% is a conservative estimate of the tendency toward lower effect size in replications.)
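
If you'd like to check this kind of comparison yourself, it takes only a few lines. Here's a minimal Python sketch, my own construction rather than anything from the replication project's pipeline, of the z-to-r comparison described in Note 2 below: a one-tailed Fisher r-to-z test of whether the replication's correlation is significantly lower than the original's. The function name is mine; the only numbers taken from this post are the two r's and N's of the representative case above.

```python
# A minimal sketch (mine, not the original analysis pipeline) of the
# comparison in Note 2: a one-tailed Fisher r-to-z test of whether the
# replication's correlation is significantly lower than the original's.
from math import atanh, sqrt
from scipy.stats import norm

def p_replication_lower(r_orig, n_orig, r_rep, n_rep):
    """One-tailed p that the replication r is lower than the original r."""
    z_diff = atanh(r_orig) - atanh(r_rep)          # Fisher z-transform of each r
    se = sqrt(1 / (n_orig - 3) + 1 / (n_rep - 3))  # SE of the difference
    return norm.sf(z_diff / se)                    # upper-tail probability

# The representative case above: r = .22 (N = 96) vs. r = .02 (N = 108).
print(round(p_replication_lower(0.22, 96, 0.02, 108), 3))  # ~0.076, the marginal p = .07 above
```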

This study-by-study statistical comparison of effect sizes is useful because it helps distinguish the file drawer problem from what we might call the invisible factor problem.

The file drawer problem is this: Researchers are more likely to publish statistically significant findings than findings that show no statistically significant effect. Statistically chance results will sometimes occur, and if mostly these results are published, it might look like there's a real effect when actually there is no real effect.

The invisible factor problem is this: There are a vast number of unreported features of every experiment. Possibly one of those unreported features, invisible to the study's readership, is an important contributor to the reported findings. In infancy research, for example, it's not common to report the experimenter's pre-testing interactions with the infant, if any, but pre-testing interactions might have a big effect. In cognitive research, it's not common to report what time of day participants performed the tasks, but time of day can influence arousal and performance. And so on.

The file drawer problem is normally managed in meta-analysis by assuming a substantial number of unpublished null-result studies (maybe five times as many as the published studies) and then seeing if the result still proves significant in a merged analysis. But this is only an adequate approach if the only risk to be considered is a chance tendency for a non-effect to show up as significant in some studies. If, on the other hand, there are a large number of invisible factors, or moderators, that dependably confound studies, leading to statistically significant positive results other than by chance, standard meta-analytic file-drawer compensations will not suffice. The invisible factors might be large and non-chance, unintentionally sought out and settled upon by well-meaning researchers, perhaps even passed along teacher to student. ("It works best if you do it like this.")
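
To make the standard compensation concrete, here's a toy sketch, with entirely made-up numbers, of a merged analysis in which published z-scores are combined by Stouffer's method together with an assumed file drawer of unpublished null results, five per published study. Notice that the check only models chance: the assumed nulls dilute the combined z, but they can't represent a dependable invisible moderator.

```python
# A toy sketch (made-up numbers) of the file-drawer adjustment described
# above: merge published z-scores with an assumed drawer of unpublished
# null-result studies (z = 0), five per published study, via Stouffer's method.
from math import sqrt
from scipy.stats import norm

def merged_p_with_file_drawer(published_z, drawer_ratio=5):
    k = len(published_z)
    n_total = k + drawer_ratio * k                 # published + assumed unpublished
    stouffer_z = sum(published_z) / sqrt(n_total)  # null studies add 0 to the numerator
    return norm.sf(stouffer_z)                     # one-tailed combined p

# Ten hypothetical published studies, each just barely significant (z = 2.0):
print(merged_p_with_file_drawer([2.0] * 10))  # ~.005: still "significant" after the adjustment
```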

Here's how I think psychological research sometimes goes. You try an experiment one way and it "fails" -- that is, it doesn't produce the hoped-for result. So you try another way and it fails again. So then you try a third way and it succeeds. Maybe to make sure it's not chance, you do it that same way again and it still succeeds, so you publish. But there might be no real underlying effect of the sort you think there is. What you might have done is find the right set of moderating factors (time of day, nonverbal experimenter cues, whatever), to get the pattern of results you want. If those factors are visible -- that is, reported in the published study -- then others can evaluate and critique and try to manipulate them. But if those factors are invisible, then you will have an irreproducible result, but one not due to chance. In a way this is a file-drawer effect, since null results are disproportionately non-reported, but it's one driven by biased search for experimental procedures that "succeed" because of real moderating factors rather than just chance fluctuations.

If failure of replication in psychology is due to publishing results that by mere statistical chance happen to fall below the threshold for statistical significance, then most failed replications will not be statistically significantly different from the originally reported results -- just closer to zero and non-significant. But if the failure of replication in psychology is due to invisible moderating factors unreported in the original experiment, then failed replications with decent statistical power will tend to find significantly different results from the original experiment.
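
That prediction is easy to explore in a toy simulation. The sketch below is entirely my own construction with arbitrary parameters, not anything from the replication project. It draws study-level correlations using the Fisher-z approximation under the two scenarios: a chance-only file drawer, where the true effect is zero and an original gets "published" only if it comes out significant, and an invisible-moderator scenario, where the original taps a real moderated effect (r = .3) that is absent in replication. It then reports how often the replication comes out significantly lower than the original.

```python
# A toy simulation (my construction; parameters arbitrary) contrasting a
# chance-only file drawer with an invisible-moderator scenario.
import numpy as np

rng = np.random.default_rng(0)

def frac_significantly_lower(rho_orig, rho_rep, n=100, trials=20000):
    """Fraction of published original/replication pairs in which the
    replication r is significantly lower (one-tailed p < .05)."""
    se = 1 / np.sqrt(n - 3)                      # Fisher-z standard error
    z_orig = rng.normal(np.arctanh(rho_orig), se, trials)
    published = z_orig / se > 1.645              # file drawer: only significant originals appear
    z_rep = rng.normal(np.arctanh(rho_rep), se, trials)[published]
    z_diff = (z_orig[published] - z_rep) / (se * np.sqrt(2))
    return (z_diff > 1.645).mean()

print(frac_significantly_lower(0.0, 0.0))  # chance only: a minority of pairs differ significantly (~.4 here)
print(frac_significantly_lower(0.3, 0.0))  # invisible moderator: a clear majority do (~.75 here)
```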

I think that is what we see.

[revised Sep. 10]

----------------------------------------

Related posts:
What a Non-Effect Looks Like (Aug. 7, 2013)
Meta-Analysis of the Effect of Religion on Crime: The Missing Positive Tail (Apr. 11, 2014)
Psychology Research in the Age of Social Media (Jan. 7, 2015)

----------------------------------------

Note 1: In no case was the original study's effect size outside the 95% interval for the replication study because the original study's effect size was too low.

Note 2: I used the r's and N's reported in the study's z-to-r conversions for non-excluded studies, then plugged the ones that were not obviously either significant or non-significant one-by-one into Lowry's online calculator for significant difference between correlation coefficients, using one-tailed p values and a significance threshold of p < .05. Note that this analysis is different from and more conservative than simply looking at whether the 95% CI of the replication includes the effect size of the original study, since it allows for statistical error in the original study rather than assuming a fixed original effect size.

Tuesday, September 08, 2015

Minds Online Conference

is cooking along. You might want to check out this week's lineup, on Perception and Consciousness:
  • Nico Orlandi (UC Santa Cruz): "Bayesian Perception Is Ecological Perception" (KEYNOTE)
  • Derek H. Brown (Brandon University): “Colour Layering and Colour Relationalism” (Commentators: Mazviita Chirimuuta and Jonathan Cohen)
  • Jonathan Farrell (Manchester): “‘What It Is Like’ Talk Is Not Technical Talk” (Commentators: Robert Howell and Myrto Mylopoulos)
  • E.J. Green (Rutgers University): “Structure Constancy” (Commentators: John Hummel and Jake Quilty-Dunn)
  • Assaf Weksler (Open University of Israel and Ben Gurion University): “Retinal Images and Object Files: Towards Empirically Evaluating Philosophical Accounts of Visual Perspective” (Commentators: RenĂ© Jagnow and Joulia Smortchkova)
Tuesday, September 01, 2015

A Defense of the Rights of Artificial Intelligences

... a new essay in draft, which I've done collaboratively with a student named Mara (whose last name is currently in flux and not finally settled).

This essay draws together ideas from several past blog posts, including:
  • Our Possible Imminent Divinity (Jan. 2, 2014)
  • Our Moral Duties to Artificial Intelligences (Jan. 14, 2015)
  • Two Arguments for AI (or Robot) Rights (Jan. 16, 2015)
  • How Robots and Monsters Might Break Human Moral Systems (Feb. 3, 2015)
  • Cute AI and the ASIMO Problem (July 24, 2015)
  • How Weird Minds Might Destabilize Ethics (Aug. 3, 2015)
------------------------------------------

Abstract:

There are possible artificially intelligent beings who do not differ in any morally relevant respect from human beings. Such possible beings would deserve moral consideration similar to that of human beings. Our duties to them would not be appreciably reduced by the fact that they are non-human, nor by the fact that they owe their existence to us. Indeed, if they owe their existence to us, we would likely have additional moral obligations to them that we don’t ordinarily owe to human strangers – obligations similar to those of parent to child or god to creature. Given our moral obligations to such AIs, two principles for ethical AI design recommend themselves: (1) design AIs that tend to provoke reactions from users that accurately reflect the AIs’ real moral status, and (2) avoid designing AIs whose moral status is unclear. Since human moral intuition and moral theory evolved and developed in contexts without AI, those intuitions and theories might break down or become destabilized when confronted with the wide range of weird minds that AI design might make possible.

Full version available here.

As always, comments warmly welcomed -- either by email or on this blog post. We're submitting it to a special issue of Midwest Studies with a hard deadline of September 15, so comments before that deadline would be especially useful.

[image source]