Friday, January 30, 2015

Not quite up to doing a blog post this week, after the death of my father on the 18th. Instead, I post this picture of a highly energy efficient device in outer space:

Related post: Memories of My Father.
Friday, January 23, 2015
Memories of My Father
He was a beloved psychology professor at California Lutheran University. Of teaching, he said that authentic education is less about textbooks, exams, and technical skills than about moving students "toward a bolder comprehension of what the world and themselves might become."
I have never known anyone, I think, who brought as much creative fun to teaching as he did. He gave out goofy prizes to students who scored well on his exams (e.g., a wind-up robot nun who breathed sparks of static electricity: "nunzilla"). Teaching about alcoholism, he would start by pouring himself a glass of wine (actually, water with food coloring), pouring more wine and acting drunker, arguing with himself, as the class proceeded. Teaching about child development, he would bring in my sister or me, and we would move our mouths like ventriloquist dummies as he stood behind us, talking about Piaget or parenting styles (and then he'd ask our opinion about parenting styles). Teaching about neuroanatomy, he brought in a brain jello mold, which he sliced up and passed around class for the students to eat ("yum! occipital cortex!"). Etc.
As a graduate student and then assistant professor at Harvard in the 1960s and 1970s, he shared the idealism of his mentors Timothy Leary and B.F. Skinner, who thought that through understanding the human mind we can transform and radically improve the human condition -- a vision he carried through his entire life.
His comments about education captured his ideal for thinking in general: that we should always aim toward a bolder comprehension of what the world, we ourselves, and the people around us might become.
He was always imagining the potential of the young people he met, seeing things in them that they often did not see in themselves. He especially loved juvenile delinquents, whom he encouraged to think expansively and boldly. He recruited them from street corners, paying them to speak their hopes and stories into reel-to-reel tapes, and he recorded their declining rates of recidivism as they did this, week after week. His book about this work, Streetcorner Research (1964), was a classic in its day. As a prospective graduate student in the 1990s, I proudly searched the research libraries at the schools I was admitted to, always finding multiple copies with lots of date stamps in the 1960s and 1970s.
With his twin brother Robert, he invented the electronic monitoring ankle bracelet, now used as an alternative to prison for non-violent offenders.
He wanted to set teenage boys free from prison, rewarding them for going to churches and libraries instead of street corners and pool halls. He had a positive vision rather than a penal one, and he imagined everyone someday using location monitors to share rides and to meet nearby strangers with mutual interests -- ideas which, in 1960, seem to have been about fifty years before their time.
With degrees in both law and psychology, he helped to reform institutional practice in insane asylums -- which were often terrible places in the 1960s, whose inmates had no effective legal rights. He helped force these institutions to become more humane and to release harmless inmates held against their will. I recall his stories about inmates who were often, he said, "as sane as could be expected, given their current environment", and maybe saner than their jailors -- for example an old man who decades earlier had painted his neighbor's horse as an angry prank, and thought he'd "get off easy" if he convinced the court he was insane.
As a father, he modeled and rewarded unconventional thinking. We never had an ordinary Christmas tree that I recall -- always instead a cardboard Christmas Buddha (with blue lights poking through his eyes), or a stepladder painted green, or a wild-found tumbleweed carefully flocked and tinseled -- and why does it have to be on December 25th? I remember a few Saturdays when we got hamburgers from different restaurants and ate them in a neutral location -- I believe it was the parking lot of a Korean church -- to see which burger we really preferred. (As I recall, my sister and he settled on the Burger King Whopper, while I could never confidently reach a preference, because it seemed like we never got the methodology quite right.)
He loved to speak with strangers, spreading his warm silliness and unconventionality out into the world. If we ordered chicken at a restaurant, he might politely ask the server to "hold the feathers". Near the end of his life, if we went to a bank together he might gently make fun of himself, saying something like "I brought along my brain," here gesturing toward me with open hands, "since my other brain is sometimes forgetting things now". For years, though we lived nowhere near any farm, we had a sign from the Department of Agriculture on our refrigerator sternly warning us never to feed table scraps to hogs.
I miss him painfully, and I hope that I can live up to some of the potential he so generously saw in me, carrying forward some of his spirit.
-----------------------------------------------
I am eager to hear stories about his life from people he knew, so please, if you knew him, add one story (or more!) as a comment below. (Future visitors from 2018 or whenever, still post!) Stories are also being collected on his Facebook wall.
We are planning a memorial celebration for him in July to which anyone who knew him would be welcome to come. Please email me for details if you're interested.
Posted by Eric Schwitzgebel at 9:37 AM 17 comments
Labels: moral psychology
Friday, January 16, 2015
Two Arguments for AI (or Robot) Rights: The No-Relevant-Difference Argument and the Simulation Argument
In Wednesday's post on our moral duties to artificial intelligences, I assumed that such artificial intelligences would deserve at least some moral consideration (maybe more, maybe less, but at least some). Eric Steinhart has pressed me to defend that assumption. Why think that such AIs would have any rights?
First, two clarifications:
- (1.) I speak of "rights", but the language can be weakened to accommodate views on which beings can deserve moral consideration without having rights.
- (2.) AI rights is probably a better phrase than robot rights, since similar issues arise for non-robotic AIs, including oracles (who can speak but have no bodily robotic features like arms) and sims (who have simulated bodies that interact with artificial, simulated environments).
Now, two arguments.
-----------------------------------------
The No-Relevant-Difference Argument
Assume that all normal human beings have rights. Assume that both bacteria and ordinary personal computers in 2015 lack rights. Presumably, the reason bacteria and ordinary PCs lack rights is that there is some important difference between them and us. For example, bacteria and ordinary PCs (presumably) lack the capacity for pleasure or pain, and maybe rights only attach to beings with the capacity for pleasure or pain. Also, bacteria and PCs lack cognitive sophistication, and maybe rights only attach to beings with sufficient cognitive sophistication (or with the potential to develop such sophistication, or belonging to a group whose normal members are sophisticated). The challenge, for someone who would deny AI rights, would be to find a relevant difference which grounds the denial of rights.
The defender of AI rights has some flexibility here. Offered a putative relevant difference, the defender of AI rights can either argue that that difference is irrelevant, or she can concede that it is relevant but argue that some AIs could have it and thus that at least those AIs would have rights.
What are some candidate relevant differences?
(A.) AIs are not human, one might argue; and only human beings have rights. If we regard "human" as a biological category term, then indeed AIs would not be human (excepting, maybe, artificially grown humans), but it's not clear why humanity in the biological sense should be required for rights. Many people think that non-human animals (apes, dogs) have rights. Even if you don't think that, you might think that friendly, intelligent space aliens, if they existed, could have rights. Or consider a variant of Blade Runner: There are non-humans among the humans, indistinguishable from outside, and almost indistinguishable in their internal psychology as well. You don't know which of your neighbors are human; you don't even know if you are human. We run a DNA test. You fail. It seems odious, now, to deny you all your rights on those grounds. It's not clear why biological humanity should be required for the possession of rights.
(B.) AIs are created by us for our purposes, and somehow this fact about their creation deprives them of rights. It's unclear, though, why being created would deprive a being of rights. Children are (in a very different way!) created by us for our purposes -- maybe even sometimes created mainly with their potential as cheap farm labor in mind -- but that doesn't deprive them of rights. Maybe God created us, with some purpose in mind; that wouldn't deprive us of rights. A created being owes a debt to its creator, perhaps, but owing a debt is not the same as lacking rights. (In Wednesday's post, I argued that in fact as creators we might have greater moral obligations to our creations than we would to strangers.)
(C.) AIs are not members of our moral community, and only members of our moral community have rights. I find this to be the most interesting argument. On some contractarian views of morality, we only owe moral consideration to beings with whom we share an implicit social contract. In a state of all-out war, for example, one owes no moral consideration at all to one's enemies. Arguably, were we to meet a hostile alien intelligence, we would owe it no moral consideration unless and until it began to engage with us in a socially constructive way. If we stood in that sort of warlike relation to AIs, then we might owe them no moral consideration even if they had human-level intelligence and emotional range. Two caveats on this: (1.) It requires a particular variety of contractarian moral theory, which many would dispute. And (2.) even if it succeeds, it will only exclude a certain range of possible AIs from moral consideration. Other AIs, presumably, if sufficiently human-like in their cognition and values, could enter into social contracts with us.
Other possibly relevant differences might be proposed, but that's enough for now. Let me conclude by noting that mainstream versions of the two most dominant moral theories -- consequentialism and deontology -- don't seem to contain provisions on which it would be natural to exclude AIs from moral consideration. Many consequentialists think that morality is about maximizing pleasure, or happiness, or desire satisfaction. If AIs have normal human cognitive abilities, they will have the capacity for all these things, and so should presumably figure in the consequentialist calculus. Many deontologists think that morality involves respecting other rational beings, especially beings who are themselves capable of moral reasoning. AIs would seem to be rational beings in the relevant sense. If it proves possible to create AIs who are psychologically similar to us, those AIs wouldn't seem to differ from natural human beings in the dimensions of moral agency and patiency emphasized by these mainstream moral theories.
-----------------------------------------
The Simulation Argument
Nick Bostrom has argued that we might be sims. That is, he has argued that we ourselves might be artificial intelligences acting in a simulated environment that is run on the computers of higher-level beings. If we allow that we might be sims, and if we know we have rights regardless of whether or not we are sims, then it follows that being a sim can't, by itself, be sufficient grounds for lacking rights. There would be at least some conceivable AIs who have rights: the sim counterparts of ourselves.
This whole post assumes optimistic technological projections -- assumes that it is possible to create human-like AIs whose rights, or lack of rights, are worth considering. Still, you might think that robots are possible but sims are not; or you might think that although sims are possible, we can know for sure that we ourselves aren't sims. The Simulation Argument would then fail. But it's unclear what would justify either of these moves. (For more on my version of sim skepticism, see here.)
Another reaction to the Simulation Argument might be to allow that sims have rights relative to each other, but no rights relative to the "higher level" beings who are running the sim. Thus, if we are sims, we have no rights relative to our creators -- they can treat us in any way they like without risking moral transgression -- and similarly any sims we create have no rights relative to us. This would be a version of argument (B) above, and it seems weak for the same reasons.
One might hold that human-like sims would have rights, but not other sorts of artificial beings -- not robots or oracles. But why not? This puts us back into the No-Relevant-Difference Argument, unless we can find grounds to morally privilege sims over robots.
-----------------------------------------
I conclude that at least some artificial intelligences, if they have human-like experience, cognition, and emotion, would have at least some rights, or deserve at least some moral consideration. What range of AIs deserve moral consideration, how much moral consideration they deserve, and under what conditions, I leave for another day.
-----------------------------------------
Related posts:
- Our Possible Imminent Divinity
- Our Moral Duties to Monsters
- Our Moral Duties to Artificial Intelligences
Posted by Eric Schwitzgebel at 9:07 AM 40 comments
Labels: science fiction, speculative fiction, stream of experience, transhumanism
Wednesday, January 14, 2015
Our Moral Duties to Artificial Intelligences
Suppose that we someday create artificial beings similar to us in their conscious experience, in their intelligence, in their range of emotions. What moral duties would we have to them?
You might think: Our moral duties to them would be similar to our moral duties to natural human beings. A reasonable default view, perhaps. If morality is about maximizing happiness (a common consequentialist view), these beings would deserve consideration as loci of happiness. If morality is about respecting the autonomy of rational agents (a common deontological view), these beings would deserve consideration as fellow rational agents.
One might argue that our moral duties to such beings would be less. For example, you might support the traditional Confucian ideal of "graded love", in which our moral duties are greatest for those closest to us (our immediate family) and decline with distance, in some sense of "distance": You owe less moral consideration to neighbors than to family, less to fellow-citizens than to neighbors, less to citizens of another country than to citizens of your own country -- and still less, presumably, to beings who are not even of your own species. On this view, if we encountered space aliens who were objectively comparable to us in moral worth from some neutral point of view, we might still be justified in favoring our own species, just because it is our own species. And artificial intelligences might properly be considered a different species in this respect. Showing equal concern for an alien or artificial species, including possibly sacrificing humanity for the good of that other species, might constitute a morally odious disloyalty to one's kind. Go, Team Human?
Another reason to think our moral duties might be less, or more, involves emphasizing that we would be the creators of these beings. Our god-like relationship to them might be especially vivid if the AIs exist in simulated environments controlled by us rather than as ordinarily embodied robots, but even in the robot case we would presumably be responsible for their existence and design parameters.
One might think that if these beings owe their existence and natures to us, they should be thankful to us as long as they have lives worth living, even if we don't treat them especially well. Suppose I create a Heaven and a Hell, with AIs I can transfer between the two locations. In Heaven, they experience intense pleasure (perhaps from playing harps, which I have designed them to intensely enjoy). In Hell, I torture them. As I transfer Job, say, from Heaven to Hell, he complains: "What kind of cruel god are you? You have no right to torture me!" Suppose I reply: "You have been in Heaven, and you will be in Heaven again, and your pleasures there are sufficient to make your life as a whole worth living. In every moment, you owe your very life to me -- to my choice to expend my valuable resources instantiating you as an artificial being -- so you have no grounds for complaint!" Maybe, even, I wouldn't have bothered to create such beings unless I could play around with them in the Torture Chamber, so their very existence is contingent upon their being tortured. All I owe such beings, perhaps, is that their lives as a whole be better than non-existence. (My science fiction story Out of the Jar features a sadistic teenage God who reasons approximately like this.)
Alternatively (and the first narrator in R. Scott Bakker's and my story Reinstalling Eden reasons approximately like this), you might think that our duties to the artificial intelligences we create are something like the duties a parent has to a child. Since we created them, and since we have godlike control over them (either controlling their environments, their psychological parameters, or both), we have a special duty to ensure their well being, which exceeds the duty we would have to an arbitrary human stranger of equal cognitive and emotional capacity. If I create an Adam and Eve, I should put them in an Eden, protect them from unnecessary dangers, ensure that they flourish.
I tend to favor the latter view. But it's worth clarifying that our relationship isn't quite the same as parent-child. A young child is not capable of fully mature practical reasoning; that's one reason to take a paternalistic attitude to the child, including overriding the child's desires (for ice cream instead of broccoli) for the child's own good. It's less clear that I can justify being paternalistic in exactly that way in the AI case. And in the case of an AI, I might have much more capacity to control what they desire than I have in the case of my children -- for example, I might be able to cause the AI to desire nothing more than to sit on a cloud playing a harp, or I might cause the AI to desire its own slavery or death. To the extent this is true, this complicates my moral obligations to the AI. Respecting a human peer involves giving them a lot of latitude to form and act on their own desires. Respecting an AI whose desires I have shaped, either directly or indirectly through my early parameterizations of its program, might involve a more active evaluation of whether its desires are appropriate. If the AI's desires are not appropriate -- for example, if it desires things contrary to its flourishing -- I'm probably at least partly to blame, and I am obliged to do some mitigation that I would probably not be obliged to do in the case of a fellow human being.
However, to simply tweak around an AI's desire parameters, in a way the AI might not wish them to be tweaked, seems to be a morally problematic cancellation of its autonomy. If my human-intelligence-level AI wants nothing more than to spend every waking hour pressing a button that toggles its background environment between blue and red, and it does so because of how I programmed it early on, then (assuming we reject a simple hedonism on which this would count as flourishing), it seems I should do something to repair the situation. But to respect the AI as an individual, I might have to find a way to persuade it to change its values, rather than simply reaching my hand in, as it were, and directly altering its values. This persuasion might be difficult and time-consuming, and yet incumbent upon me because of the situation I've created.
Other shortcomings of the AI might create analogous demands: We might easily create problematic environmental situations or cognitive structures for our AIs, which we are morally required to address because of our role as creators, and yet which are difficult to address without creating other moral violations. And even on a Confucian graded-love view, if species membership is only one factor among several, we might still end up with special obligations to our AIs: In some morally relevant sense of "distance" creator and created might be very close indeed.
On the general principle that one has a special obligation to clean up messes that one has had a hand in creating, I would argue that we have a special obligation to ensure the well being of any artificial intelligences we create. And if genuinely conscious human-grade AI somehow becomes cheap and plentiful, surely there will be messes, giant messes -- whole holocausts worth, perhaps. With god-like power comes god-like responsibility.
[Thanks to Carlos Narziss for discussion.]
Related posts:
[Updated 6:03 pm]
Posted by Eric Schwitzgebel at 2:48 PM 25 comments
Labels: ethics, science fiction, speculative fiction, transhumanism
Monday, January 12, 2015
The 10 Worst Things About Listicles
10. Listicles destroy the narrative imagination and subtract the sublimity from your gaze.
9. The numerosities of nature never equal the numerosity of human fingers.
8. The spherical universe becomes pretzel sticks upon a brief conveyor.
7. In every listicle, opinion subverts fact, riding upon it as upon a sad pony. (Since you momentarily accept everything you hear, you already know this.)
6. The human mind naturally aspires to unifying harmonies that the listicle instead squirts into yogurt cups.
5. Those ten yogurt pretzels spoiled your dinner. That is precisely the relation between a listicle and life.
4. In their eagerness to consume the whole load, everyone skips numbers 4 and 3, thereby paradoxically failing to consume the whole load. This little-known fact might surprise you!
3. Why bother, really. La la.
2. Near the end of eating a listicle you begin to realize, once again, that if you were going to be doing something this pointless, you might as well have begun filing that report. Plus, whatever became of that guy you knew in college? Is this really your life? Existential despair squats atop your screen, a fat brown google-eyed demon.
1. Your melancholy climax is already completed. The #1 thing is never the #1 thing. Your hope that this listicle would defy that inevitable law was only absurd, forlorn, Kierkegaardian faith.
Posted by Eric Schwitzgebel at 10:00 AM 4 comments
Labels: humor
Wednesday, January 07, 2015
Psychology Research in the Age of Social Media
Popular summaries of fun psychology articles regularly float to the top of my Facebook feed. I was particularly struck by this fact on Monday when these two links popped up:
- What Wealth Does to Your Soul (The Week)
- Wheat People vs. Rice People (New York Times)
Both, I will suggest, report fun results from methodologically dubious studies. Below, I will also present some dubious data of my own on this very issue!
But first, why think these two studies are methodologically dubious? [Skip the next two paragraphs if you like.]
The first article, on whether the rich are jerks, is based largely on a study I critiqued here. Among that study's methodological oddities: The researchers set up a jar of candy in their lab, ostensibly for children in another laboratory, and then measured whether wealthy participants took more candy from the jar than less-wealthy participants. Cute! Clever! But here's the weird thing. Despite the fact that the jar was in their own lab, they measured candy-stealing by asking participants whether they had taken candy rather than by directly observing how much candy was taken. What could possibly justify this methodological decision, which puts the researchers at a needless remove from the behavior being measured and conflates honesty with theft? The linked news article also mentions a study suggesting that expensive cars are more likely to be double-parked. Therefore, see, the rich really are jerks! Of course, another possibility is that the wealthy are just more willing to risk the cost of a parking ticket.
The second article highlights a study that examines a few recently-popular measures of "individualistic" vs. "collectivistic" thinking, such as the "triad" task (e.g., whether the participant pairs trains with buses [because of their category membership, supposedly individualistic] or with tracks [because of their functional relation, supposedly collectivistic] when given the three and asked to group two together). According to the study, the northern Chinese, from wheat-farming regions, are more likely to score as individualistic than are the southern Chinese, from rice-farming regions. A clever theory is advanced: wheat farming is individualistic, rice farming communal! (I admit, this is a cool theory.) How do we know that difference is the source of the different performance on the cognitive tasks? Well, two alternative hypotheses are tested and found to be less predictive of "individualistic" performance: pathogen prevalence and regional GDP per capita. Now, the wheat vs. rice difference is almost a perfect north-south split. Other things also differ between northern and southern China -- other aspects of cultural history, even the spoken language. So although the data fit nicely with the wheat-rice theory, many other possible explanations of the data remain unexplored. A natural starting place might be to look at rice vs. wheat regions in other countries to see if they show the same pattern. At best, the conclusion is premature.
I see the appeal of this type of work: It's fun to think that the rich are jerks, or that there are major social and cognitive differences between people based on the agricultural methods of their ancestors. Maybe, even, the theories are true. But it's a problem if the process by which these kinds of studies trickle into social media has much more to do with how fun the results are than with the quality of the work. I suspect the problem is especially serious if academic researchers who are not specialists in the area take the reports at face value, and if these reports then become a major part of their background sense of what psychological research has recently revealed.
Hypothetically, suppose a researcher measured whether poor people are jerks by judging whether people in more or less expensive clothing were more or less likely to walk into a fast-food restaurant with a used cup and steal soda. This would not survive peer review, and if it did get published, objections would be swift and angry. It wouldn't propagate through Facebook, except perhaps as the butt of critical comments. It's methodologically similar, but the social filters would be against it. I conjecture that we should expect to find studies arguing that the rich are morally worse, or finding no difference between rich and poor, but not studies arguing that the poor are morally worse (though they might be found to have more criminal convictions or other "bad outcomes"). (For evidence of such a filtering effect on studies of the relationship between religion and morality, see here.)
Now I suspect that in the bad old days before Facebook and Twitter, popular media reports about psychology had less influence on philosophers' and psychologists' thinking about areas outside their speciality than they do now. I don't know how to prove this, but I thought it would be interesting to look at the usage statistics on the 25 most-downloaded Psychological Science articles in December 2014 (excluding seven brief articles without links to their summary abstracts).
The article with the most views of its summary abstract was The Pen Is Mightier Than the Keyboard: Advantages of Longhand over Laptop Note Taking. Fun! Useful! The article had 22,389 abstract views in December 2014. It also had 1,320 full text or PDF downloads. Thus, it looks like at most 6% of the abstract viewers bothered to glance at the methodology. (I say "at most" because some viewers might go straight through to the full text without viewing the abstract separately. See below for evidence that this happens with other articles.) Thirty-seven news outlets picked up the article, according to Psychological Science, tied for highest with Sleep Deprivation and False Memories, which had 4,786 abstract views and 645 full text or PDF downloads (at most 14% clicking through).
Contrast these articles with the "boring" articles (not boring to specialists!). The single most downloaded article was The New Statistics: Why and How: 1,870 abstract views, 4,717 full-text and PDF views -- more than twice as many full views as abstract views. Psychological Science reports no media outlets picking this one up. I guess people interested in statistical methods want to see the details of the articles about statistical methods. One other article had more full views than abstract views: the ponderously titled Retraining Automatic Action Tendencies Changes Alcoholic Patients’ Approach Bias for Alcohol and Improves Treatment Outcome: 164 abstract views and 274 full views (67% more full views than abstract views). That article was only picked up by one media outlet. Overall, I found an r = -.49 (p = .01) correlation between the number of news-media pickups and the log of the ratio of full-text views to abstract views.
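For the curious, here is a rough sketch in Python of how that log-ratio correlation can be computed, using just the four articles quoted above. (The r = -.49 I report was computed over all 25 articles, so these four points will not reproduce it; they only illustrate the direction of the relationship.)

```python
# Rough sketch: correlate news-media pickups with log(full-text views / abstract views).
# Data are only the four articles quoted in this post; the reported r = -.49 used all 25.
import numpy as np
from scipy.stats import pearsonr

articles = [
    # (title, news pickups, abstract views, full-text/PDF views)
    ("The Pen Is Mightier Than the Keyboard", 37, 22389, 1320),
    ("Sleep Deprivation and False Memories",  37,  4786,  645),
    ("The New Statistics: Why and How",        0,  1870, 4717),
    ("Retraining Automatic Action Tendencies", 1,   164,  274),
]

pickups   = np.array([a[1] for a in articles], dtype=float)
log_ratio = np.log([a[3] / a[2] for a in articles])  # log(full views / abstract views)

r, p = pearsonr(pickups, log_ratio)
print(f"r = {r:.2f} (p = {p:.3f})")
```

With only four data points the p-value means little; the point is simply that more media pickups go with a lower ratio of full-text views to abstract views.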
I suppose it's not surprising that articles picked up by the media attract more casual readers who view the abstract only. I have no way of knowing how many of these readers are fellow academics in philosophy, psychology, and the other humanities and social sciences. But if many are, and if my hypothesis is correct that academic researchers are increasingly exposed to psychological research based on Tweetability rather than methodological quality, that's bad news for the humanities and social sciences. Even if the rich really are jerks.
Posted by Eric Schwitzgebel at 12:31 PM 12 comments
Labels: moral psychology, professional issues in philosophy, psychological methods, sociology of philosophy
Thursday, January 01, 2015
Writings of 2014
I guess it's a tradition for me now, posting a retrospect of the past year's writings on New Year's Day. (Here are the retrospects of 2012 and 2013.)
Two notable things this past year were (a.) several essays on a topic that is new to me: skeptical epistemology of metaphysics ("The crazyist metaphysics of mind", "If materialism is true, the United States is probably conscious", "1% skepticism", "Experimental evidence for the existence of an external world"); and (b.) a few pieces of philosophical science fiction, a genre in which I had only published one co-authored piece before 2014. These two new projects are related: one important philosophical function of science fiction is to enliven metaphysical possibilities that one might not otherwise have taken seriously, thus opening up more things to be undecided among.
Of course I also continued to work on some of my other favorite topics: self-knowledge, moral psychology, the nature of attitudes, the moral behavior of ethicists. Spreading myself a bit thin, perhaps!
Non-fiction appearing in print in 2014:
- “The moral behavior of ethics professors: Relationships among self-reported behavior, expressed normative attitude, and directly observed behavior” (first author, with Joshua Rust), Philosophical Psychology 27, 293-327.
- “The moral behavior of ethicists and the power of reason” (second author, with Joshua Rust), in Advances in Experimental Moral Psychology, ed. H. Sarkissian and J. Wright (Continuum).
- “The moral behavior of ethicists and the role of the philosopher”, in C. Lütge, H. Rusch, and M. Uhl, eds., Experimental Ethics (Palgrave).
- “The crazyist metaphysics of mind”, Australasian Journal of Philosophy 92, 665-682.
- “The problem of known illusion and the resemblance of experience to reality”, Philosophy of Science 81, 954-960.
- “What good philosophy does: David Chalmers’s The Conscious Mind”, Chronicle of Higher Education (Nov. 7).
- “A theory of jerks”, Aeon Magazine (Jun. 4).
- (interview) “The splintered skeptic”, in R. Marshall, Philosophy at 3:AM: Questions and Answers with 25 Top Philosophers (Oxford).
- (revision and update) “Introspection,” Stanford Encyclopedia of Philosophy.
- (reprint) “When your eyes are closed, what do you see?” (Ch. 8 of Perplexities of Consciousness, released as an MIT Press BITS).
- (reprint) “The essence of jerkitude”, The Week ("A theory of jerks", with different title).
Non-fiction finished and forthcoming:
- “Experimental evidence for the existence of an external world” (first author, with Alan T. Moore), Journal of the American Philosophical Association.
- “If materialism is true, the United States is probably conscious”, Philosophical Studies.
- “Philosophers recommend science fiction”, in S. Schneider, ed., Science Fiction and Philosophy, 2nd ed. (Wiley-Blackwell).
- “The behavior of ethicists” (first author, with Joshua Rust), in W. Buckwalter and J. Sytsma, eds., Blackwell Companion to Experimental Philosophy (Wiley-Blackwell).
Non-fiction drafts in circulation:
- “1% skepticism”.
- “Do ethics classes influence student behavior?” [a 2013 manuscript I'm not quite happy with but which I'm still collecting thoughts about].
- “Professional philosophers’ susceptibility to order effects and framing effects in evaluating moral dilemmas” (first author, with Fiery Cushman).
Some favorite blog posts:
- Our possible imminent divinity (Jan. 2)
- Our moral duties to monsters (Mar. 28)
- Meta-analysis of the effect of religion on crime: The missing positive tail (Apr. 11)
- Goldfish-pool immortality (May 30)
- Tononi's exclusion postulate would make consciousness (nearly) irrelevant (Jul. 16)
- SEP analysis continued: Jewish, non-Anglophone, queer, and disabled philosophers (Aug. 14)
- On aiming for moral mediocrity (Oct. 2)
- Possible psychology of a Matrioshka Brain (Oct. 9)
Science fiction stories:
- “What kelp remembers”, Weird Tales: Flashes of Weirdness series, #1, April 14, 2014.
- “The tyrant’s headache”, Sci Phi Journal, issue 3. [appeared Dec. 2014, dated Jan. 2015]
- “Out of the jar”, The Magazine of Fantasy and Science Fiction [forthcoming].
- “Momentary sage”, The Dark [forthcoming].
- (reprint) “Reinstalling Eden” (first author, with R. Scott Bakker), in S. Schneider, ed., Science Fiction and Philosophy, 2nd ed. (Wiley-Blackwell) [forthcoming].
Posted by Eric Schwitzgebel at 11:22 AM 0 comments
Labels: announcements