Monday, July 30, 2018

On What We Tell Pollsters

Barry Lam’s podcast Hi-Phi Nation has a new episode on “information silos” and what we tell pollsters. Partway through the episode, I am briefly interviewed about the nature of belief.

Lam is always fun, and the episode has a few twists you might not expect. One theme throughout the episode is a critique of the view generally accepted as implicit background in polling and in popular reports of poll results: that people tell pollsters what they actually believe. Lam explores an empirical challenge to this and a more philosophical challenge.

Empirical challenge: People who feel uncertain might answer by "cheerleading" for their side -- Republicans, for example, simply saying whatever they think will make Trump look good, Democrats saying whatever they think makes Trump look bad. If this is going on, then when the incentives are changed (for example, by paying respondents for right answers, with a smaller payment for admitting that they don’t know), they might instead reveal their true opinion. And even if they are not uncertain, they might simply lie to the pollster, saying what they plainly know to be false, to help or express support for their side.

A more philosophical challenge explores the question of what it is, really, to have a political, or politically loaded, belief. On some questions, there might not be a single straightforward fact about what you believe, hidden in a “secret compartment”, which you choose either to reveal or not reveal to the pollster. On climate change, or racial equality, or on what accommodations society owes to people with disabilities, you might be inclined to answer one way in one context or to one audience, and in quite a different way in another context or to another audience; you might wager thus-and-so when X is at stake, but quite differently when Y is at stake; your spontaneous reactions and your more guarded reactions might splinter in different directions; and so on. Among all these various thoughts and reactions, there needn’t be some privileged set that reflects your true belief while others are somehow misleading or inauthentic.

That, at least, is my view of belief. If you are sufficiently splintered, fragmented, or in-betweenish in your dispositional profile, then what you tell pollsters, even sincerely, will be only one element of a complicated picture. If what you say is misaligned with some other aspects of your speech and behavior, you might be merely cheerleading or lying, but you needn’t necessarily be. You might be answering as sincerely as you can, with the fragment of you that is called forth at the moment.

Full episode here.

[image source]

Tuesday, July 24, 2018

My New Book in Draft

Working title:

Jerks, Zombie Robots, and Other Philosophical Misadventures

[former working title: How to Be a Crazy Philosopher]

The book is composed of several dozen blog posts and popular articles, on philosophy, psychology, culture, and technology, updated and revised, selected from eleven hundred I published between 2006 and 2018.

The full draft is available here.

I will be revising it for the rest of the summer and into the fall, so feedback is appreciated! In addition to the usual content-level feedback, I also welcome feedback on: (a) alternative possible titles, (b) posts or articles that I should have included but didn't, (c) posts or articles that aren't up to the quality of the others and should be cut.

The book is divided into 61 chapters in five parts. Every chapter is free-standing; there's no need to read them in order.

[a haphazard sample of the stacks of books in my office, consulted during revision]

Table of Contents:

Part One: Moral Psychology

1. A Theory of Jerks
2. Forgetting as an Unwitting Confession of Your Values
3. The Happy Coincidence Defense and The-Most-I-Can-Do Sweet Spot
4. Cheeseburger Ethics (or How Often Do Ethicists Call Their Mothers?)
5. On Not Seeking Pleasure Much
6. How Much Should You Care about How You Feel in Your Dreams?
7. Imagining Yourself in Another’s Shoes vs. Extending Your Love
8. Aiming for Moral Mediocrity
9. A Theory of Hypocrisy
10. On Not Distinguishing Too Finely Among Your Motivations
11. The Mush of Normativity
12. A Moral Dunning-Kruger Effect?
13. The Moral Compass and the Liberal Ideal in Moral Education

Part Two: Technology

14. Should Your Driverless Car Kill You So Others May Live?
15. Cute AI and the ASIMO Problem
16. My Daughter’s Rented Eyes
17. Someday, Your Employer Will Technologically Control Your Moods
18. Cheerfully Suicidal AI Slaves
19. We Have Greater Moral Obligations to Robots Than to (Otherwise Similar) Humans
20. Our Moral Duties to Monsters
21. Our Possible Imminent Divinity
22. Skepticism, Godzilla, and the Artificial Computerized Many-Branching You
23. How to Accidentally Become a Zombie Robot

Part Three: Culture

24. Dreidel: A Seemingly Foolish Game That Contains the Moral World in Miniature
25. Does It Matter If the Passover Story Is Literally True?
26. Memories of My Father
27. Flying Free of the Deathbed, with Technological Help
28. Thoughts on Conjugal Love
29. Knowing What You Love
30. The Epistemic Status of Deathbed Regrets
31. Competing Perspectives on One’s Final, Dying Thought
32. Profanity Inflation, Profanity Migration, and the Paradox of Prohibition (or I Love You, “Fuck”)
33. The Legend of the Leaning Behaviorist
34. What Happens to Democracy When the Experts Can’t Be Both Factual and Balanced?
35. On the Morality of Hypotenuse Walking
36. Birthday Cake and a Chapel

Part Four: Consciousness and Cosmology

37. Possible Psychology of a Matrioshka Brain
38. A Two-Seater Homunculus
39. Is the United States Literally Conscious?
40. Might You Be a Cosmic Freak?
41. Penelope’s Guide to Defeating Time, Space, and Causation
42. Choosing to Be That Fellow Back Then: Voluntarism about Personal Identity
43. How Everything You Do Might Have Huge Cosmic Significance
44. Goldfish-Pool Immortality
45. How Big the Moon Is, According to One Three-Year-Old
46. Tononi’s Exclusion Postulate Would Make Consciousness (Nearly) Irrelevant
47. What’s in People’s Stream of Experience During Philosophy Talks?
48. The Paranoid Jeweler and the Sphere-Eye God
49. The Tyrant’s Headache

Part Five: The Psychology and Sociology of Philosophy

50. Truth, Dare, and Wonder
51. Trusting Your Sense of Fun
52. Why Metaphysics Is Always Bizarre
53. The Philosopher of Hair
54. Kant on Killing Bastards, Masturbation, Organ Donation, Homosexuality, Tyrants, Wives, and Servants
55. Obfuscatory Philosophy as Intellectual Authoritarianism and Cowardice
56. Nazi Philosophers, World War I, and the Grand Wisdom Hypothesis
57. Against Charity in the History of Philosophy
58. Invisible Revisions
59. On Being Good at Seeming Smart
60. Blogging and Philosophical Cognition, or Why Blogging Is the Ideal Form of Philosophy!!! :-)
61. Will Future Generations Find Us Morally Loathsome?

Thursday, July 19, 2018

Think of Your Dissertation as Your Longest Work, Not Your Best Work

You* know how to write a decent seminar paper. You've done it at least a dozen times. So why is it so hard to get going on that dissertation?

[* for values of "you" of approximately 3rd-5th year in a philosophy PhD program]

Your advisor might stink. That's rough. It can definitely slow you down. But that's probably not the main reason.

You've got to teach or TA. Yes, that consumes oodles of time if you're conscientious about it. But I doubt that's the main reason either.

I suspect that the main reason, for most students, is excessive expectations. You want your dissertation to be the best thing you've ever written. You want to finally address a big, ambitious topic. You want work that will wow your advisor and everyone else.

Sure, of course you do! Those are good things to want. The problem is that the wanting interferes with the getting.

It interferes in two ways: by leading you to take responsibility for a larger literature than you can digest in the time allotted, and by encouraging perfectionism.

Against taking on too big a literature:

How do you write a seminar paper? You read the ten or thirty things assigned for the seminar, plus maybe a few other things. You especially master one or two of them, and then you develop your critique or alternative position. Now you've got a fine little criticism, say, of Gendler's view of "alief" in light of a couple of recent review papers on implicit bias. Seminar grade: A. Lovely!

But now you're on your dissertation. Time for a big, ambitious topic. Time to throw yourself deep into something important. You want to defend the existence of moral facts. Or you want to show how it's possible to have infallible introspective knowledge of your own experience. Great! To do this responsibly, what do you need to read? Um...

... the whole entire literature on moral facts or introspection?

That might take a while. Could you possibly do it in a semester or a year (while teaching, with life happening, etc.)? How do you even get started? Meanwhile, the clock is ticking. You feel like you've got to start writing. You can't not write for two and a half years while you read every book and article that has been written about these topics. But neither can you really write in an informed way without having read all of that stuff. So are you supposed to write in an uninformed way? If you're like most of us and you think in part by writing, how are you going to even organize it all in your mind and remember it if you aren't writing?

That's a nasty little pickle.

Here's the way to duck the pickle: Choose a much smaller topic that you really can master in a dedicated couple of months. Not just any topic, of course. One relevant to the big picture you have in mind.

Suppose your big topic is the (in-)fallibility of introspection of conscious experience. You might read everything pro and con in the recent literature on "containment" models of introspection, according to which the judgment I am in conscious state S literally contains conscious state S within it as a part (and is arguably thus infallibly correct). This topic is small enough to master and write about in a semester, if you are already a well-trained graduate student in philosophy of mind who has given it some preliminary thought. Though it's a small literature, really getting it right is an ample task for a big, long chapter. And the topic is centrally related to the big-picture issue of introspective infallibility in the recent literature.

Then, for the next chapter, find something nearby that is similarly small and tractable, to which you can bring some of the insights and tools from your previous chapter. Continuing the example, maybe neo-expressivist views of the nature of introspection.

Repeat a few times, and you'll find you've actually covered a fairly large territory in sum. In the process, you'll have mastered much of the literature that seemed so impossibly large at the start.

Find the tiniest thing you can find that is relevant to your topic while also being of significant interest to specialists in the area, and become the absolute bleeding world expert on it. Then bore your advisor with sixty-plus pages of a tediously thorough treatment. This is how to duck the pickle.

Against perfectionism:

If I said, "Michelangelo, next month create the best art you've ever made," could he do it? Not likely!

Don't think of your dissertation as your best work ever. Don't hold your writing to that difficult a standard. Maybe at some point down the road, after a bunch of revision, some favorite piece of it will turn out to be your best work ever. (More on this below.) But having that kind of expectation is not the way to start writing. Reconcile yourself to the likely fact that you're not going to produce the best work of your life in the next few months.

Here's a better way to think of it: Your dissertation will be the longest and most thorough thing you've ever written. That's all. Instead of aiming for brilliant prose, aim for length. Choose that little thing and cover it so exhaustively that no one who reads your chapter on the topic could doubt that you know that tiny corner of academia as well as anyone else in the world.

Send your advisor something long and hairy. It's okay. You're not being graded on prose style. Get it out to your advisor for feedback, and to others you trust, even though it's not the most beautiful of things. You can revise it later! That's the point of getting feedback, right?

Settle. Your target should be: rough, covers the bases (except for that one thing you forgot which you'll put in later), good enough to move on to the next chapter.

After the feedback, you'll see some things that definitely need changing. But unless some radical rethinking is required, don't make those changes yet! Instead, move on. Write the next chapter. Set the feedback aside to return to later, after you've rough drafted your other chapters. In the course of writing those other chapters, you'll probably have insights that influence your thinking about the earlier chapters, so they'll need significant revising for that reason anyway.

Do this several times and you'll find that you have a few hundred pages of long, ugly chapters that take you from the beginning to the end. Then revise. Rewrite the whole thing top to bottom. Now, at the end -- not in the early drafting stage -- is the time to make it... well I won't say perfect, but better. Polished. This is the last thing you do, simultaneously with hitting the job market.

Likely you will find, at the end, once you're done polishing, that your dissertation, or at least your favorite piece of it, is the best thing you've ever written. That's not because any chapter was fantastic in its initial draft, but rather because you've never before in your life spent a few years thinking about a single topic, and as a result you now have that topic down so well that you really see to the core of it. You know everyone else's views, and you can lay out the strengths and weaknesses of those views, and your great command of the material will show in the final, polished version of your work.

Start today:

So if you're sitting there stalled out on your dissertation, take Doctor Eric's three-step remedy:

(1.) Let go of the thought that you are about to produce the best work you've ever written. Lower your ambitions.

(2.) Choose a topic so narrow that you can read everything relevant in short order, and then read that stuff.

(3.) Write out that narrow thing, long and boring and ugly, blow by blow.

At the end, you'll have several dozen long, ugly, boring, thorough pages -- exactly the kind of material from which excellent dissertations are eventually built.

[image source]

Thursday, July 12, 2018

Two Ways of Being a Group Mind: Synchronic vs. Diachronic

Based on last week's post, I am now seeing ads for sunglasses everywhere, as if to say "Welcome, Eric, to the internet hypermind! Did you say SUNGLASSES?!"

Speaking of hyperminds....

I want to distinguish two ways of being a group mind, since I know you care immensely about the cognitive architecture of group minds (okay, really just because I'm a dork). My thought is that the philosophical issues of group consciousness and personal identity play out differently in the two types of case.

Synchronic:

Examples: Star Trek's Borg (mostly), Ann Leckie's ancillaries, highly interconnected nations (according to me), Vernor Vinge's tines.

Synchronic group minds are probably the default conception. A bunch of independent, or semi-independent, or independent-enough entities remain in constant or constant-enough communication with each other. Their communication is sufficiently rich or sufficiently well structured that it gives rise to group-level mental states in the whole. In the most interesting case, the individual entities are distinct enough to have rich mental lives but also the group is well enough coordinated that it also, simultaneously, has a distinctive, rich mental life above and beyond that of its members.

Diachronic:

Examples: David Brin's Kiln People, Linda Nagata's ghosts, Benjamin Kinney's forks.

In a diachronic group mind, communication is relatively rare but high bandwidth and transformative. Post-communication, the individuals inherit mental states from the others. In the most interesting case, the inheritance process is very different from just listening credulously to someone's testimony; rather it's more like a direct transfer of memories, plans, and opinions, maybe even values or personality. Imagine "forking" into three versions of yourself, going your separate ways for the day, and then at the end of the day merging back into a single individual, integrating the memories and plans of each, and averaging any general changes in value, opinion, or personality. Tomorrow and every day thereafter, you will fork and merge in the same way.
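If it helps to see the fork-and-merge cycle laid out mechanically, here is a toy sketch in Python. The representation of a mind as a set of memories plus a dictionary of value-settings, and the rule of pooling memories and averaging value drift, are simplifying assumptions of mine for illustration, not anything drawn from Brin, Nagata, or Kinney.

    from dataclasses import dataclass, field

    @dataclass
    class MindState:
        # Toy model: memories as a set of labels, values as named dials.
        memories: set = field(default_factory=set)
        values: dict = field(default_factory=dict)

    def fork(original, n):
        """Split into n copies that go their separate ways for the day."""
        return [MindState(set(original.memories), dict(original.values))
                for _ in range(n)]

    def merge(forks):
        """Recombine at day's end: pool all memories, average value drift."""
        merged = MindState()
        for f in forks:
            merged.memories |= f.memories
        keys = {k for f in forks for k in f.values}
        merged.values = {k: sum(f.values.get(k, 0.0) for f in forks) / len(forks)
                         for k in keys}
        return merged

    you = MindState({"breakfast"}, {"risk_tolerance": 0.5})
    morning_forks = fork(you, 3)
    morning_forks[0].memories.add("job tip")
    morning_forks[0].values["risk_tolerance"] = 0.7
    morning_forks[1].memories.add("afternoon downtown")
    morning_forks[2].memories.add("wallet full of money")
    tomorrow_you = merge(morning_forks)
    # tomorrow_you remembers what all three forks did, with risk_tolerance ~0.57

The philosophical questions below concern what, if anything, a structure like this implies for consciousness and personhood; the code is only meant to fix the architecture in mind.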

[Cerberus might not be integrated enough to be a good example of a group mind, but I didn't want to attach another darn picture of the Borg.]

Tradeoffs Between Group-Level and Individual-Level Personhood and Autonomy:

As I have described it here, delay between information-transfer episodes is the fundamental difference between these types of group minds: whether the minds are in constant or constant-enough communication, or whether instead they communicate only at long intervals. Obviously, temporal distance admits of degree, but this difference in degree creates structural pressures. If communication is infrequent, its effects have to be radical if it is to give rise to an entity sufficiently integrated to be worth calling a "group mind". If every day a group of friends meets to exchange information and plan the next day's activities, in the ordinary way people sometimes do this, I suppose that in some weak sense they have formed a group mind. But they haven't done so in the radical science-fictional sense I'm imagining. For example, if there were five friends who did this, there would still be exactly five persons -- entities with serious rights, whose destruction would be worth calling murder. For the emergence of something more metaphysically and morally interesting, the exchange has to be radical enough to challenge the boundaries of personal identity.

Conversely, if communication is constant and its effects are radical, it's not clear that we have a group of individuals in any interesting sense: We might just have a single non-group entity that happens to be spatially scattered (as in my Martian Smartspiders).

In other words, to be a philosophically interesting group entity there must be some sort of interestingly autonomous mentality both at the individual level and at the group level. Massive transformative communication (as in diachronic merging of memories and values) radically reduces autonomy: If communication is both massively transformative and very frequent, there's no chance for interesting person-like autonomy at the individual level. If communication is neither massively transformative nor very frequent, there's no chance for interesting person-like autonomy at the group level.

Consciousness:

Our intuitive judgments about group-level consciousness are probably pretty crappy (as I've argued here and here). But our general theories about consciousness as they apply to the group level are probably even crappier (as I've argued here and here). At the same time, whether the group as a whole has a stream of conscious experience over and above the consciousness of its individual members seems like a very important question if we're interested in its mentality and whether it deserves moral status as a person. So we're kind of stuck. We'll have to guess.

Plausibly, in the diachronic case there is no stream of consciousness beyond that of the merging individuals. When there's one body at night, there's one stream of consciousness (at most, if it's dreaming). When there are three bodies off doing their thing, there are three streams of consciousness. We might be able to create some problematic boundary cases during the merge, but maybe that's marginal enough to dismiss with a bit of hand waving.

The synchronic case is, I think, more conceptually challenging with respect to consciousness. If we allow that minimally interactive groups do not give rise to group level consciousness and we also allow that a fully informationally integrated but spatially distributed entity does give rise to consciousness, it seems that we can create a slippery slope from one case to the other by adding more integration and communication (for example here). At some point, if there is enough coherent behavior, self-representation, and information exchange at the group level, most standard functionalist views of consciousness (unless they accept an anti-nesting principle) should allow that each individual member of the group would have a stream of experience and also that there would be a further, different stream of experience at the group level. But it's a tricky question how much integration and information exchange, and with what kind of structural properties, is necessary for group-level consciousness to arise.

Personhood:

One interesting issue that arises is the extent to which an individual's beliefs about what counts as "self-interest" and "death" define the boundaries of their personhood. Consider a diachronic case: You are walking back home after your day out and about town, with a wallet full of money and interesting new information about a job opportunity tomorrow, and you are about to merge back together with the two other entities you forked off from this morning. Is this death? Are "you" going to be gone after the merge, your memories absorbed into some entity who is not you (but who you might care about even more than you care about yourself)? In walking back, are you magnanimously sacrificing your life to give your money and information to the entity who will exist tomorrow? Would it be more in your self-interest to run away and blow your wad on something fun for this current body? Or, instead, will it still be "you" tomorrow, post-merge, with that information and that money? To some extent, in unclear cases of this sort, I think it might depend on how you think and feel about it: It's to some extent up to you whether to conceptualize the merging together as death or not.

A parallel issue might arise with synchronic groups, though my hunch is that it would play out differently. Synchronic groups, as I'm imagining them, don't have identity-threatening splits and merges. The individual members of synchronic groups would seem to have the same types of rights that otherwise similar individuals who aren't members of synchronic group minds would have -- rights depending on (for example, but it's not this simple) their capacity to suffer and think and choose as individuals. They might choose, as individuals, to view the group welfare as much more important than their own welfare (as a soldier might choose to die for the sake of country); but unless there's some real loss of autonomy or consciousness, this doesn't threaten their status as persons or redefine the boundaries of what counts as death.

Related:

Possible Architectures of Group Minds: Perception (May 4, 2016)

Possible Architectures of Group Minds: Memory (Jun 14, 2016)

Group Minds on Ringworld (Oct 24, 2012)

If Materialism Is True, the United States Is Probably Conscious (academic essay in Philosophical Studies, 2015)

Our Moral Duties to Monsters (Mar 8, 2014)

Choosing to Be That Fellow Back Then: Voluntarism about Personal Identity (Aug 20, 2016).

[image source]

Friday, July 06, 2018

How to Create Immensely Valuable New Worlds by Donning Your Sunglasses

[A satisfied REI customer, creating whole gobs of new worlds.]

One world is good, so two is better, right?

(If you think one world is bad and two are worse, just invert the reflections below.)

Here's one way to look at it: Run a universe, or a world, from beginning to end, sum up all the good stuff, subtract all the bad stuff, and note the (hopefully positive) total. Now consider, from your end-of-the-universe God's-eye point of view: Should you launch another world similar to the previous one? Well, of course you should! There would be even more good stuff, and a higher positive total, after that second world has been run. Similar considerations suggest that two good worlds running in parallel would also be better than a single good world.

(To avoid problems with summing infinitudes, let's assume finite worlds with finite value. For some complexities regarding the value or disvalue of repetition specifically see this earlier post.)

Now, on some interpretations of quantum mechanics, every time there's a quantum event with different possible outcomes, all of the outcomes occur, each in a different world. Such "many-worlds" interpretations often describe the world as "splitting" into two worlds -- one world in which Outcome A occurs and one in which Outcome B occurs. You too, the observer, will split: One copy of you goes into World A, observing Outcome A, and the other goes into World B, observing Outcome B.

In a classic article on the many-worlds interpretation of quantum mechanics, Bryce S. DeWitt writes:

This universe is constantly splitting into a stupendous number of branches, all resulting from the measurementlike interactions between its myriads of components. Moreover, every quantum transition taking place on every star, in every galaxy, in every remote corner of the universe is splitting our local world on earth into myriads of copies of itself.... I still recall vividly the shock I experienced on first encountering this multiworld concept. The idea of 10^100+ slightly imperfect copies of oneself all constantly splitting into further copies, which ultimately become unrecognizable.... (DeWitt 1970, p. 33).

Notice that on DeWitt's portrayal, there are a finite but large number of worlds: 10^100+. Now here's the normative question I want to consider: Is there positive value -- would it be ethically or prudentially or aesthetically good -- to increase the amount of splitting, so that there are more worlds rather than fewer?

On the face of it, it seems that, yes, it would be good if there were more worlds. If each world independently considered is good, then plausibly more worlds is better. Suppose World W has positive value V. Suppose now that it splits into two worlds that are very similar but not identical: W1 with value V1 and W2 with value V2. If we assume V1 ~= V2 ~= V and that the value of worlds is additive, then after the split the whole W1+W2 is approximately twice the value of W. We have doubled the amount of value that the cosmos as a whole contains!
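In symbols, the bookkeeping is just this (restating the two assumptions above, near-equal value and additivity):

    V_1 + V_2 \;\approx\; V + V \;=\; 2V, \qquad \text{given } V_1 \approx V_2 \approx V.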

Now you might object in one of three ways:

(1.) You might reject the whole splitting-worlds interpretation. Fair enough! But then you're not really playing the game. I'm interested in thinking about the normative consequences assuming that the world does split as DeWitt describes.

(2.) You might reject the assumption that V1 ~= V. For example, you might think that after splitting, each world has only approximately half the value that the world before the split had, so that V1 + V2 ~= V. Then no value would be gained by splitting. A tempting thought, perhaps. But why would splitting make a world half as valuable? It's not clear. One consequence of this view is that our world, constantly splitting, would be constantly halving in value, so that its value is plummeting by orders of magnitude every second. Hmmm, that seems no less bizarre than the idea that splitting doubles the value of cosmos. Now if you thought that the other worlds already existed before the split, then you might reject V1 ~= V, since the subset of worlds in V1 would be about half (or whatever) of the total number of worlds in V. But the whole idea of splitting worlds is not that the worlds already existed. It's that they are created. So we're talking about whether creating a new valuable world adds value to the cosmos as a whole. Pending a good argument otherwise, it seems like it should.

(3.) You might reject additivity. You might say that although V1 ~= V2 ~= V, you can't simply sum V1 and V2 to get ~= 2V. I can feel the pull of this idea. If the worlds were strictly identical, you might say "well, there's no point in duplicating everything again"! But (a.) The worlds are not strictly identical, and in fact (given chaos theory) they might diverge increasingly over time. And (b.) Duplication plausibly adds value when worlds are temporally separated (at the end of the universe, it seems like not re-running the thing again would involve omitting possible future pleasures that future inhabitants of the world would have), and side-by-side splitting wouldn't seem to be very normatively different in principle.

Okay, let's pretend you're convinced. When new worlds arise through quantum mechanical splitting, that's terrific. Whole realms of value are added! The value of the cosmos as a whole approximately doubles. Now, accepting this, is there anything you should do differently?

Consider the following possible principle:

Conservation of Splitting. Over any time interval t, the world will split N times, no matter what you do.

As far as I can see, there's no reason to accept Conservation of Splitting. It seems like you can create situations in which there would be relatively more splitting or relatively less splitting. You can run more quantum mechanical experiments or fewer. You can make more quantum mechanical measurements or fewer. And if you run more experiments and make more measurements, there will be more splitting, and thus more worlds.

For example, that pair of polarized sunglasses you have, sitting in that dark sunglass case in that dark drawer? When a photon passes or fails to pass through a polarized lens, that's a quantum mechanical event -- a chance event, which, on this interpretation, results in a splitting of worlds. In some worlds the photon goes through; in others it does not. You could take those sunglasses out of the drawer. You could go to the beach and wear them in the sun. Many more photons will pass through those lenses! Maybe about 10^18 more photons per second. If each of those photons has an independent quantum chance of passing or not passing through those lenses, splitting the world, then you're creating 2^(10^18) new worlds per second just by sitting on the beach -- worlds that would not have been created had the sunglasses remained cased in the drawer. Think of all the value you're adding to the cosmos!
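If you want the arithmetic made concrete, here is a rough back-of-the-envelope estimate in Python. The irradiance, photon energy, and lens area are stock physics numbers I'm supplying for illustration, and the last step only reports the order of magnitude of 2^(photons per second), since the number itself is far too large to compute.

    import math

    # Rough assumptions, not figures from the post:
    irradiance = 1000.0        # W/m^2, full sunlight at the surface
    photon_energy = 4e-19      # J, a typical visible-light photon (~2.5 eV)
    lens_area = 4e-3           # m^2, two lenses of roughly 20 cm^2 each

    photons_per_second = irradiance * lens_area / photon_energy
    print(f"photons per second through the lenses: ~{photons_per_second:.0e}")

    # If each photon independently splits the world in two, the number of
    # resulting worlds per second is 2^(photons per second) -- report only
    # its order of magnitude.
    log10_worlds_per_second = photons_per_second * math.log10(2)
    print(f"log10(new worlds per second): ~{log10_worlds_per_second:.1e}")

On these assumptions the flux comes out around 10^19 photons per second -- the same ballpark as the 10^18 figure above -- and the exponent on the number of new worlds is correspondingly astronomical.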

Effective altruism is a movement that recommends using reason and evidence to do the most good you can do. Normally, effective altruists recommend doing things like donating money to charities that effectively help to alleviate suffering due to poverty. But the value of saving one life has to be tiny compared to the value of creating 2^(10^18) new universes per second! So instead of staying indoors in the shade, writing a check to the Against Malaria Foundation, maybe you'd do better to spend the day at the beach.

Even if you have only a 0.001% credence that the splitting worlds interpretation of quantum mechanics is true, the expected utility of sitting on the beach might far exceed that of donating to poverty relief.
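For what it's worth, here's a toy expected-value comparison, worked in log space because the number of worlds can't be represented directly. The per-world value and the value assigned to the donation are placeholders I've made up purely for illustration.

    import math

    # Toy numbers, purely illustrative:
    credence = 1e-5               # 0.001% credence in the splitting-worlds view
    photons_per_second = 1e18     # the figure used in the post above
    log10_value_per_world = 0.0   # pretend each new world is worth 1 unit

    # log10 of the expected value of one second at the beach:
    # credence * 2^(photons per second) * value per world.
    log10_eu_beach = (math.log10(credence)
                      + photons_per_second * math.log10(2)
                      + log10_value_per_world)

    # A generous toy value for the donation: a million units.
    log10_eu_donation = 6.0

    print(f"log10 EU(one second at the beach): ~{log10_eu_beach:.3g}")
    print(f"log10 EU(donation): {log10_eu_donation}")

Discounting by the 0.001% credence only shaves 5 off an exponent of roughly 3 x 10^17, which is the point: on these (dubious) assumptions, no realistic valuation of the donation comes close.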

I know that it doesn't seem intuitively very plausible that going to the beach is far morally better than donating money to effective, life-saving charities, but consequentialist philosophers are often willing to admit that what's ethically best might not match our intuitions about what's ethically best.

PS: I feel so bad about making this argument that I just donated to Oxfam.

--------------------------------------------------

Related Posts:

Duplicating the Universe (Apr 29, 2015)

Goldfish Pool Immortality (May 30, 2014)

How Robots and Monsters Might Break Human Moral Systems (Feb. 3, 2015)

[image source]