Suppose that we someday create artificially intelligent systems (AIs) who are capable of genuine consciousness, real joy and real suffering. Yes, I admit, I spend a lot of time thinking about this seemingly science-fictional possibility. But it might be closer than most of us think; and if so, the consequences are potentially huge. Who better to think about it in advance than we lovers of consciousness science, moral psychology, and science fiction?
Friday, January 27, 2023
Hedonic Offsetting for Harms to Artificial Intelligence?
Posted by Eric Schwitzgebel at 9:54 AM 24 comments
Labels: AI/robot/Martian rights
Wednesday, January 18, 2023
New Paper in Draft: Dispositionalism, Yay! Representationalism, Boo! Plus, the Problem of Causal Specification
Cynthia rises from the couch to go get that beer. If we accept industrial-strength representationalism, in particular the Kinematics and Specificity theses, then there must be a fact of the matter exactly which representations caused this behavior. Consider the following possible candidates:
- There’s beer in the fridge.
- There’s beer in the refrigerator door.
- There’s beer on the bottom shelf of the refrigerator door.
- There’s beer either on the bottom shelf of the refrigerator door or on the right hand side of the lower main shelf.
- There’s beer in the usual spot in the kitchen.
- Probably there’s beer in the place where my roommate usually puts it.
- There’s Lucky Lager in the fridge.
- There are at least three Lucky Lagers in the fridge.
- There are at least three and no more than six cheap bottled beers in the fridge.
- In the fridge are several bottles of that brand of beer with the rebuses in the cap that I used to illicitly enjoy with my high school buddies in the good old days.
- Somewhere in the fridge, but probably not on the top shelf, are a few bottles, or less likely cans, of either Lucky Lager or Pabst Blue Ribbon, or maybe some other cheap beer, unless my roommate drank the last ones this afternoon, which would be uncharacteristic of her.
This list could of course be continued indefinitely. Estimating conservatively, there are at least a billion such candidate representational contents. For simplicity, imagine nine independent parameters, each with ten possible values.
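To see how quickly the candidates multiply, here is a toy back-of-the-envelope sketch (the nine dimensions and their labels are my own illustrative stand-ins for the kinds of variation in the list above, not a claim about Cynthia's actual psychology): nine independent dimensions of content variation, ten variants each, yields 10^9 distinct candidate contents.

```python
# Toy illustration of the combinatorics behind "at least a billion
# candidate contents." The nine dimensions and their ten variants each
# are hypothetical stand-ins for the kinds of variation in the list above.
content_dimensions = {
    "brand": 10,       # Lucky Lager / "that rebus beer" / some cheap beer / ...
    "location": 10,    # fridge / refrigerator door / bottom shelf / ...
    "quantity": 10,    # some / at least three / three to six / ...
    "container": 10,   # bottles / cans / either / ...
    "hedging": 10,     # flat assertion / "probably" / "unless..." / ...
    "source": 10,      # roommate put it there / the usual spot / ...
    "time": 10,        # now / this afternoon / usually / ...
    "framing": 10,     # "the good old days" / "the usual place" / ...
    "comparison": 10,  # cheap / better than Pabst / ...
}

num_candidates = 1
for variants in content_dimensions.values():
    num_candidates *= variants

print(f"{num_candidates:,}")  # 1,000,000,000 candidate contents
```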
If Kinematics and Specificity [commitments of "industrial-strength" representationalism, as described earlier in the essay] are correct, there must be a fact of the matter exactly which subset of these billion possible representational contents were activated as Cynthia rose from the couch. Presumably, also, various background beliefs might or might not have been activated, such as Cynthia’s belief that the fridge is in the kitchen, her belief that the kitchen entrance is thataway, her belief that it is possible to open the refrigerator door, her belief that the kitchen floor constitutes a walkable surface, and so on – each of which is itself similarly specifiable in a massive variety of ways.
Plausibly, Cynthia believes all billion of the beer-in-the-fridge propositions. She might readily affirm any of them without, seemingly, needing to infer anything new. Sitting on the couch two minutes before the beery desire that suddenly animates her, Cynthia already believed, it seems – in the same inactive, stored-in-the-back-of-the-mind way that you believed, five minutes ago, that Obama was U.S. President in 2010 – that Lucky Lager is in the fridge, that there are probably at least three beers in the refrigerator door, that there’s some cheap bottled beer in the usual place, and so on. If so, and if we set aside for now (see Section 5) the question of tacit belief, then Cynthia must have a billion beer-in-the-fridge representations stored in her mind. Specificity requires that it be the case that exactly one of those representations was retrieved the moment before she stood up, or exactly two, or exactly 37, or exactly 814,406. Either exactly one of those representations, or exactly two, or exactly 37, or exactly 814,406, then interacted with exactly one of her desires, or exactly two of her desires, or exactly 37, or exactly 814,406. But which one or ones did the causal work?
Let’s call this the Problem of Causal Specification. If your reaction to the Problem of Causal Specification is to think, yes, what an interesting problem, if only we had the right kind of brain-o-scope, we could discover that it was exactly the representation there are 3 or 4 Lucky Lagers somewhere in the refrigerator door, then you’re just the kind of mad dog representational realist I’m arguing against.
I think most of us will recognize the problem as a pseudo-problem. This is not a plausible architecture of the mind. There are many reasonable characterizations of Cynthia’s beer-in-the-fridge belief, varying in specificity, some more apt than others. Her decision is no more caused by a single, precisely correct subset of those billion possible representations than World War I had a single, possibly conjunctive cause expressible by a single determinately true sentence. If someone attempts to explain Cynthia’s behavior by saying that she believes there is beer in the fridge, it would be absurd to fire up your brain-o-scope, then correct them by saying, “Wrong! She’s going to the fridge because she believes there is Lucky Lager in the refrigerator door.” It would be equally absurd to say that it would require wild, one-in-a-billion luck to properly explain Cynthia’s behavior absent the existence of such a brain-o-scope.
A certain variety of representationalist might seek to escape the Problem of Causal Specification by positing a single extremely complex representation that encompasses all of Cynthia’s beer-in-the-fridge beliefs. A first step might be to posit a map-like representation of the fridge, including the location of the beer within it and the location of the fridge in the kitchen. This map-like representation might then be made fuzzy or probabilistic to incorporate uncertainty about, say, the exact location of the beer and the exact number of bottles. Labels will then need to be added: “Lucky Lager” would be an obvious choice, but that is at best the merest start, given that Cynthia might not remember the brand and will represent the type of beer in many different ways, including some that are disjunctive, approximate, and uncertain. If maps can conflict and if maps and object representations can be combined in multiple ways, further complications ensue. Boldly anticipating the resolution of all these complexities, the representationalist might then hypothesize that this single, complicated representation is the representation that was activated. All the sentences on our list would then be imperfect simplifications – though workable enough for practical purposes. One could perhaps similarly imagine the full, complex causal explanation of World War I, detailed beyond any single historian’s possible imagining.
This move threatens to explode Presence, the idea that when someone believes P there is a representation with the content P present somewhere in the mind. There would be a complex representation stored, yes, from which P might be derivable. But many things might be derivable from a complex representation, not all of which we normally will want to say are believed in virtue of possessing that representation. If a map-like representation contains a triangle, then it's derivable from the representation that the sum of the interior angles is 180 degrees; but someone ignorant of geometry would presumably not have that belief simply in virtue of having that representation. Worse, if the representation is complex enough to contain a hidden contradiction, then presumably (by standard laws of logic) literally every proposition that anyone could ever believe is derivable from it.
The move to a single, massively complex representation also creates an architectural challenge. It’s easy to imagine a kinematics in which a simple proposition such as there is beer in the fridge is activated in working memory or a central workspace. But it’s not clear how a massively complex representation could be similarly activated. If the representation has many complex parameters, it’s hard to see how it could fit within the narrow constraints of working memory as traditionally conceived. No human could attend to or process every aspect of a massively complex representation in drawing inferences or making practical decisions. More plausibly, some aspects of it must be the target of attention or processing. But now we’ve lost all of the advantages we hoped to gain by moving to a single, complex representation. Assessing which aspects are targeted throws us back upon the Problem of Causal Specification.
Cynthia believes not only that there’s beer in the fridge but also that there’s ketchup in the fridge and that the fridge is near the kitchen table and that her roommate loves ketchup and that the kitchen table was purchased at Ikea and that the nearest Ikea is thirty miles west. This generates a trilemma. Either (a.) Cynthia has entirely distinct representations for her beer-in-the-fridge belief, her ketchup-in-the-fridge belief, her fridge-near-the-table belief, and so on, in which case even if we can pack everything about beer in the fridge into a single complex representation we still face the problem of billions of representations with closely related contents and an implausible commitment to the activation of some precise subset of them when Cynthia gets up to go to the kitchen. Or (b.) Cynthia has overlapping beer-in-the-fridge, ketchup-in-the-fridge, etc. representations, which raises the same set of problems, further complicated by commitment to a speculative architecture of representational overlap. Or (c.) all of these representations are somehow all aspects of one mega-representation, presumably of the entire world, which does all the work – a representation which of course would always be active during any reasoning of any sort, demolishing any talk about retrieving different stored representations and combining them together in theoretical inference.
Dispositionalism elegantly avoids all these problems! Of course there is some low-level mechanism or set of mechanisms, perhaps representational or partly representational, that explains Cynthia’s behavior. But the dispositionalist need not commit to Presence, Discreteness, Kinematics, or Specificity. There need be no determinate, specific answer exactly what representational content, if any, is activated, and the structures at work need have no clean or simple relation to the beliefs we ascribe to Cynthia. Dispositionalism is silent about structure. What matters is only the pattern of dispositions enabled by the underlying structure, whatever that underlying structure is.
Instead of the storage and retrieval metaphor that representationalists tend to favor, the dispositionalist can appeal to figural or shaping metaphors. Cynthia’s dispositional profile has a certain shape: the shape characteristic of a beer-in-the-fridge believer – but also, at the same time, the shape characteristic of a Lucky-Lager-in-the-refrigerator-door believer. There need be no single determinately correct way to specify the shape of a complex figure. A complex shape can be characterized in any of a variety of ways, at different levels of precision, highlighting different features, in ways that are more or less apt given the describer’s purposes and interests. It is this attitude we should take toward characterizing Cynthia’s complex dispositional profile. Attributing a belief is more like sketching the outline of a complex figure – perhaps a figure only imperfectly seen or known – than it is like enumerating the contents of a box.
Posted by Eric Schwitzgebel at 10:28 AM 9 comments
Labels: belief
Thursday, January 12, 2023
Further Methodological Troubles for the Moralometer
[This post draws on ideas developed in collaboration with psychologist Jessie Sun.]
If we want to study morality scientifically, we should want to measure it. Imagine trying to study temperature without a thermometer or weight without scales. Of course indirect measures are possible: We can't put a black hole on a scale, but we can measure how it bends the light that passes nearby and thereby infer its mass.
Last month, I raised a challenge for the possibility of developing a "moralometer" (a device that accurately measures a person's overall morality). The challenge was this: Any moralometer would need to draw on one or more of four methods: self-report, informant report, behavioral measures, or physiological measures. Each of these methods has serious shortcomings as a basis for measuring a person's overall moral character.
This month, I raise a different (but partly overlapping) set of challenges, concerning how well we can specify the target we're aiming to measure.
Problems with Flexible Measures
Let's call a measure of overall morality flexible if it invites a respondent to apply their own conception of morality, in a flexible way. The respondent might be the target themselves (in self-report measures of morality) or they might be a peer, colleague, acquaintance, or family member of the target (in informant-report measures of morality). The most flexible measures apply "thin" moral concepts in Bernard Williams' sense -- prompts like "Overall, I am a morally good person" [responding on an agree/disagree scale] or "[the target person] behaves ethically".
While flexible measures avoid excessive rigidity and importing researchers' limited and possibly flawed understandings of morality into the rating procedure, the downsides are obvious if we consider how people with noxious worldviews might rate themselves and others. The notorious Nazi Adolf Eichmann, for example, appeared to have thought highly of his own moral character. Alexander "the Great" was admired for millennia, including as a moral exemplar of personal bravery and spreader of civilization, despite his main contribution being conquest through aggressive warfare, including the mass slaughter and enslavement of at least one civilian population.
I see four complications:
Relativism and Particularism. Metaethical moral relativists hold that different moral standards apply to different people or in different cultures. While I would reject extreme relativist views according to which genocide, for example, doesn't warrant universal condemnation, a moderate version of relativism has merit. Cultures might reasonably differ, for example, on the age of sexual consent, and cultures, subcultures, and social groups might reasonably differ in standards of generosity in sharing resources with neighbors and kin. If so, then flexible moralometers, employed by raters who use locally appropriate standards, will have an advantage over inflexible moralometers which might inappropriately import researchers' different standards. However, even flexible moralometers will fail in the face of relativism if they are employed by raters who employ the wrong moral standards.
According to moral particularism, morality isn't about applying consistent rules or following any specifiable code of behavior. Rather, what's morally good or bad, right or wrong, frequently depends on particular features of specific situations which cannot be fully codified in advance. While this isn't the same as relativism, it presents a similar methodological challenge: The farther the researcher or rater stands from the particular situation of the target, the more likely they are to apply inappropriate standards, since they are likely to be ignorant of relevant details. It seems reasonable to accept at least moderate particularism: The moral quality of telling a lie, stealing $20, or stopping to help a stranger, might often depend on fine details difficult to know from outside the situation.
If the most extreme forms of moral relativism or particularism (or moral skepticism) are true, then no moralometer could possibly work, since there won't be stable truths about people's morality, or the truths will be so complicated or situation-dependent as to defy any practical attempt at measurement. Moderate relativism and particularism, if correct, provide reason to favor flexible standards as judged by self-ratings or the ratings of highly knowledgeable peers sensitive to relevant local details; but even then raters might not make all the relevant adjustments.
Incommensurability. Goods are incommensurable if there is no fact of the matter about how they should be weighed against each other. Twenty dollar bills and ten dollar bills are commensurable: Two of the latter are worth exactly one of the former. But it's not clear how to weigh, for example, health against money or family versus career. In ethics, if Steven tells a lie in the morning and performs a kindness in the afternoon, how exactly ought these to be weighed against each other? If Tara is stingy but fair, is her overall moral character better, worse, or the same as that of Nicholle, who is generous but plays favorites? Combining different features of morality into a single overall score invites commensurability problems. Plausibly, there's no single determinately best weighting of different factors.
Again, I favor a moderate view. Probably in many cases there is no single best weighting. However, approximate judgments remain possible. Even if health and money can't be precisely weighed against each other, extreme cases permit straightforward decisions. Most of us would gladly accept a scratch on a finger for the sake of a million dollars and would gladly pay $10 to avoid stage IV cancer. Similarly, Stalin was morally worse than Martin Luther King, even if Stalin had some virtues and King some vices. Severe sexual harassment of an employee is worse than fibbing to your spouse to get out of washing the dishes.
Moderate incommensurability limits the precision of any possible moralometer. Vices and virtues, and rights and wrongs of different types, will be amenable only to rough comparison, not precise determination in a single common coin.
Moral error. If we let raters reach independent judgments about what is morally good or bad, right or wrong, they might simply get it wrong. As mentioned above, Eichmann appears to have thought well of himself, and the evidence suggests that he also regarded other Nazi leaders as morally excellent. Raters will disagree about the importance of purity norms (such as norms against sexual promiscuity), the badness of abortion, and the moral importance, or not, of being vegetarian. Bracketing relativism, at least some of these raters must be factually mistaken about morality, on one side or another, adding substantial error to their ratings.
The error issue is enormously magnified if ordinary people's moral judgments are systematically mistaken. For example, if the philosophically discoverable moral truth is that the potential impact of your choices on future generations morally far outweighs the impact you have on the people around you (see my critiques of "longtermism" here and here), then the person who is an insufferable jerk to everyone around them but donates $5000 to an effective charity might in fact be far morally better than a personally kind and helpful person who donates nothing to charity -- but informants' ratings might very well suggest the reverse. Similar remarks would apply to any moral theory that is sharply at odds with commonsense moral intuition.
Evaluative bias. People are, of course, typically biased in their own favor. Most people (not all!) are reluctant to think of themselves as morally below average, as unkind, unfair, or callous, even if they in fact are. Social desirability bias is the well-known tendency of survey respondents to answer questions in a manner that presents them in a good light. Ratings of friends, family, and peers will also tend to be positively biased: People tend to view their friends and peers positively, and even when they don't, they might be reluctant to "tell on" them to researchers. If the size of evaluative bias were consistent, it could be corrected for, but presumably it can vary considerably from case to case, introducing further noise.
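The correction point can be made concrete with a toy simulation (a minimal sketch with invented numbers, assuming self-ratings on a 1-7 scale): a constant inflation can simply be subtracted off, but inflation that varies from rater to rater leaves residual noise even after subtracting the average bias.

```python
import random

random.seed(0)

def self_report(true_morality, bias):
    """Self-rated morality on a 1-7 scale: true value plus evaluative bias, clipped to the scale."""
    return max(1.0, min(7.0, true_morality + bias))

true_scores = [random.uniform(2.0, 6.0) for _ in range(1000)]

# Case 1: every rater inflates by exactly the same amount. Subtracting that
# constant recovers the true score.
constant_bias = 1.0
corrected = [self_report(t, constant_bias) - constant_bias for t in true_scores]

# Case 2: inflation varies from person to person (here, uniformly between 0 and
# 2 points). Subtracting the average bias (1.0) still leaves residual noise.
variable = [self_report(t, random.uniform(0.0, 2.0)) - 1.0 for t in true_scores]

def rmse(estimates, truths):
    return (sum((e - t) ** 2 for e, t in zip(estimates, truths)) / len(truths)) ** 0.5

print(round(rmse(corrected, true_scores), 2))  # 0.0: a constant bias can be corrected for exactly
print(round(rmse(variable, true_scores), 2))   # well above 0: variable bias adds noise that can't be subtracted away
```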
Problems with Inflexible Measures
Given all these problems with flexible measures of morality, it might seem best to build our hypothetical moralometer instead around inflexible measures. Assuming physiological measures are unavailable, the most straightforward way to do this would be to employ researcher-chosen behavioral measures. We could try to measure someone's honesty by seeing whether they will cheat on a puzzle to earn more money in a laboratory setting. We could examine publicly available criminal records. We could see whether they are willing to donate a surprise bonus payment to a charity.
Unfortunately, inflexible measures don't fully escape the troubles that dog flexible measures, and they bring new troubles of their own.
Relativism and particularism. Inflexible measures probably aggravate the problems with relativism and particularism discussed above. With self-report and informant report, there's at least an opportunity for the self or the informant to take into account local standards and particulars of the situation. In contrast, inflexible measures will ordinarily be applied equally to all without adjustment for context. Suppose the measure is something like "gives a surprise bonus of $10 to charity". This might be a morally very different decision for a wealthy participant than for a needy participant. It might be a morally very different decision for a participant who would save that $10 to donate it to a different and maybe better charity than for a participant who would simply pocket the $10. But unless those other factors are also measured -- as they normally would not be -- they cannot be taken into account.
Incommensurability. Inflexible measures also won't avoid incommensurability problems. Suppose our moralometer includes one measure of honesty, one measure of generosity, and one measure of fairness. The default approach might be for a summary measure simply to average these three, but that might not accurately reflect morality: Maybe a small act of dishonesty in an experimental setting is far less morally important than a small act of unfairness in that same experimental setting. For example, getting an extra $1 from a researcher by lying in a task that transparently appears to demand a lie (and might even be best construed as a game in which telling untruths is just part of the task, in fact pleasing the researcher) might be approximately morally neutral, while being unfair to a fellow participant in that same study might substantially hurt the other participant's feelings.
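To make the aggregation worry concrete, here is a toy calculation (the component scores and weights are invented for illustration, reusing the Tara and Nicholle example from above): the two targets swap rank depending on how honesty, generosity, and fairness are weighted, and nothing in the measures themselves settles which weighting is correct.

```python
# Toy illustration: component scores on a 0-1 scale for two hypothetical
# targets. All numbers are invented for the example.
scores = {
    "Tara":     {"honesty": 0.9, "generosity": 0.3, "fairness": 0.9},  # stingy but fair
    "Nicholle": {"honesty": 0.6, "generosity": 0.9, "fairness": 0.5},  # generous, plays favorites
}

def overall(components, weights):
    """Weighted average of the component scores."""
    return sum(components[k] * w for k, w in weights.items()) / sum(weights.values())

weightings = {
    "equal":            {"honesty": 1, "generosity": 1, "fairness": 1},
    "fairness-heavy":   {"honesty": 1, "generosity": 1, "fairness": 4},
    "generosity-heavy": {"honesty": 1, "generosity": 4, "fairness": 1},
}

for label, weights in weightings.items():
    ranking = sorted(scores, key=lambda name: overall(scores[name], weights), reverse=True)
    print(label, ranking)

# Equal and fairness-heavy weightings put Tara ahead; a generosity-heavy
# weighting puts Nicholle ahead. Without a fact about the right weights,
# there is no fact about who scores "better overall."
```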
Sampling and ecological validity. As mentioned in my previous post on moralometers, fixed behavioral measures are also likely to have severe methodological problems concerning sampling and ecological validity. Any realistic behavioral measure is likely to capture only a small and perhaps unrepresentative part of anyone's behavior, and if it's conducted in a laboratory or experimental setting, behavior in that setting might not correlate well with behavior with real stakes in the real world. How much can we really infer about a person's overall moral character from the fact that they give their monetary bonus to charity or lie about a die roll in the lab?
Moral authority. By preferring a fixed measure, the experimenter or the designer of the moralometer takes upon themselves a certain kind of moral authority -- the authority to judge what is right and wrong, moral or immoral, in others' behavior. In some cases, as in the Eichmann case, this authority seems clearly preferable to deferring to the judgment of the target and their friends. But in other cases, it is a source of error -- since of course the experimenter or designer might be wrong about what is in fact morally good or bad.
Being wrong while taking up, at least implicitly, this mantle of moral authority has at least two features that potentially make it worse than the type of error that arises by wrongly deferring to mistaken raters. First, the error is guaranteed to be systematic. The same wrong standards will be applied to every case, rather than scattered in different (and perhaps partly canceling) directions as might be the case with rater error. And second, it risks a lack of respect: Others might reasonably object to being classified as "moral" or "immoral" by an alien set of standards devised by researchers and with which they disagree.
In Sum
The methodological problems with any potential moralometer are extremely daunting. As discussed in December, all moralometers must rely on some combination of self-report, informant report, behavioral measure, or physiological measure, and each of these methods has serious problems. Furthermore, as discussed today, a batch of issues around relativism, particularism, disagreement, incommensurability, error, and moral authority dog both flexible measures of morality (which rely on raters' judgments about what's good and bad) and inflexible measures (which rely on researchers' or designers' judgments).
Coming up... should we even want a moralometer if we could have one? I discussed the desirability or undesirability of a perfect moralometer in December, but I want to think more carefully about the moral consequences of the more realistic case of an imperfect moralometer.
Posted by Eric Schwitzgebel at 9:05 AM 10 comments
Labels: moral psychology, psychological methods
Friday, January 06, 2023
The Design Policy of the Excluded Middle
According to the Design Policy of the Excluded Middle, as Mara Garza and I have articulated it (here and here), we ought to avoid creating AI systems "about which it is unclear whether they deserve full human-grade rights because it is unclear whether they are conscious or to what degree" -- or, more simply, we shouldn't make AI systems whose moral status is legitimately in doubt. (This is related to Joanna Bryson's suggestion that we should only create robots whose lack of moral considerability is obvious, but unlike Bryson's policy it imagines leapfrogging past the no-rights case to the full rights case.)
To my delight, Mara's and my suggestion is getting some uptake, most notably today in the New York Times.
The fundamental problem is this. Suppose we create AI systems that some people reasonably suspect are genuinely conscious and genuinely deserve human-like rights, while others reasonably suspect that they aren't genuinely conscious and don't genuinely deserve human-like rights. This forces us into a catastrophic dilemma: Either give them full human-like rights or don't give them full human-like rights.
If we do the first -- if we give them full human or human-like rights -- then we had better give them paths to citizenship, healthcare, the vote, the right to reproduce, the right to rescue in an emergency, etc. All of this entails substantial risks to human beings: For example, we might be committed to save six robots in a fire in preference to five humans. The AI systems might support policies that entail worse outcomes for human beings. It would be more difficult to implement policies designed to reduce existential risk due to runaway AI intelligence. And so on. This might be perfectly fine, if the AI systems really are conscious and really are our moral equals. But by stipulation, it's reasonable to think that they are empty machines with no consciousness and no real moral status, and so there's a real risk that we would be risking and sacrificing all this for nothing.
If we do the second -- if we deny them full human or human-like rights -- then we risk creating a race of slaves we can kill at will, or at best a group of second-class citizens. By stipulation, it might be the case that this would constitute unjust and terrible treatment of entities as deserving of rights and moral consideration as human beings are.
Therefore, we ought to avoid putting ourselves in the situation where we face this dilemma. We should avoid creating AI systems of dubious moral status.
A few notes:
"Human-like" rights: Of course "human rights" would be a misnomer if AI systems become our moral equals. Also, exactly what healthcare, reproduction, etc., look like for AI systems, and the best way to respect their interests, might look very different in practice from the human case. There would be a lot of tricky details to work out!
What about animal-grade AI that deserves animal-grade rights? Maybe! Although it seems a natural intermediate step, we might end up skipping it, if any conscious AI systems end up also being capable of human-like language, rational planning, self-knowledge, ethical reflection, etc. Another issue is this: The moral status of non-human animals is already in dispute, so creating AI systems of disputably animal-like moral status perhaps doesn't add quite the same dimension of risk and uncertainty to the world that creating a system of dubious human-grade moral status would.
Would this policy slow technological progress? Yes, probably. Unsurprisingly, being ethical has its costs. And one can dispute whether those costs are worth paying or are overridden by other ethical considerations.
Posted by Eric Schwitzgebel at 10:00 AM 8 comments
Labels: AI/robot/Martian rights
Sunday, January 01, 2023
Writings of 2022
The biggest project this year was my new book The Weirdness of the World, submitted in November and due in print in early fall 2023. This book pulls together ideas I've been publishing over the past ten years concerning the failure of common sense, philosophy, and empirical science to explain consciousness and the fundamental structure of the cosmos, and the corresponding bizarreness and dubiety of all general theories about such matters.
-----------------------------------
Books
Submitted:
- The Weirdness of the World (under contract with Princeton University Press).
- As co-editor with Jonathan Jong, The Nature of Belief, Oxford University Press.
- As co-editor with Helen De Cruz and Rich Horton, a yet-to-be-titled anthology with MIT Press containing great classics of philosophical SF.
Full-length non-fiction essays
Appearing in print:
- "Measuring eudaimonic and non-eudaimonic goods in the pursuit of the good life: The Riverside Eudaimonia Scale and the Rich & Sexy Well-Being Scale" (with Seth Margolis, Daniel J. Ozer, Ramona Martinez, and Sonja Lyubomirsky). International Journal of Well-Being, 12, 1-20.
- "Engaging charitable giving: The motivational force of narrative versus philosophical argument" (with Christopher McVey and Joshua May), Philosophical Psychology. DOI: 10.1080/09515089.2022.2088340.
- "Uncle Iroh, from fool to sage -- or sage all along?" (with David Schwitzgebel), in H. De Cruz and J. De Smedt, eds., Avatar: The Last Airbender and Philosophy.
- "The ethics of artificial life: The moral status of life as it could be" (second author, with Olaf Witkowski), Artificial Life Conference Proceedings 2022.
- winner: Best ALIFE 2022 paper
- "How far can we get in creating a digital replica of a philosopher?" (third author, with Anna Strasser and Matt Crosby”, Robophilosophy Proceedings 2022.
- "What is unique about kindness? Exploring the proximal experience of prosocial acts relative to other positive behaviors” (with Annie Regan, Seth Margolis, Daniel J. Ozer, and Sonja Lyubomirsky), Affective Science.
- "Creating a large language model of a philosopher" (with David Schwitzgebel and Anna Strasser).
- "The full rights dilemma for A.I. systems of debatable personhood" [available on request].
- "Inflate and explode". (I'm trying to decide whether to trunk this one or continue revising it.)
- "Does the heart revolt at evil? The case of racial atrocities", Journal of Confucian Philosophy and Culture, 38, 5-13.
- "Dehumanizing the cognitively disabled: Commentary on Smith’s Making Monsters" (first author with Amelie Green), Analysis Reviews (forthcoming).
- "The nature of belief from a philosophical perspective, with theoretical and methodological implications for psychology and cognitive science", Frontiers in Psychology, https://doi.org/10.3389/fpsyg.2022.947664.
- "If we’re living in a simulation, the gods might be crazy", Slate (Jun. 25, 2022).
- "Is life a simulation? If so, be very afraid", Los Angeles Times (Apr. 22, 2022).
- "The COVID jerk", The Atlantic (Feb. 3, 2022).
- "The best books on science fiction and philosophy", Five Books (Jan. 22, 2022).
- "Let Everyone Sparkle: Psychotechnology in the Year 2067", Aeon Ideas (Apr 12).
- "Ethical efficiencies" (Jan 13).
- "The parable of the overconfident student -- and why philosophy still favors the socially privileged" (Mar 14).
- "Everything is valuable" (May 6).
- "Results: The computerized philosopher: Can you distinguish Daniel Dennett from a computer?" (Jul 25).
- "The washout argument against longtermism" (Aug 23).
- "The coming robot rights catastrophe" (Oct 27).
- "Credence-first skepticism" (Nov 11).
- "Fish dance", reprinted in R. M. Ambrose, Vital (2022). Inlandia Institute.
- "Diversity and philosophy journals: Practices for improving diversity in philosophy journal publishing" (with Sherri Conklin and Nicole Hassoun), reprinted as an appendix to "The past 110 years: Historical data on the underrepresentation of women in philosophy journals,” by Nicole Hassoun, Sherri Conklin, Michael Nekrasov, and Jevin West, Ethics, 132, 727-729.
Posted by Eric Schwitzgebel at 12:01 AM 1 comments
Labels: announcements