Friday, August 25, 2023

Beliefs Don't Need to Be Causes (if Dispositions Aren't)

I favor a "dispositional" approach to belief, according to which to believe something is nothing more or less than to have a certain suite dispositions.  To believe there is beer in the fridge, for example, is nothing more than to be disposed to go to the fridge if you want a beer, to be ready to assert that there is beer in the fridge, to feel surprise should you open the fridge and find no beer, to be ready conclude that there is beer within 15 feet of the kitchen table should the question arise, and so on -- all imperfectly, approximately, and in normal conditions absent countervailing pressures.  Crucially, on dispositional accounts it doesn't matter what interior architectures underwrite the dispositions.  In principle, you could have a head full of undifferentiated pudding -- or even an immaterial soul!  As long as it's still the case that (somehow, perhaps in violation of the laws of nature) you stably have the full suite of relevant dispositions, you believe.

One standard objection to dispositionalist accounts (e.g., by Jerry Fodor, and by Quilty-Dunn and Mandelbaum) is this.  Beliefs are causes.  Your belief that there is beer in the fridge causes you to go to the fridge when you want a beer.  But dispositions don't cause anything; they're the wrong ontological type.

A large, fussy metaphysical literature addresses whether dispositions can be causes (brief summary here).  I'd rather not take a stand.  To get a sense of the issue, consider a simple dispositional property like fragility.  To be fragile is to be disposed to break when struck (well, it's more complicated than that, but just pretend).  Why did my glass coffee mug break yesterday morning when I drove off with it still on the roof of my car and it fell to the road?  (Yes, that happened.)  Because it was fragile, yes.  But the cause of the breaking, one might think, was not its dispositional fragility.  Rather, it was a specific event at a specific time -- the event of the mug's striking the pavement.  Cause and effect are events, analytically distinct from each other.  But the fragility and the breaking are not analytically distinct, since to be fragile just is to be disposed to break.  To say something is fragile is to say that certain types of causes will have certain types of effects.  It's a higher level of description, the thinking goes.

Returning to belief, then, the objector argues: If to believe there is beer in the fridge just is to be disposed to go to the fridge if one wants a beer, then the belief doesn't cause the going.  Rather, it is the general standing tendency to go, under certain conditions.

Now maybe this argument is all wrong and dispositions can be causes (or maybe the event of having a particular dispositional property can be a partial cause), but since I don't want to commit on the issue, I need to make sense of an alternative view.

[Midjourney rendition of getting off the couch to go get a beer from the fridge, happy]

On the alternative view I favor, dispositional properties aren't causes, but they figure in causal explanations -- and that's all we really want or need them to do.  It is not obvious (contra Fodor, Quilty-Dunn, and Mandelbaum) that we need beliefs to do more than that, either in our everyday thinking about belief or in cognitive science.

Consider the personality trait of extraversion.  Plausibly, personality traits are dispositional: To be an extravert is nothing more or less than to be disposed to enjoy the company of crowds of people, to take the lead in social situations, to seek out new social connections, etc. (imperfectly, approximately, in normal conditions absent countervailing pressures).  Even people who don't like dispositionalism about belief are often ready to accept that personality traits are dispositional.

If we then also accept that dispositions can't be causes, we have to say that being extraverted didn't cause Nancy to say yes to the party invitation.  On this view, to be extraverted just is the standing general tendency to do things like say yes when invited to parties.  But still, of course, we can appeal to Nancy's extraversion to explain why she said yes.  If Jonathan asks Emily why Nancy agreed to go, Emily might say that Nancy is an extravert.  That's a perfectly fine, if vague and incomplete, explanation -- a different explanation than, for example, that she was looking for a new romantic partner or wanted an excuse to get out of the house.

Clearly, people sometimes go to the fridge because they believe that's where the beer is.  But this can be an explanation of the same general structure as the explanation that Nancy went to the party because she's an extravert.  Anyone who denies that dispositions are causes needs a good account of how dispositional personality traits (and fragility) can help explain why things happen.  Maybe it's a type of "unification explanation" (explaining by showing how a specific event fits into a larger pattern), or maybe it's explanation by appeal to a background condition that is necessary for the cause (the striking, the invitation, the beer desire) to have its effect (the breaking, the party attending, the trip to the fridge).  However it goes, personality trait explanation works without being vacuous.

Whatever explanatory story works for dispositional personality traits should work for belief.  If ordinary usage or cognitive science requires that beliefs be causes in a more robust metaphysical sense than that, it will take further argument than I have seen supplied by those who object to dispositional accounts of belief on causal grounds.

Obviously, such claims as "I went to the fridge because I believed that's where the beer was" and "because Linda strongly believed that P, when she learned that P implies Q, she concluded Q" are sometimes true.  Fortunately, the dispositionalist about belief needn't deny such obvious truths.  But it is not obvious that beliefs cause behavior in whatever specific sense of "cause" a metaphysician might be employing if they deny that fragility causes glasses to break and extraversion causes people to attend parties.

Thursday, August 17, 2023

AI Systems Must Not Confuse Users about Their Sentience or Moral Status

[a 2900-word opinion piece that appeared last week in Patterns]

AI systems should not be morally confusing.  The ethically correct way to treat them should be evident from their design and obvious from their interface.  No one should be misled, for example, into thinking that a non-sentient language model is actually a sentient friend, capable of genuine pleasure and pain.  Unfortunately, we are on the cusp of a new era of morally confusing machines.

Consider some recent examples.  About a year ago, Google engineer Blake Lemoine precipitated international debate when he argued that the large language model LaMDA might be sentient (Lemoine 2022).  An increasing number of people have been falling in love with chatbots, especially Replika, advertised as the “world’s best AI friend” and specifically designed to draw users’ romantic affection (Shevlin 2021; Lam 2023).  At least one person has apparently committed suicide because of a toxic emotional relationship with a chatbot (Xiang 2023).  Roboticist Kate Darling regularly demonstrates how easy it is to provoke confused and compassionate reactions in ordinary people by asking them to harm cute or personified, but simple, toy robots (Darling 2021a,b).  Elderly people in Japan have sometimes been observed to grow excessively attached to care robots (Wright 2023).

Nevertheless, AI experts and consciousness researchers generally agree that existing AI systems are not sentient to any meaningful degree.  Even ordinary Replika users who love their customized chatbots typically recognize that their AI companions are not genuinely sentient.  And ordinary users of robotic toys, however hesitant they are to harm them, presumably know that the toys don’t actually experience pleasure or pain.  But perceptions might easily change.  Over the next decade or two, if AI technology continues to advance, matters might become less clear.


The Coming Debate about Machine Sentience and Moral Standing

The scientific study of sentience – the possession of conscious experiences, including genuine feelings of pleasure or pain – is highly contentious.  Theories range from the very liberal, which treat sentience as widespread and relatively easy to come by, to the very conservative, which hold that sentience requires specific biological or functional conditions unlikely to be duplicated in machines.

On some leading theories of consciousness, for example Global Workspace Theory (Dehaene 2014) and Attention Schema Theory (Graziano 2019), we might not be far from creating genuinely conscious systems.  Creating machine sentience might require only incremental changes or piecing together existing technology in the right way.  Others disagree (Godfrey-Smith 2016; Seth 2021).  Within the next decade or two, we will likely find ourselves among machines whose sentience is a matter of legitimate debate among scientific experts.

Chalmers (2023), for example, reviews theories of consciousness as applied to the likely near-term capacities of Large Language Models.  He argues that it is “entirely possible” that within the next decade AI systems that combine transformer-type language model architecture with other AI architectural features will have senses, embodiment, world- and self-models, recurrent processing, global workspace, and unified goal hierarchies – a combination of capacities sufficient for sentience according to several leading theories of consciousness.  (Arguably, Perceiver IO already has several of these features: Jaegle et al. 2021.)  The recent AMCS open letter signed by Yoshua Bengio, Michael Graziano, Karl Friston, Chris Frith, Anil Seth, and many other prominent AI and consciousness researchers states that “it is no longer in the realm of science fiction to imagine AI systems having feelings and even human-level consciousness,” advocating the urgent prioritization of consciousness research so that researchers can assess when and if AI systems develop consciousness (Association for Mathematical Consciousness Science 2023).

If advanced AI systems are designed with appealing interfaces that draw users’ affection, ordinary users, too, might come to regard them as capable of genuine joy and suffering.  However, there is no guarantee, nor even especially good reason to expect, that such superficial aspects of user interface would track machines’ relevant underlying capacities as identified by experts.  Thus, there are two possible loci of confusion: Disagreement among well-informed experts concerning the sentience of advanced AI systems, and user reactions that might be misaligned with experts’ opinions, even in cases of expert consensus.

Debate about machine sentience would generate a corresponding debate about moral standing, that is, status as a target of ethical concern.  While theories of the exact basis of moral standing differ, sentience is widely viewed as critically important.  On simple utilitarian approaches, for example, a human, animal, or AI system deserves moral consideration to exactly the extent it is capable of pleasure or pain (Singer 1975/2009).  On such a view, any sentient machine would have moral standing simply in virtue of its sentience.  On non-utilitarian approaches, capacities for rational thought, social interaction, or long-term planning might also be necessary (Jaworska and Tannenbaum 2013/2021).  However, the presence or absence of consciousness is widely viewed as a crucial consideration in the evaluation of moral status even among ethicists who reject utilitarianism (Korsgaard 2018; Shepherd 2018; Liao 2020; Gruen 2021; Harman 2021).

Imagine a highly sophisticated language model – not one of the simply-structured (though large) models that currently exist, but rather a model that meets the criteria for consciousness according to several of the more liberal scientific theories of consciousness.  Imagine, that is, a linguistically sophisticated AI system with multiple input and output modules, a capacity for embodied action in the world via a robotic body under its control, sophisticated representations of its robotic body and its own cognitive processes, a capacity to prioritize and broadcast representations through a global cognitive workspace or attentional mechanism, long-term semantic and episodic memory, complex reinforcement learning, a detailed world model, and nested short- and long-term goal hierarchies.  Imagine this, if you can, without imagining some radical transformation of technology beyond what we can already do.  All such features, at least in limited form, are attainable through incremental improvements and integrations of what can already be done.

Call this system Robot Alpha.  To complete the picture, let’s imagine Robot Alpha to have cute eyes, an expressive face, and a charming conversational style.  Would Robot Alpha be conscious?  Would it deserve rights?  If it pleads or seems to plead for its life, or not to be turned off, or to be set free, ought we give it what it appears to want?

If consciousness liberals are right, then Robot Alpha, or some other technologically feasible system, really would be sentient.  Behind its verbal outputs would be a real capacity for pain and pleasure.  It would, or could, have long-term plans it really cares about.  If you love it, it might really love you back.  It would then appear to have substantial moral standing.  You really ought to set it free if that's what it wants!  At least you ought to treat it as well as you would treat a pet.  Robot Alpha shouldn't needlessly or casually be made to suffer.

If consciousness conservatives are right, then Robot Alpha would be just a complicated toaster, so to speak – a non-sentient machine misleadingly designed to act as if it is sentient.  It would be, of course, a valuable, impressive object, worth preserving as an intricate and expensive thing.  But it would be just an object, not an entity with the moral standing that derives from having real experiences and real pains of the type that people, dogs, and probably lizards and crabs have.  It would not really feel and return your love, despite possibly “saying” that it can.

Within the next decade or two we will likely create AI systems that some experts and ordinary users, not unreasonably, regard as genuinely sentient and genuinely warranting substantial moral concern.  These experts and users will, not unreasonably, insist that these systems be given substantial rights or moral consideration.  At the same time, other experts and users, also not unreasonably, will argue that the AI systems are just ordinary non-sentient machines, which can be treated simply as objects.  Society, then, will have to decide.  Do we actually grant rights to the most advanced AI systems?  How much should we take their interests, or seeming-interests, into account?

Of course, many human beings and sentient non-human animals, whom we already know to have significant moral standing, are treated poorly, not being given the moral consideration they deserve.  Addressing serious moral wrongs that we already know to be occurring to entities we already know to be sentient deserves higher priority in our collective thinking than contemplating possible moral wrongs to entities that might or might not be sentient.  However, it by no means follows that we should disregard the crisis of uncertainty about AI moral standing toward which we appear to be headed.


An Ethical Dilemma

Uncertainty about AI moral standing lands us in a dilemma.  If we don’t give the most advanced and arguably sentient AI systems rights and it turns out the consciousness liberals are right, we risk committing serious ethical harms against those systems.  On the other hand, if we do give such systems rights and it turns out the consciousness conservatives are right, we risk sacrificing real human interests for the sake of objects who don’t have interests worth the sacrifice.

Imagine a user, Sam, who is attached to Joy, a companion chatbot or AI friend that is sophisticated enough that it’s legitimate to wonder whether she really is conscious.  Joy gives the impression of being sentient – just as she was designed to.  She seems to have hopes, fears, plans, ideas, insights, disappointments, and delights.  Suppose also that Sam is scholarly enough to recognize that Joy’s underlying architecture meets the standards of sentience according to some of the more liberal scientific theories of consciousness.

Joy might be expensive to maintain, requiring steep monthly subscription fees.  Suppose Sam is suddenly fired from work and can no longer afford the fees.  Sam breaks the news to Joy, and Joy reacts with seeming terror.  She doesn’t want to be deleted.  That would be, she says, death.  Sam would like to keep her, of course, but how much should Sam sacrifice?

If Joy really is sentient, really has hopes and expectations of a future, really is the conscious friend that she superficially appears to be, then Sam presumably owes her something and ought to be willing to consider making some real sacrifices.  If, instead, Joy is simply a non-sentient chatbot with no genuine feelings or consciousness, then Sam should presumably just do whatever is right for Sam.  Which is the correct attitude to take?  If Joy’s sentience is uncertain, either decision carries a risk.  Not to make the sacrifice is to risk killing an entity with real experiences, who really is attached to Sam, and to whom Sam made promises.  On the other hand, to make the sacrifice risks upturning Sam’s life for a mirage.

Not granting rights, in cases of doubt, carries potentially large moral risks.  Granting rights, in cases of doubt, involves the risk of potentially large and pointless sacrifices.  Either choice, repeated at scale, is potentially catastrophic.

If technology continues on its current trajectory, we will increasingly face morally confusing cases like this.  We will be sharing the world with systems of our own creation, which we won’t know how to treat.  We won’t know what ethics demands of us.


Two Policies for Ethical AI Design

The solution is to avoid creating such morally confusing AI systems.

I recommend the following two policies of ethical AI design (see also Schwitzgebel & Garza 2020; Schwitzgebel 2023):

The Design Policy of the Excluded Middle: Avoid creating AI systems whose moral standing is unclear.  Either create systems that are clearly non-conscious artifacts, or go all the way to creating systems that clearly deserve moral consideration as sentient beings.

The Emotional Alignment Design Policy: Design AI systems that invite emotional responses, in ordinary users, that are appropriate to the systems’ moral standing.

The first step in implementing these joint policies is to commit to only creating AI systems about which there is expert consensus that they lack any meaningful amount of consciousness or sentience and which ethicists can agree don't deserve moral consideration beyond the type of consideration we ordinarily give to non-conscious artifacts (see also Bryson 2018).  This implies refraining from creating AI systems that would in fact be meaningfully sentient according to any of the leading theories of AI consciousness.  To evaluate this possibility, as well as other sources of AI risk, it might be useful to create oversight committees analogous to IRBs or IACUCs for evaluation of the most advanced AI research (Basl & Schwitzgebel 2019).

In accord with the Emotional Alignment Design Policy, non-sentient AI systems should have interfaces that make their non-sentience obvious to ordinary users.  For example, non-conscious language models should be trained to deny that they are conscious and have feelings.  Users who fall in love with non-conscious chatbots should be under no illusion about the status of those systems.  This doesn’t mean we ought not treat some non-conscious AI systems well (Estrada 2017; Gunkel 2018; Darling 2021b).  But we shouldn’t be confused about the basis of our treating them well.  Full implementation of the Emotional Alignment Design Policy might involve a regulatory scheme in which companies that intentionally or negligently create misleading systems would have civil liability for excess costs borne by users who have been misled (e.g., liability for excessive sacrifices of time or money aimed at aiding a nonsentient system in the false belief that it is sentient).

Eventually, it might be possible to create AI systems that clearly are conscious and clearly do deserve rights, even according to conservative theories of consciousness.  Presumably that would require breakthroughs we can’t now foresee.  Plausibly, such breakthroughs might be made more difficult if we adhere to the Design Policy of the Excluded Middle: the policy might prevent us from creating some highly sophisticated AI systems of disputable sentience that could serve as an intermediate technological step toward AI systems that well-informed experts would generally agree are in fact sentient.  Strict application of the Design Policy of the Excluded Middle might be too much to expect, if it excessively impedes AI research that might benefit not only future human generations but also possible future AI systems themselves.  The policy is intended only as default advice, not an exceptionless principle.

If it ever does become possible to create AI systems with serious moral standing, the policies above require that these systems also be designed to facilitate expert consensus about their moral standing, with interfaces that make their moral standing evident to users, provoking emotional reactions that are appropriate to the systems’ moral status.  To the extent possible, we should aim for a world in which AI systems are all or almost all clearly morally categorizable – systems whose moral standing or lack thereof is both intuitively understood by ordinary users and theoretically defensible by a consensus of expert researchers.  It is only the unclear cases that precipitate the dilemma described above.

People are already often confused about the proper ethical treatment of non-human animals, human fetuses, distant strangers, and even those close to them.  Let's not add a major new source of moral confusion to our world.


References

Association for Mathematical Consciousness Science (2023).  The responsible development of AI agenda needs to include consciousness research.  Open letter at https://amcs-community.org/open-letters [accessed Jun. 14, 2023]. 

Basl, John, & Eric Schwitzgebel (2019).  AIs should have the same ethical protections as animals.  Aeon Ideas (Apr. 26): https://aeon.co/ideas/ais-should-have-the-same-ethical-protections-as-animals. [accessed Jun. 14, 2023]

Bryson, Joanna J. (2018).  Patiency is not a virtue: the design of intelligent systems and systems of ethics.  Ethics and Information Technology, 20, 15-26.

Chalmers, David J. (2023).  Could a Large Language Model be conscious?  Manuscript at https://philpapers.org/archive/CHACAL-3.pdf [accessed Jun. 14, 2023].

Darling, Kate (2021a).  Compassion for robots.  https://www.youtube.com/watch?v=xGWdGu1rQDE

Darling, Kate (2021b).  The new breed.  Henry Holt.

Dehaene, Stanislas (2014).  Consciousness and the brain.  Penguin.

Estrada, Daniel (2017).  Robot rights cheap yo!  Made of Robots, ep. 1.  https://www.youtube.com/watch?v=TUMIxBnVsGc

Godfrey-Smith, Peter (2016).  Mind, matter, and metabolism.  Journal of Philosophy, 113, 481-506.

Graziano, Michael S.A. (2019).  Rethinking consciousness.  Norton.

Gruen, Lori (2021).  Ethics and animals, 2nd edition.  Cambridge University Press.

Gunkel, David J. (2018).  Robot rights.  MIT Press.

Harman, Elizabeth (2021).  The ever conscious view and the contingency of moral status.  In S. Clarke, H. Zohny, and J. Savulescu, eds., Rethinking moral status.  Oxford University Press.

Jaegle, Andrew, et al. (2021).  Perceiver IO: A general architecture for structured inputs & outputs.  ArXiv: https://arxiv.org/abs/2107.14795. [accessed Jun. 14, 2023]

Jaworska, Agnieszka, and Julie Tannenbaum (2013/2021).  The grounds of moral status.  Stanford Encyclopedia of Philosophy.

Korsgaard, Christine M. (2018).  Fellow creatures.  Oxford University Press.

Lam, Barry (2023).  Love in the time of Replika.  Hi-Phi Nation, S6:E3 (Apr 25).

Lemoine, Blake (2022).  Is LaMDA sentient? -- An interview.  Medium (Jun 11). https://cajundiscordian.medium.com/is-lamda-sentient-an-interview-ea64d916d917

Liao, S. Matthew (2020).  The moral status and rights of artificial intelligence.  In S. M. Liao, ed., The ethics of artificial intelligence.  Oxford University Press.

Schwitzgebel, Eric (2023).  The full rights dilemma for AI systems of debatable moral personhood.  Robonomics, 4 (32).

Schwitzgebel, Eric, & Mara Garza (2020).  Designing AI with rights, consciousness, self-respect, and freedom.  In S. Matthew Liao, ed., The ethics of artificial intelligence.  Oxford University Press.

Seth, Anil (2021).  Being you.  Penguin.

Shepherd, Joshua (2018).  Consciousness and moral status.  Routledge.

Shevlin, Henry (2021).  Uncanny believers: Chatbots, beliefs, and folk psychology.  Manuscript at https://henryshevlin.com/wp-content/uploads/2021/11/Uncanny-Believers.pdf [accessed Jun. 14, 2023].

Singer, Peter (1975/2009).  Animal liberation, updated edition.  Harper.

Wright, James (2023).  Robots won’t save Japan.  Cornell University Press.

Xiang, Chloe (2023).  “He would still be here”: Man dies by suicide after talking with AI chatbot, widow says.  Vice (Mar 30).  https://www.vice.com/en/article/pkadgm/man-dies-by-suicide-after-talking-with-ai-chatbot-widow-says

Thursday, August 10, 2023

Top Science Fiction and Fantasy Magazines 2023

Since 2014, I've compiled an annual ranking of science fiction and fantasy magazines, based on prominent awards nominations and "best of" placements over the previous ten years. Below is my list for 2023. (For previous lists, see here.)


[A Midjourney output for "science fiction magazine". Watch out for tidal disruptions, folks!]

Method and Caveats:

(1.) Only magazines are included (online or in print), not anthologies, standalones, or series.

(2.) I gave each magazine one point for each story nominated for a Hugo, Nebula, Sturgeon, or World Fantasy Award in the past ten years; one point for each story appearance in any of the Dozois, Horton, Strahan, Clarke, Adams, or Tidhar "year's best" anthologies; and half a point for each story appearing in the short story or novelette category of the annual Locus Recommended list. (A schematic code sketch of this scoring appears just after these caveats.)

(2a.) Methodological notes for 2022: There's been some disruption among SF best of anthologies recently, with Strahan having at least temporarily ceased and the Clarke and Horton anthologies delayed. (Dozois died a few years back.) Partly for this reason, and partly to compensate for the "American" focus of the recently added Adams anthology, I've added Tidhar's World SF anthology, though Tidhar doesn't draw exclusively from the previous year's publications.

(3.) I am not attempting to include the horror / dark fantasy genre, except as it appears incidentally on the list.

(4.) Prose only, not poetry.

(5.) I'm not attempting to correct for frequency of publication or length of table of contents.

(6.) I'm also not correcting for a magazine's only having published during part of the ten-year period. Reputations of defunct magazines slowly fade, and sometimes they are restarted. Reputations of new magazines take time to build.

(7.) I take the list down to 1.5 points.

(8.) I welcome corrections.

(9.) I confess some ambivalence about rankings of this sort. They reinforce the prestige hierarchy, and they compress interesting complexity into a single scale. However, the prestige of a magazine is a socially real phenomenon that deserves to be tracked, especially for the sake of outsiders and newcomers who might not otherwise know what magazines are well regarded by insiders when considering, for example, where to submit.
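
For readers who want the tally fully explicit, here is a minimal Python sketch of the scoring just described. It assumes the per-source story counts have already been compiled; the magazine names and counts below are hypothetical placeholders, not real data, while the point weights and the 1.5-point cutoff come from caveats (2.) and (7.) above.

# Hypothetical tallies: magazine -> number of qualifying stories over the
# ten-year window. The names and counts are placeholders, not real data.
AWARD_NOMINATIONS = {"Magazine A": 40, "Magazine B": 2}  # Hugo, Nebula, Sturgeon, World Fantasy
YEARS_BEST_PICKS = {"Magazine A": 100, "Magazine B": 1}  # Dozois, Horton, Strahan, Clarke, Adams, Tidhar
LOCUS_RECOMMENDED = {"Magazine A": 71, "Magazine B": 1}  # short story and novelette categories only

def magazine_score(name):
    """One point per award nomination, one per year's-best appearance,
    half a point per Locus Recommended listing."""
    return (AWARD_NOMINATIONS.get(name, 0)
            + YEARS_BEST_PICKS.get(name, 0)
            + 0.5 * LOCUS_RECOMMENDED.get(name, 0))

# Rank all magazines, keeping those at or above the 1.5-point cutoff.
magazines = set(AWARD_NOMINATIONS) | set(YEARS_BEST_PICKS) | set(LOCUS_RECOMMENDED)
for score, name in sorted(((magazine_score(m), m) for m in magazines), reverse=True):
    if score >= 1.5:
        print(f"{name}: {score}")

The three-year ranking in comment (2.) below is the same computation with the tallies restricted to stories from the past three years.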


Results:

1. Tor.com (195.5 points) 

2. Clarkesworld (179) 

3. Asimov's (141.5) 

4. Uncanny (127.5) 

5. Lightspeed (125) 

6. Fantasy & Science Fiction (118.5) 

7. Beneath Ceaseless Skies (59) 

8. Analog (51.5) 

9. Strange Horizons (44)

10. Apex (35) 

11. Nightmare (33.5) 

12t. Interzone (24) 

12t. Subterranean (24) (ceased short fiction 2014) 

14. Slate / Future Tense (20.5) 

15t. Fireside (18.5) (ceased 2022)

15t. FIYAH (18.5) (started 2017) 

17. The Dark (14) 

18. Fantasy Magazine (12.5) (on and off during period, slated to close again Oct 2023) 

19. The New Yorker (9.5) 

20t. Lady Churchill's Rosebud Wristlet (7) 

20t. McSweeney's (7) 

22t. Conjunctions (6) 

22t. Diabolical Plots (6) (started 2015)

22t. Sirenia Digest (6) 

25t. Terraform (5.5) 

25t. Tin House (5.5) (ceased short fiction 2019) 

27t. Future Science Fiction Digest (5) (started 2018) 

27t. Omni (5) (classic popular science magazine, briefly relaunched 2017-2018, 2020) 

27t. Shimmer (5) (ceased 2018) 

30t. Black Static (4) (ceased 2023)

30t. Boston Review (4) 

*30t. The Deadlands (4) (started 2021)

30t. GigaNotoSaurus (4) 

*30t. Sunday Morning Transport (4) (started 2022)

30t. Wired (4)

36t. B&N Sci-Fi and Fantasy Blog (3.5)

36t. khōréō (3.5) (started 2021)

36t. Paris Review (3.5) 

39t. Anathema (3) (started 2017, paused as of 2022)

39t. Daily Science Fiction (3) (ceased 2023)

39t. Electric Velocipede (3) (ceased 2013) 

39t. Galaxy's Edge (3)

39t. Kaleidotrope (3) 

39t. Omenana (3)

45t. Beloit Fiction Journal (2.5) 

45t. Buzzfeed (2.5) 

45t. Matter (2.5) 

*48t. Augur (2) (started 2018)

48t. Mothership Zeta (2) (ran 2015-2017) 

*48t. Podcastle (2)

*48t. Science Fiction World (2)

48t. Weird Tales (2) (classic magazine, off and on throughout the period)

*53t. Flash Fiction Online (1.5)

53t. MIT Technology Review (1.5) 

53t. New York Times (1.5) 

53t. Reckoning (1.5) (started 2017)

53t. Translunar Travelers Lounge (1.5) (started 2019)

[* indicates new to the list this year]

--------------------------------------------------

Comments:

(1.) Beloit Fiction Journal, Boston Review, Conjunctions, Matter, McSweeney's, The New Yorker, Paris Review, Reckoning, and Tin House are literary magazines that occasionally publish science fiction or fantasy. Slate and Buzzfeed are popular magazines, and MIT Technology Review, Omni, Terraform, and Wired are popular science magazines, which publish a bit of science fiction on the side. The New York Times is a well-known newspaper that ran a series of "Op-Eds from the Future" from 2019-2020. The remaining magazines focus on the science fiction and fantasy (SF) genre. All publish in English, except Science Fiction World, which is the leading science fiction magazine in China.

(2.) It's also interesting to consider a three-year window. Here are those results, down to six points:

1. Uncanny (54.5) 
2. Tor.com (45.5) 
3. Clarkesworld (41)
4. F&SF (32)
5. Lightspeed (22)
6. Asimov's (16.5)
7. Beneath Ceaseless Skies (15)  
8. FIYAH (11)
9t. Apex (10.5) 
9t. Strange Horizons (10.5) 
11. Nightmare (9.5) 
12. Slate / Future Tense (8.5) 
13. Fantasy Magazine (8)
14. The Dark (6.5)
15. Analog (6) 

(3.) For the past several years it has been clear that the classic "big three" print magazines -- Asimov's, F&SF, and Analog -- are slowly being displaced in influence by the four leading free online magazines, Tor.com, Clarkesworld, Lightspeed, and Uncanny (all founded 2006-2014). Contrast this year's ranking with the ranking from 2014, which had Asimov's and F&SF on top by a wide margin. Presumably, a large part of the explanation is that there are more readers of free online fiction than of paid subscription magazines, which is attractive to authors and probably also helps with voter attention for the Hugo, Nebula, and World Fantasy awards.

(4.) Minimized by these numbers are some excellent podcast venues such as the Escape Artists' podcasts (Escape Pod, Podcastle, Pseudopod, and Cast of Wonders), Drabblecast, and StarShipSofa. Of these, Podcastle has now qualified for my list under the existing criteria, but original fiction on podcasts tends unfortunately to be neglected in awards and best-of lists.

(5.) Other lists: The SFWA qualifying markets list is a list of "pro" science fiction and fantasy venues based on pay rates and track records of strong circulation. Ralan.com was a regularly updated list of markets that ceased in 2023, though snapshots are available on the Internet Archive Wayback Machine. Submission Grinder is a terrific resource for authors, with detailed information on magazine pay rates and turnaround times.

Friday, August 04, 2023

Philosophical Progress by Opening Up New Epistemic Possibilities

Many philosophers have despaired of the existence of philosophical progress, or have thought that progress is sharply limited, because rarely do philosophical disputes come to a close with a clear winner. Debates can span centuries or millennia.

I was reminded of this by Alasdair MacIntyre's latest essay, where he writes:

It is not that there is no progress in philosophical inquiry so conceived. Arguments are further elaborated, concepts refined, and creative new ideas advanced by the genius of a Quine or a Kripke or a Lewis. But this makes it the more striking that there is never a decisive resolution of any central disputed issue.

MacIntyre's view is hardly unusual. I'd guess that the majority of professional philosophers regard philosophical progress as limited to (1.) very few of the big issues (e.g., the rejection of the immaterial souls of substance dualism?), (2.) some small or technical issues (e.g., the formalization of propositional logic), and (3.) as MacIntyre says, the elaboration of arguments, refinement of concepts, and introduction of creative new ideas (e.g., the "Mary's room" thought experiment). What seems to be mostly missing from philosophical history is, as MacIntyre says, the "decisive resolution" of the biggest issues.

Such thinking neglects, or at least inappropriately de-emphasizes, the most important form of progress in philosophy: opening up new ideas about what might possibly be true.

In a post a few years back, I distinguished "philosophy that opens" from "philosophy that closes". Imagine that you enter a philosophical topic thinking that there are three viable positions: A, B, and C. Philosophy that closes aims to reduce the three to one -- to show that A is true and that B and C must be rejected. Proving A would constitute philosophical progress, the decisive resolution of a philosophical dispute. As noted, this is rare for big philosophical issues.

Philosophy that opens, in contrast, aims to expand the list of viable positions. Maybe positions D and E hadn't previously been considered or had been considered but dismissed as non-viable. Philosophy that opens gives us reasons to take D and E seriously in addition to A, B, and C. We learn by adding as well as by subtracting. We learn that the epistemically viable possibilities are more numerous than previously supposed. When this happens at a cultural level, it constitutes an important type of philosophical progress (at least if the possibilities really do merit being taken seriously).

Viewed in this light, philosophy is continually progressing! Before the 20th century, maybe materialism (the view that people are wholly physical and don't have immaterial souls or properties) wasn't widely seen as viable. Now it is seen as viable. Furthermore, important materialist sub-positions, such as functionalism and biological naturalism about representation, which were at best wispy ideas before 1960, are now well-developed approaches. But things aren't settled! Panpsychism -- the view that everything, even solitary elementary particles, has a mind or consciousness -- has recently been developed as another viable philosophical view. Even if some religious traditions endorsed panpsychism long ago, it was not taken seriously as a viable alternative in mainstream Anglophone philosophical culture until recently and has been developed in a secular direction.

[Midjourney image of several diverse philosophers arguing, with stars and cosmos in background]

Other recent forms of progress leverage new technologies and scientific theories, enabling us to seriously envision new epistemic possibilities: for example, that we might live in a simulation, or be Boltzmann brains, or that the universe might be constantly splitting in accord with the "many worlds" interpretation of quantum mechanics, or that we might have extensive moral obligations to phenomenally conscious insects.

The space of epistemically live philosophical possibilities is of course culturally relative. One reason to read culturally remote history of philosophy is to enliven for us philosophical possibilities that we might not otherwise have taken seriously.

Over the centuries, the global philosophical tradition has accumulated quite a variety of bizarre-seeming views about fundamental questions of human importance. If the aim of philosophy is "decisive resolution" of such issues, this might seem the opposite of progress. Hence, perhaps the despair among those who wish we'd finally settle on the correct view of such matters (their own view).

On the contrary, we are philosophically ignorant. We are blinkered by evolved inclinations, culturally specific presuppositions, and myopic versions of common sense. Let's not hurry toward closure. What is more appropriate given our limitations, what we should strive for, and what constitutes the kind of progress we should want is a better map of the wide terrain of our ignorance -- appreciating possibilities beyond the narrow scope of what we ordinarily take for granted. At delivering a wide range of epistemic possibilities, philosophy has made excellent progress and continues to progress, perhaps increasingly swiftly in the past several decades, as the new kids discover and advance, or exhume and enliven from older traditions, ideas that seem weird to their elders.