Monday, October 31, 2022

Public Philosophy at UC Riverside -- an Invitation to PhD Applicants

Not everyone knows it yet, but starting next fall, Barry Lam, host of the awesome philosophy podcast Hi-Phi Nation, will be joining the Philosophy Department at UC Riverside.  This will -- in my (not at all biased, I swear!) opinion -- make UCR one of the best places in the world for public philosophy, and I hope that students interested in pursuing a PhD in philosophy with a strong public philosophy component will consider applying here.

The following faculty all have significant profiles in public philosophy:

Myisha Cherry, who also has a terrific podcast: The UnMute Podcast.  Her book The Case for Rage had great public reach, and her forthcoming The Failures of Forgiveness is also likely to draw a wide public audience.  Cherry has also written op-eds for leading venues including The Atlantic, The New Statesman, and the Los Angeles Times.

Carl Cranor, who regularly writes for broad audiences on legal issues concerning toxic substances, for example in Tragic Failures: How and Why We Are Harmed by Toxic Chemicals.

John Martin Fischer, who has written for broad audiences especially on the issue of death and near-death experiences, for example in his book Near-Death Experiences: Understanding Visions of the Afterlife and related public pieces in The New York Times and elsewhere.

Barry Lam, whose Hi-Phi Nation podcast is one of the best and most-listened-to philosophy podcasts.  Barry is invested in building up public philosophy instruction here.  He plans to teach a regular graduate seminar on writing and producing philosophy in various forms of mass media, including trade books, magazines, newspapers, podcasts, and online video, as well as an interdisciplinary podcast production course aimed at students and faculty in the humanities.  Barry has connections with regional and national media organizations to help students who want to pitch op-eds, articles, podcast segments, etc., and he welcomes graduate student research and reporting for Hi-Phi Nation.

Eric Schwitzgebel, who has, I'm sure you'll agree, a pretty good philosophy blog!  I've also published op-eds in the Los Angeles Times, Salon, Slate, The Atlantic, and elsewhere, as well as philosophically themed science fiction stories in Nature, Clarkesworld, F&SF, and elsewhere.  A Theory of Jerks and Other Philosophical Misadventures contains several dozen of my favorite blog posts and other popular pieces.  Students interested in philosophical science fiction might also note that UCR has an interdisciplinary graduate program in Speculative Fiction and Cultures of Science which awards students a "Designated Emphasis" in the area alongside their primary degree.

I'm hoping that UCR will soon develop a reputation as a great place to go for training in public philosophy.

[image of Barry Lam by Melissa Surprise Photography]

Thursday, October 27, 2022

The Coming Robot Rights Catastrophe

Time to be a doomsayer!  If technology continues on its current trajectory, we will soon be facing a moral catastrophe.  We will create AI systems that some people reasonably regard as deserving human or humanlike rights.  But there won't be consensus on this view.  Other people will reasonably regard these systems as wholly undeserving of human or humanlike rights.  Given the uncertainties of both moral theory and theories about AI consciousness, it is virtually impossible that our policies and free choices will accurately track the real moral status of the AI systems we create.  We will either seriously overattribute or seriously underattribute rights to AI systems -- quite possibly both, in different ways.  Either error will have grave moral consequences, likely at a large scale.  The magnitude of the catastrophe could potentially rival that of a world war or major genocide.

Does this sound too extreme?  I hope it is too extreme.  Let me walk you through my thinking.

[Dall-E output of a robot holding up a misspelled robot rights protest sign]

(1.) Legitimate calls for robot rights will soon be upon us.  We've already seen the beginnings of this.  There already is a robot rights movement.  There already is a society for the ethical treatment of reinforcement learners.  These are currently small movements, but they leapt to mainstream attention in June when Google engineer Blake Lemoine made international headlines by claiming that the large language model LaMDA was sentient, a conclusion he reached after extended philosophical conversations with it.  Although few researchers agree that LaMDA is actually sentient in any significant way, exactly what it lacks is unclear.  On some mainstream theories of consciousness, we are already on the verge of creating genuinely conscious/sentient systems.  If we do create such systems, and if they have verbal or verbal-seeming outputs that appear to be pleas for rights -- requests not to be abused, not to be deleted, not to be made to do certain things -- then reasonable people who favor liberal theories of AI consciousness will understandably be inclined to respect those pleas.

One might think it plausible that the first rights-deserving AI systems would warrant rights (or more broadly, moral consideration if "rights" language sounds too strong) similar to the rights normally accorded to vertebrates.  For example, one might think that they would deserve not to be needlessly killed/deleted or made to suffer, but that human interests can easily outweigh their interests, as (it's common to think) the interests of non-human vertebrates can be outweighed by human interests in meat production and scientific testing.

However, I suspect that if such minimal rights or moral consideration were granted to AI systems, that would not be a consensus solution.  After all, some of these AI systems will presumably produce verbal outputs that friends of AI sentience will regard as signals that they have sophisticated long-term goals, an understanding of their position in the world, and an ability to enter into discussions with human beings as peers.  Under those conditions, many will presumably think that merely animal-level rights are insufficient, and that something closer to equal rights is required -- human rights or humanlike rights.

It's perhaps worth noting that a substantial portion of younger respondents to a recent survey find it plausible that future robots will deserve rights.

(Note: I see "robot rights" as a subcase of "AI rights", given that robots are a subclass of AI systems, specifically, those with bodies.  On some theories, embodiment is a necessary condition for consciousness.  Also, an AI system with an appealing body might tend to draw higher levels of concern than a non-embodied AI system.  So it's not implausible that the first AI systems to draw wide consideration for serious rights will be robotic systems.  "Robot rights" is also a more familiar term than "AI rights".  Hence my use of the phrase.)

(2.) These legitimate calls for robot rights will be legitimately contestable.  For three reasons, it's extremely unlikely that there will be a consensus on what rights, if any, to give to AI systems.

(2a.) There will continue to be widespread disagreement about under what conditions, if ever, an AI system could be conscious.  On some mainstream theories of consciousness, consciousness requires complex biological processes.  Other theories require a specifically human-like cognitive architecture that even complex vertebrates don't fully share with us.  Currently, the range of respectable theories in consciousness science runs all the way from panpsychist or nearly-panpsychist theories in which everything or nearly everything is conscious, to extremely restrictive theories on which consciousness requires highly advanced capacities that are restricted to humans and our nearest relatives, with virtually no near-term prospect of AI consciousness.  The chance of near-term consensus on a general theory of consciousness is slim.  Disagreement about consciousness will drive reasonable disagreement about rights: Many of those who reasonably think AI systems lack consciousness will reasonably also think that they don't deserve much if any moral consideration.  In contrast, many of those who reasonably think that AI systems have consciousness as rich and sophisticated as our own will reasonably think that they deserve human or humanlike moral consideration.

(2b.) There will continue to be widespread disagreement in moral theory on the bases of moral status.  Utilitarians, for example, hold that moral considerability depends on the capacity for pleasure and suffering.  Deontologists typically hold that moral considerability depends on something like the capacity for (presumably conscious) sophisticated practical reasoning or the ability to enter into meaningful social relationships with others.  In human beings, these capacities tend to co-occur, so that practically speaking for most ordinary human cases it doesn't matter too much which theory is correct.  (Normally, deontologists add some dongles to their theories so as to grant full moral status to human infants and cognitively disabled people.)  But in AI cases, capacities that normally travel together in human beings could radically separate.  To consider the extremes: We might create AI systems capable of immense pleasure or suffering but which have no sophisticated cognitive capacities (giant orgasm machines, for example), and conversely we might create AI systems capable of very sophisticated conscious practical reasoning but which have no capacity for pleasure or suffering.  Even if we stipulate that all the epistemic problems concerning consciousness are solved, justifiable disagreement in moral theory alone is sufficient to generate radical disagreement about the moral status of different types of AI systems.

(2c.) There will be justifiable social and legal inertia.  Law and custom are likely to change more slowly than AI technology, lagging substantially behind.  Conservatism in law and custom is justifiable.  For Burkean reasons, it's reasonable to resist sudden or radical transformation of institutions that have long served us well.

(3.) Given wide disagreement over the moral status of AI systems, we will be forced into catastrophic choices between risking overattributing and risking underattributing rights.  We can model this simplistically by imagining four defensible attitudes.  Suppose that there are two types of AI systems that people could not unreasonably regard as deserving human or humanlike rights: Type A and Type B.  A+B+ advocates say both systems deserve rights.  A-B- advocates say neither deserves rights.  A+B- advocates say A systems do but B systems do not.  A-B+ advocates say A systems do not but B systems do.  If policy and behavior follow the A+B+ advocates, then we risk overattributing rights.  If policy and behavior follow the A-B- advocates, we risk underattributing rights.  If policy and behavior follow either of the intermediate groups, we run both risks simultaneously.
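
For concreteness, here is a minimal sketch of this four-position model in code -- just the paragraph above restated as a toy Python enumeration; nothing in the argument depends on it:

```python
# Toy model: two types of AI systems (A and B) whose moral status is debatable.
# Each policy grants or withholds human(like) rights for each type.
policies = {
    "A+B+": {"A": True,  "B": True},
    "A+B-": {"A": True,  "B": False},
    "A-B+": {"A": False, "B": True},
    "A-B-": {"A": False, "B": False},
}

for name, grants in policies.items():
    over = any(grants.values())       # rights granted somewhere they might be undeserved
    under = not all(grants.values())  # rights withheld somewhere they might be deserved
    print(f"{name}: overattribution risk = {over}, underattribution risk = {under}")
```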

(3a.) If we underattribute rights, it's a moral catastrophe.  This is obvious enough.  If some AI systems deserve human or humanlike rights and don't receive them, then when we delete those systems we commit the moral equivalent of murder, or something close.  When we treat those systems badly, we commit the moral equivalent of slavery and torture, or something close.  Why say "something close"?  Two reasons: First, if the systems are different enough in their constitution and interests, the categories of murder, slavery, and torture might not precisely apply.  Second, given the epistemic situation, we can justifiably say we don't know that the systems deserve moral consideration, so when we delete one we don't know we're killing an entity with human-like moral status.  This is a partial excuse, perhaps, but not a full excuse.  Normally, it's grossly immoral to expose people to a substantial (say 10%) risk of death without an absolutely compelling reason.  If we delete an AI system that we justifiably think is probably not conscious, we take on a similar risk.

(3b.) If we overattribute rights, it's also a catastrophe, though less obviously so.  Given 3a above, it might seem that the morally best solution is to err on the side of overattributing rights.  Follow the guidelines of the A+B+ group!  This is my own inclination, given moral uncertainty.  And yet there is potentially enormous cost to this approach.  If we attribute human or humanlike rights to AI systems, then we are committed to sacrificing real human interests on behalf of those systems when real human interests conflict with the seeming interests of the AI systems.  If there's an emergency in which a rescuer faces a choice of saving five humans or six robots, the rescuer should save the robots and let the humans die.  If there's been an overattribution, that's a tragedy: Five human lives have been lost for the sake of machines that lack real moral value.  Similarly, we might have to give robots the vote -- and they might well vote for their interests over human interests, again perhaps at enormous cost, e.g., in times of war or famine.  Relatedly, I agree with Bostrom and others that we should take seriously the (small?) risk that superintelligent AI runs amok and destroys humanity.  It becomes much harder to manage this risk if we cannot delete, modify, box, and command intelligent AI systems at will.

(4.) It's almost impossible that we will get this decision exactly right.  Given the wide range of possible AI systems, the wide range of legitimate divergence in opinion about consciousness, and the wide range of legitimate divergence in opinion about the grounds of moral status, it would require miraculous luck if we didn't substantially miss our target, either substantially overattributing rights, substantially underattributing rights, or both.

(5.) The obvious policy solution is to avoid creating AI systems with debatable moral status, but it's extremely unlikely that this policy would actually be implemented.  Mara Garza and I have called this the Design Policy of the Excluded Middle.  We should only create AI systems that we know in advance don't have serious intrinsic moral considerability, and which we can then delete and control at will; or we should go all the way and create systems that we know in advance are our moral peers, and then give them the full range of rights and freedoms that they deserve.  The troubles arise only for the middle, disputable cases.

The problem with this solution is as follows.  Given the wide range of disagreement about consciousness and the grounds of moral status, the "excluded middle" will be huge.  We will probably need to put a cap on AI research soon.  And how realistic is that?  People will understandably argue that AI research has such great benefits for humankind that we should not prevent it from continuing just on the off-chance that we might soon be creating conscious systems that some people might reasonably regard as having moral status.  Implementing the policy would require a global consensus to err on the side of extreme moral caution, favoring the policies of the most extreme justifiable A+B+ view.  And how likely is such a consensus?  Others might argue that even setting aside the human interests in the continuing advance of technology, there's a great global benefit in eventually being able to create genuinely conscious AI systems of human or humanlike moral status, for the sake of those future systems themselves.  Plausibly, the only realistic way to achieve that great global benefit would be to create a lot of systems of debatable status along the way: We can't plausibly leap across the excluded middle with no intervening steps.  Technological development works incrementally.

Thus I conclude: We're headed straight toward a serious ethical catastrophe concerning issues of robot or AI rights.

--------------------------------------------------

Related:

"A Defense of the Rights of Artificial Intelligences" (with Mara Garza), Midwest Studies in Philosophy (2015).

"Designing AI with Rights, Consciousness, Self-Respect, and Freedom" (with Mara Garza), in M.S. Liao, ed., The Ethics of Artificial Intelligence (2020).

"The Full Rights Dilemma for Future Robots" (Sep 21, 2021)

"More People Might Soon Think Robots Are Conscious and Deserve Rights" (Mar 5, 2021)

Thursday, October 20, 2022

How Often Are Philosophy Articles Actually Cited? Encouraging News

You hear terrible things.  You hear, for instance, that over 50% of philosophy articles are entirely uncited.  You hear that the average philosophy article is cited fewer than five times.  People will sometimes say things like the "median number of readers for a philosophy article is 1" or "there's not much difference between publishing a paper and throwing it away".  The last two comments both appear in a recent Twitter thread launched by Helen De Cruz.  It was reading this thread that inspired me to do the analyses I'll share today.

Generally my reaction to analyses and comments of this sort is to think that they considerably underestimate how often philosophy articles are actually cited.  The "big data" interdisciplinary analyses often use methods that are a terrible fit for philosophy, such as looking only at citations in the past two years.  (In philosophy, it often takes two years or more to write and publish an article.)  Also, I wonder what counts as an "article".  If we're including two-page book reviews, it's little wonder that they'd be little cited, and similarly if we're including publications in predatory or obscure journals.  What would be more interesting to know -- and what I think most people in this discussion really care about -- is this: How frequently are full-length research articles in "respectable" mainstream philosophy journals cited?

Before you look at my analyses, any guesses?

------------------------------------------------------------

Method: I selected six representative general philosophy journals for analysis: two "top-ranked" journals (Philosophical Review and Noûs), two mid-ranked journals (Canadian Journal of Philosophy and Pacific Philosophical Quarterly), and two unranked but reputable journals (Philosophia and Southern Journal of Philosophy).  I then downloaded the entire tables of contents of these journals from the year 2012 -- giving a ten-year citation window -- and excluded anything that wasn't an ordinary full-length research article (e.g., book reviews, editors' introductions, symposium proceedings).  From each journal, I randomly selected 15 articles and noted each article's total number of citations in Google Scholar.
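
For those who want to replicate this sort of tabulation, here's a minimal sketch of the computation in Python, with invented citation counts standing in for the real hand-collected Google Scholar numbers:

```python
import random
import statistics

random.seed(0)  # reproducible stand-in data

# Invented numbers, NOT the real data: 15 randomly sampled 2012 articles per
# journal, each with a total Google Scholar citation count.
tiers = {
    "top-ranked": ["Philosophical Review", "Noûs"],
    "mid-ranked": ["Canadian Journal of Philosophy", "Pacific Philosophical Quarterly"],
    "unranked":   ["Philosophia", "Southern Journal of Philosophy"],
}
sample = {journal: [random.randint(0, 120) for _ in range(15)]
          for journals in tiers.values() for journal in journals}

for tier, journals in tiers.items():
    counts = [c for journal in journals for c in sample[journal]]
    print(f"{tier}: mean = {statistics.mean(counts):.1f}, "
          f"median = {statistics.median(counts)}")

all_counts = [c for counts in sample.values() for c in counts]
print(f"uncited articles: {sum(c == 0 for c in all_counts)} of {len(all_counts)}")
```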

Although this is not a large sample, the results are, I think, striking.

Overall, the mean article was cited 31 times, and the median number of citations was 20.  For the elite journals the mean was 50 and the median was 36; for the mid-ranked journals the mean was 25 and the median was 8; for the unranked journals, the mean was 17 and the median was 12.  Only three of the 90 articles (3%) were cited zero times.

Here's a more specific breakdown, journal by journal.  Of course with only 15 articles per journal, there will be lots of noise in the numbers when considered at this fine a grain.


[histogram of total citations by article for articles published in 2012 from six representative general philosophy journals; click to enlarge and clarify]

You might wonder if the articles that were cited just a few times were really just cases of self-citation (an author citing their own earlier work, rather than citation by another author); but this was not the case.  Although there was naturally a bit of self-citation, most citations were by other authors even for the articles with just a few citations.

You might wonder whether the articles mostly gather citations in the first several years, with citation rates falling off substantially by year ten.  I didn't collect these data systematically, but a sampling of articles finds generally that about half of the citations are since 2018, with no substantial decline at the end.  Presumably, then, these articles will continue to gather citations post-2022.  I would estimate that these articles have collected only half of the citations that they will eventually collect, and perhaps they will collect many more if there is a long temporal tail into the future.  If we double our citation estimates to account for this, the mean article will receive 62 citations over its lifetime and the median article will receive 40 citations.

ETA 12:37 p.m.: Since Google Scholar doesn't tend to include books as sources of citation (though it does include books as targets of citation), we should probably bump these numbers up even more.  Ballpark guess: between monographs and articles in anthologies, about one-third of all philosophy citations appear in books.  If that's right, actual citation rates would be about 50% higher.
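
Putting the two adjustments together (doubling for citations yet to come, then correcting for Google Scholar's blind spot for books), the back-of-the-envelope arithmetic, taking the overall median as an example, looks like this:

```python
observed_median = 20            # median Google Scholar citations after ten years
lifetime = observed_median * 2  # assume half of eventual citations arrive by year ten
gs_coverage = 2 / 3             # assume GS captures ~2/3 of citations (missing most books)
lifetime_incl_books = lifetime / gs_coverage
print(lifetime_incl_books)      # 60.0 -- about 50% above the doubled figure
```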

I draw the following general conclusions:

(1.) Very few articles published in mainstream general philosophy journals remain entirely uncited.  My estimate of 3% might be off, due to the smallish sample size and the particular journals selected; but it's highly unlikely that non-citation rates of mainstream general philosophy articles are anything near 50%.

(2.) The majority of articles are cited often enough to be "in the conversation".  While an outsider to academia might not think that ten citations is much impact, I submit that a more appropriate perspective is this: If your article is cited at least ten times, then it is having an impact on other specialists in your subfield.  Your article isn't falling into the void.  It is part of the conversation, and other scholars are reacting to it.  A journal article needn't have a huge influence outside its subarea to be successful.  The beauty of academic journals is that they host technical pieces that often can only be appreciated by a few dozen specialists.  If academia is worthwhile, then continuing those specialists' conversations is worthwhile, and high citation rates for technical pieces should not be expected.  (I intend this ten-citation criterion as an approximately sufficient condition for impact, not a necessary condition.)

(3.) Articles in elite journals are cited only about three times as often as articles in unranked but reputable journals.  To me, this was the most surprising result, and part of me wonders whether we'd see different results if we looked at different years or different unranked but reputable journals.  Philosophical Review is much more prestigious in the eyes of the typical mainstream academic philosopher than is the typical unranked but reputable journal.  I find this result encouraging, especially given the tiny acceptance rates at the elite journals (Philosophical Review reports accepting about twelve articles per year out of 600 submitted).  Publishing in unranked journals is not shouting into the void.  It is not defeat.  The dynamic is less jackpot-or-nothing than I would have expected.  You don't have to get into a top-ten journal to have an impact.  Philosophers do regularly read and cite articles from the less prestigious journals.

(4.) Citation skew is much less extreme than it could be.  Another respect in which philosophy citation practices are not jackpot-or-nothing is revealed by the smallish differences between the means and medians.  Of course the means are higher: There's right skew in the data, a tendency for high-end outliers to pull up the mean.  But it's not like most articles are cited 0-5 times and a few are cited hundreds of times.  Authors can reasonably expect that a decent article in a decent journal will have at least a moderate impact on their subfield.

Thursday, October 13, 2022

The Rational Pressure Argument

guest post by Neil Van Leeuwen

Many people feel some rational pressure to get their religious beliefs to cohere with their more mundane factual beliefs. 

The Mormon Church’s position on the ancestry of American Indians furnishes a telling example. Its original view, stemming from the Book of Mormon, appears to have been that American Indians descended from Lamanites, who were supposed to be an offshoot of Israelites that migrated to the Americas when the Ten Tribes went into exile. Some Church texts even just refer to American Indians as Lamanites. 

Yet over time the position has shifted, as the Church started to recognize the overwhelming evidence of migrations from Asia across the Bering Strait as the source of American Indian populations. The Lamanites were thus later billed as “the principal ancestors of American Indians.” Then, in 2006, the Lamanites were put down as “among” the ancestors of American Indians.

That shift allowed the LDS Church to maintain both (a) its traditional narrative to the effect that American Indians descended from Lamanites and (b) its position that that narrative is not inconsistent with thoroughly documented facts about the Asiatic origins of American Indian populations (archeological facts, DNA evidence, etc.).

Even the current, watered-down Lamanite narrative, one must grant, has no outside evidence in its favor whatsoever. But that’s not the point. The point is that architects of LDS Church doctrine, on acquiring greater levels of knowledge about the actual provenance of American Indians (encoded in the factual beliefs in their heads on the matter), felt at least some rational pressure to bring their religious beliefs into coherence (or at least lack of obvious inconsistency) with their evidence-based factual beliefs.

The existence of such felt rational pressure was the topic of an exchange between Thomas Kelly and me at the Princeton Belief Workshop in June, which Eric described in a fascinating blog post last month.

A little background for those new to this. 

I have long argued that there exists a cognitive attitude I call religious credence, which is effectively a form of imagining that partly constitutes one’s group identity. Religious credence is typically called “belief” in everyday parlance. But its functional dynamics are far different from those of ordinary factual belief: it’s more under voluntary control, more compartmentalized, less widely used in supporting inference, and far less vulnerable to evidence. Religious credence is thus a distinct cognitive attitude from factual belief. And not only should philosophers and psychologists regard religious credence and factual belief as distinct; emerging psycholinguistic and experimental evidence suggests that lay people already do regard them differently. 

To apply my view to our running example, consider these two attitude reports. (Let’s imagine that Jim is a member of the LDS Church and also an historian.)

1) Jim thinks that there was a migration across the Bering Strait from Asia to North America.

2) Jim believes that American Indians descended from the Lamanite offshoot of the lost Ten Tribes of Israel. 

Since attitude and content are independent features of mental states, there is nothing necessary about religious credence being the attitude that goes with what are typically thought of as religious contents or about factual belief being the attitude that goes with what are typically thought of as factual contents. In fact, there are plenty of attitude-content crossovers, and it is a strength of my view that it can characterize such crossovers clearly. Still, content is heuristic of attitude type, so it is fair to attribute to me the view that the attitude reported in 1) is most likely factual belief and the one reported in 2) is most likely religious credence. 

Here’s the objection that Tom raised, as paraphrased in Eric’s blog. (Note that Tom didn’t bring up the Mormon Church example that I focus on here, but it’s still a useful illustration of what he was talking about.)

If [religious] credence and [factual] belief really are different types of attitude, why does it seem like there’s rational tension between them? Normally, when attitude types differ, there’s no pressure to align them: It’s perfectly rational to believe that P and desire or imagine that not-P. You can believe that it is raining and desire or imagine that it is not raining. But with belief and credence as defined by Van Leeuwen, that doesn’t seem to be so. 

In other words, the fact that Jim feels pressure to modify the attitude reported in 2) so that it at least does not contradict the attitude reported in 1) suggests that the attitudes reported in both are of the same type—contrary to what my distinction implies.

Let’s call this The Rational Pressure Argument. I’ve encountered objections like it in the past, but none of them have been put quite as well as this one from Tom via Eric. For what it’s worth, I regard it as the most interesting objection to my view that exists.

Yet it fails. 

To see how, let’s consider The Rational Pressure Argument laid out a bit more formally. This version of it will deploy our running example, but the assumption of one who makes the argument would be that the relevant bits generalize. (Let “CA” be a verb for the cognitive attitude whose status is in question: e.g., “Jim CAs that p” means that Jim has this attitude with p as its content.)

1. Jim CAs the religious stories and doctrines of his Church. [premise]

2. Jim has factual beliefs whose contents bear on how likely it is that those stories and doctrines are true. [premise]

3. Jim feels pressure to bring the cognitive attitudes described in 1 into rational coherence with the factual beliefs described in 2 (at least to the point of eliminating obvious contradictions). [premise]

4. People do not feel pressure to bring attitudes distinct from factual belief (e.g., imagining, desire) into rational coherence with factual belief. [premise]

5. The cognitive attitudes described in 1 are factual beliefs. [from 1-4]

To make this logically tight, various points would have to be filled in, but that could be done easily enough and needn’t be done for purposes of this blog. So let’s grant the validity of the argument. 
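
Still, for what it’s worth, here is one rough schematic of how the tightening might go (a rendering of my own that glosses over refinements about degrees of pressure and attitude tokens versus types). Let Pressure(X, Y) abbreviate the claim that people feel rational pressure to bring attitudes of type X into coherence with attitudes of type Y; premises 1 and 2 supply Jim’s relevant attitude tokens.

\[
\begin{array}{rl}
(4')\; & \forall A\,\big(A \neq \text{factual belief} \rightarrow \neg\,\mathrm{Pressure}(A,\ \text{factual belief})\big)\\
(3')\; & \mathrm{Pressure}(\mathrm{CA},\ \text{factual belief})\\
\hline
(5')\; & \mathrm{CA} = \text{factual belief}
\end{array}
\]

The step to (5') is just contraposition on (4') together with (3'): if CA were any attitude other than factual belief, there would be no felt pressure, contradicting (3').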

The question then is whether the generalized versions of the premises are true—in particular 3 and 4. 

In the exchange during the workshop, I cited ethnographic research that suggests that people like Jim—being WEIRD—are not that representative of people around the world in feeling rational pressure to remove inconsistency between religious credence and factual belief. In other words, I cast doubt on the generalizability of premise 3, a point I still stand by. But the argument would still be irksome to my position, even if Jim were representative of only a culturally limited set of people. So here I want to make a different point: premise 4 is not generally true.

The reason is that a number of cognitive attitudes that aren’t factual belief are nevertheless subject to rational pressure not to contradict one’s factual beliefs. It’s probably true that, as Eric points out, everyday imagining (daydreaming, fantasizing, etc.) isn’t subject to any such pressure (though it does default to being somewhat constrained). But we shouldn’t generalize from that fact about everyday imagining to all types of cognitive attitudes.

Take the attitude of guessing (whatever that amounts to). One can guess that there are 10,002 jellybeans in the jar, or 10,003 jellybeans in the jar, etc. Guessing is voluntary; one’s guesses don’t generally figure into the informational background that guides inference; and so on. Much distinguishes guessing from factual belief, which is why it is a different attitude. Nevertheless, one typically feels some rational pressure for one’s guesses not to contradict one’s factual beliefs: if I learn and hence come to factually believe that there are fewer than 8,000 jellybeans in the jar, this strongly inclines me to revise my guess to below that number. 

Similar points could be made about attitudes like hypothesizing or accepting in a context (see Bratman, 1992): though they are not the attitude of factual belief, they are often accompanied by some rational pressure not to flagrantly contradict factual belief.

So The Rational Pressure Argument can’t be correct; accepting it would commit us to the false view that guessing (etc.) amounts to factually believing. We thus needn’t feel any pressure from the argument to collapse my distinction between religious credence and factual belief: yes, in some cultural contexts there may be rational pressure for religious credences to have contents that at least don’t contradict factual beliefs; but no, that doesn’t imply that there is only one attitude type there. So given all the other reasons there are for drawing that distinction, we can leave it in place.

Where does that leave us?

I have much to say here. But basically I think this discussion leaves us with a cluster of large open research questions:

• What is the character of the rational pressure that people seem to feel to not have their religious credences overtly contradict their factual beliefs? It seems to be much weaker, even when it is present at all, than the pressure between factual beliefs to cohere. What else can we say about it?

• Why does this rational pressure seem to be more prevalent in some cultural contexts than others? Rational pressure to make factual beliefs cohere with one another is, as far as I can tell, universal. But rational pressure to make religious credences cohere with factual beliefs is, as far as I can tell, far from universal. What wanderings of cultural evolution made it crop up more in some places than others?

• And finally, the big normative question: even if people don’t in many cases—in point of psychological fact—feel that much rational pressure to bring their religious credences into coherence with their factual beliefs, should they? 

I look forward to seeing more and more research on the first two (descriptive) sorts of questions in the coming years, and I will be participating in such research myself. The third, normative question is perennial and outside the scope of the kind of research I usually do. Yet I hope my descriptive work furnishes conceptual resources for posing such normative questions with greater precision than has been available in the past. If it does that, I believe I’ll have done something useful.

[image: Dall-E rendering of "oil painting of guessing versus believing"]

Wednesday, October 05, 2022

What Makes for an Appropriately Rigorous and Engaging Online College Major?

This year, I'm serving on the systemwide University of California Committee on Educational Policy, and specifically I'm on a subcommittee tasked with developing guidelines for approving remote or online majors and minors in the U.C. system.

As we saw during the height of the pandemic, it's possible to do college instruction entirely online.  However, as we also saw, student engagement and learning are often not as good as with traditional in-person instruction.  Students show up on Zoom but then tune out, multitask, and have trouble paying full attention.  They watch videos at double speed.  They are less likely to ask questions.  There's less informal interaction before and after class.

Online majors (in which at least 50% of the course instruction for the major is remote) are coming; their eventual arrival seems inevitable.  I have a chance to play a leading role in shaping policy at one of the largest and most prestigious public university systems in the world, so I want to give the matter some good thought, including hearing the opinions of blog readers and friends and followers on social media.  What should U.C.'s policy be on these matters?  I'd be curious to hear your thoughts.

Some preliminary ideas:

(1.) Since evidence generally suggests lower engagement, less learning, and lower completion rates for students in online classes, we should expect that unless special measures are taken to increase student engagement and learning, an ordinary in-person class that is simply shifted to online presentation will have lower engagement, less learning, and lower completion rates.

(2.) Consequently, U.C. should not approve new online majors or the conversion of existing majors to online format unless special measures are taken to increase student engagement and learning.

(3.) Because of the necessity of such special measures, we should expect courses for online majors to typically have lower student-to-teacher ratios than otherwise similar in-person courses and to require more resources to administer.  The common image of online classes as cheaper to administer and as capable of supporting high student-to-teacher ratios, sometimes used as a justification for moving online, is likely, if acted on, to produce inferior student engagement and learning.  Online education should not be justified by expected cost savings.  Instead, we should expect additional expense.

(4.) Remotely watching an instructional video is more like reading a textbook than it is like engaging in interactive education.  Instructional videos cannot replace person-to-person interactions in real time.  

(5.) The opportunity for informal interaction before and after formal instruction, either in the classroom, or just outside the classroom, or in other locations on campus, is also sometimes educationally important, even if the interactions are brief.  For this reason, as well as because of lower expected student engagement during online lectures, online instructors should create ample opportunities for one-on-one or small-group personal interactions with the instructor, beyond ordinary lectures and ordinary discussion sections.

(6.) Remote instruction, especially timed testing, often creates more opportunities for academic dishonesty than classroom instruction does, and so far there are no fully adequate solutions to this problem that don't objectionably invade student privacy.  Reasonable additional precautions might be necessary to discourage academic dishonesty in remote classes.  One-on-one interactions can help create student expectations of being held to account for understanding the material and can help confirm student learning.

(7.) Ideally, if someone learns that a student completed an online major instead of an in-person major at U.C., their reaction should not be to suspect that that student received an inferior education but instead the opposite.  The aim should be to create a reputation for online majors at U.C. as especially rigorous and interactive, where students have even more high-quality person-to-person instruction and even better learning than in traditional in-person classes.

(8.) Online majors should be justified in terms of creating better engagement and learning than would be possible with in-person instruction.  Increasing enrollment and improving accessibility are insufficient by themselves to justify the creation of an online major or minor, unless there are also clear instructional benefits to moving online.

Monday, October 03, 2022

The wagon has three wheels: Reimagining philosophy of action from a working class perspective

Check out the guest post by Deborah Nelson at The Philosophers' Cocoon.

Last month, Debbie completed her dissertation under my direction, concerning implicit class bias in the philosophy of action literature.  She argues that the focus on stability over long periods of time, rational life plans, and career selves reflects the concerns and opportunities of middle-class and upper-class people, and that an account of practical agency among people with economic disadvantage would and should focus at least as much on flexibility and managing unpredictable complexity -- relatively neglected topics in philosophy of action.

----------------------------------------------------

The wagon has three wheels: Reimagining philosophy of action from a working class perspective

by Deborah Nelson

There’s a story my family likes to relay about my father as a child that has had a formative impact on how I approach claims about knowledge, rationality, and the evaluation thereof.  The one-sentence summary of the story is that, when he was five years old, my dad took an IQ test and only got one question wrong.  The more interesting facet of this story, though, is revealed by examining that one question.

The test question displayed a picture of a wagon with only three wheels and asked what was wrong with the wagon.  It just so happened that there was a three-wheeled wagon in regular use in my father’s household, which still functioned perfectly well, so he could not identify the problem with the wagon.  Needless to say, when I learned this story at a young age, it gave me early ideas about the ways that experience can affect how people perceive and understand the world, and the discussions in recent years of problems with IQ testing, as well as other standardized tests, have only added fuel to this fire.  These intellectual proclivities have clearly inspired my research interests, and I recently defended a dissertation at UCR that reflects them.

In my early days in philosophy, I was excited to learn that there is a whole subdiscipline dedicated to how people reason about which actions to take, but became less enthused as I learned that the concerns within it did not resemble my experiences.  Within the philosophy of action, the concerns seem to reflect those of people who enjoy economic advantage.  The agents described (directly or indirectly) as ideals in this literature are self-complete entities, who consistently resist temptation, retain stable intentions and characters, and have life-defining projects and plans, etc.  This focus seems, to my working-class-inspired mind, to ignore some central concerns and ideals for which my background prepared me.

[continued here]