I decided to look at UC Riverside's grade distributions since 2013, since faculty now have access to a tool to view this information. (It would be nice to look back farther, but even the changes since 2013 are interesting.)
The following chart shows grade distributions quarter by quarter for the regular academic year, from 2013 through the present. The dark blue bars at the top are As, the medium blue bars Bs, the light blue bars Cs, and the red bars Ds, Fs, and Ws.
Three things are visually obvious from this graph:
In Fall 2013, 32% of enrolled students received As. In Fall 2023, 45% did. (DFWs were 9% in both terms.)
One open question is whether the new normal of about 45% As reflects a general trend independent of the pandemic spike or whether the pandemic somehow created an enduring change. Another question is whether the higher percentage of As reflects easier grading or better performance. The term "inflation" suggests the former, but of course data of this sort by themselves don't distinguish between those possibilities.
The increase in percentage As is evident in both lower division and upper division classes, increasing from 32% to 43% in lower division and from 33% to 49% in upper division.
How about UCR philosophy in particular? I'd like to think that my own department has consistent and rigorous standards. However, as the figure below shows, the trends in UCR philosophy are similar, with an increase from 26% As in Fall 2013 to 41% As in Fall 2024:
Lower division philosophy classes at UCR increased from 25% As in Fall 2013 to 40% As in Fall 2023, while upper division classes increased from 26% to 47% As.
Smoothing out quarter-by-quarter differences, here is the percentage of As, Fall 2013 - Spring 2014 vs Winter 2023 - Fall 2023, for Philosophy and some selected other disciplines at UCR for comparison:
Philosophy: 27% to 43% (28% to 42% lower, 25% to 46% upper)
English: 20% to 33% (15% to 28% lower, 38% to 64% upper)
History: 28% to 52% (23% to 52% lower, 48% to 52% upper)
Business: 28% to 46% (20% to 24% lower, 29% to 49% upper)
Psychology: 32% to 51% (33% to 51% lower, 31% to 51% upper)
Biology: 22% to 38% (28% to 36% lower, 17% to 41% upper)
Physics: 26% to 39% (26% to 37% lower, 40% to 41% upper)
As you can see, in some disciplines at some levels, the percentage of As has almost doubled over the ten-year time period.
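If others want to run a similar tally at their own institutions, here is a minimal sketch of the computation in Python. The record format, the percent_as helper, and the enrollment counts are all illustrative assumptions, not UCR's actual tool or data; the counts are simply chosen to echo the lower-division philosophy percentages reported above.

```python
# Minimal sketch of the tally behind the figures above. The enrollment
# counts are made up for illustration; real numbers would come from a
# registrar's grade-distribution tool.

# Each record: (discipline, term, level, grade_band, number_of_students)
records = [
    ("Philosophy", "Fall 2013", "lower", "A", 120),
    ("Philosophy", "Fall 2013", "lower", "B", 180),
    ("Philosophy", "Fall 2013", "lower", "C", 110),
    ("Philosophy", "Fall 2013", "lower", "DFW", 70),
    ("Philosophy", "Fall 2023", "lower", "A", 200),
    ("Philosophy", "Fall 2023", "lower", "B", 180),
    ("Philosophy", "Fall 2023", "lower", "C", 80),
    ("Philosophy", "Fall 2023", "lower", "DFW", 40),
]

def percent_as(discipline, term, level):
    """Percent of enrolled students in the given slice who received an A."""
    a_count = total = 0
    for disc, trm, lvl, band, n in records:
        if (disc, trm, lvl) == (discipline, term, level):
            total += n
            if band == "A":
                a_count += n
    return 100.0 * a_count / total if total else float("nan")

print(percent_as("Philosophy", "Fall 2013", "lower"))  # 25.0
print(percent_as("Philosophy", "Fall 2023", "lower"))  # 40.0
```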
UCR is probably not unusual in the respects I have described. However, if other people have similar analyses for their own institutions, I'd be interested to hear, especially if the pattern is different.
I doubt, unfortunately, that students are actually performing that much better. UCR philosophy students in 2023 were not dramatically better at writing, critical thinking, and understanding historical material than were students in 2013. I conjecture that the main cause of grade inflation is institutional pressures toward easier grading.
I see two institutional pressures toward higher grades and more relaxed standards:
Teaching evaluations: Generally students give better teaching evaluations to professors from whom they expect better grades.[1] Other things being equal, a professor who gives few As will get worse evaluations than one who gives many As. Since professors' teaching is often judged in large part on student evaluations, professors will tend to be institutionally rewarded for giving higher grades, ensuring happier students who give them better evaluations. Professors who are easier graders, if this fact is known among the student body, will also tend to get higher enrollments.
Graduation rates: At the institutional level, success is often evaluated in terms of graduation rates. If students fail to complete their degrees or take longer than expected to do so because they are struggling with classes, this looks bad for the institution. Thus, there is institutional pressure toward lower standards to ensure high levels of student graduation and "success".
There are fewer countervailing institutional pressures toward greater rigor and more challenging grading schemes. If classes are insufficiently rigorous, a school might risk losing its WASC accreditation, but few well-established colleges and universities are at genuine risk of that.
At some point, the grade "A" loses its strength as a signal of excellence. If over 50% of students are receiving As, then an A is consistent with average performance. Yes, for some inspiring teachers and some amazing student groups, average performance might be truly excellent! But that's not the typical scenario.
I have one positive suggestion for how to deal with grade inflation. But before I get to it, I want to mention one other striking phenomenon: the variation in grade distributions between terms for what is nominally the same course. For example, here is the distribution chart for one of the lower division classes in UCR's Philosophy Department:
The distribution ranges from 11% As in Fall 2014 to 72% As in Fall 2020.
Some departments in some universities have moved to standardized curricula and tests so that the same class in each term is taught and graded similarly. In philosophy, this is probably not the right approach, since different instructors can reasonably want to focus on different material, approached and graded differently. Still, that degree of term-by-term variation in what is nominally the same class raises issues of fairness to students.
My suggestion is: sunlight. Let course grade distributions be widely shared and known.
Sunlight won't solve everything -- far from it -- but I do think that in looking at students' teaching evaluations, seeing the professor's grade distribution provides valuable context that might disincentivize cynical strategies to inflate grades for good evaluations. I've evaluated teaching for teaching awards, for visiting instructors, and for my own colleagues, and I'm struck by how rare it is for information about grade distributions even to be supplied in the context of evaluating teaching. A full picture of a professor's teaching should include an understanding of the range of grades they are distributing and, ideally, random samples of tests and assignments that earn As and Bs and Cs. This situates us to better celebrate the work of professors with high standards and the students in their classes who live up to those high standards.
Similarly, grade distributions should be made available at the departmental and institutional level. In combination with other evidence -- again, ideally random samples of assignments awarded A, B, and C -- this can help in evaluating the extent to which those departments and institutions are holding students to high standards.
Student transcripts, too, might be better understood in the context of institutions' and departments' grading standards. This would allow viewers of the transcript to know whether a student's 3.7 GPA is a rare achievement in their institutional context, or simply average performance.
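To make that concrete, here is a minimal sketch, in Python, of how a transcript viewer might place a GPA within an institution's distribution. The gpa_percentile helper and both GPA lists are hypothetical, invented purely for illustration.

```python
# Minimal sketch: putting a GPA in institutional context.
# The GPA lists are hypothetical; a real version would use an
# institution's actual distribution of graduating GPAs.

from bisect import bisect_left

def gpa_percentile(gpa, institution_gpas):
    """Percent of the institution's GPAs strictly below the given GPA."""
    ordered = sorted(institution_gpas)
    return 100.0 * bisect_left(ordered, gpa) / len(ordered)

# At a hypothetical heavily inflated institution, 3.7 is near the median...
inflated = [3.9, 3.8, 3.8, 3.7, 3.7, 3.6, 3.5, 3.3, 3.0, 2.8]
# ...while at a stricter one it is near the top.
stricter = [3.8, 3.5, 3.3, 3.2, 3.0, 2.9, 2.8, 2.7, 2.5, 2.2]

print(gpa_percentile(3.7, inflated))  # 50.0
print(gpa_percentile(3.7, stricter))  # 90.0
```

The same 3.7 sits near the middle of the first (inflated) distribution and near the top of the second, which is exactly the context a bare transcript hides.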
--------------------------------------------------
[1] A recent study suggests that grade satisfaction, rather than grading leniency per se, might be the primary driver of the correlation between students' expected grades and their course evaluations -- the two can come apart when a student is satisfied with a grade they worked hard for -- but grading leniency remains an instructor's easiest path to student grade satisfaction, so the institutional pressure stands.
I think there is another important cause of grade inflation besides those mentioned in your post (though closely related to your comments about teaching evaluations). Professors, like most people, want to be liked. Even if a professor does not care at all about their official student evaluations, they might still prefer to walk into a classroom full of students who feel warmly towards them, rather than hostile. Of course, there are many ways to get students to like you, but one of the simplest methods is to make assessments slightly easier and assign grades slightly higher than students expect.
Additionally, I have taught a number of university courses and I have found that it is quite common to receive a few emails after the course ends from students who are unhappy with their grades and either question grading decisions or ask how they can improve their grade (which, after the course is over, is impossible). Dealing with these emails, while not a big deal, is not most people's idea of fun. If such complaints can be avoided by giving everyone slightly higher grades, then that may be a worthwhile tradeoff for many professors.
Thanks for the comment, Patrick. Yes, I'm inclined to agree that this is another source of pressure toward more lenient grading standards.
My own institution publishes (on a website that few know about) the grade averages and distributions by department. This is a kind of (partial) sunlight, but (a) the wide disparity between departments seems not to have any effect on the inflating departments, and (b) since Philosophy is among the four departments with the lowest averages (along with Mathematics, Physics, and Chemistry), I fear that wider knowledge of these facts will just drive students to the departments with stunningly high averages.
JP: Yes, I agree that's a risk. But I suspect students get wind of this anyway, and it's fellow faculty who are more likely to be ignorant of it.
This is all very interesting! I'm not a fan of grade inflation, but I think there are possible causes of the trend toward better grades that are rarely considered in the hypothesis space: namely, that teaching has gotten better and, as a result, student performance has improved, so students are earning better grades (or, in any case, that students have simply gotten better for other reasons). Of course, this explanation might be false, but it's something to rule out before jumping to grade-inflation (i.e., better grades for the same performance) conclusions.
Anon 01:01 -- Yes, entirely fair point. My basis for thinking that's not the main explanation is mostly impressionistic rather than rigorously grounded. There is a small academic subliterature on this, for example:
https://www.nber.org/system/files/working_papers/w28710/w28710.pdf
which suggests that it's not primarily improvement in student quality, but opinions differ.
I've been told by an economist that the main purpose of college is sorting out candidates for the job market -- kids crave elite grades to make money after a degree.
The big idea is to show discipline and the ability to do intellectual work.
This system, I'd gather, doesn't reward independent thinking or talent but following orders effectively and efficiently. If teachers teach to the test and worry about ratings, grades are meaningless.
You, Eric, care about actual teaching, actually care about your students, and care about grades.
You are not alone, but you are in a minority.
The problem is that college operates on a business model now and no longer as a sequestered institution -- the lawn on the great college green is astroturf, and everyone in a graduation gown looks more or less the same.
In my department (community college math), one issue is subsequent courses. If students pass but don't really understand, we hear about it from the next teacher! Other disciplines have sequences of courses too, but that might be a particularly strong effect in Math (and other STEM).
Never mind -- I see that the passing rate hasn't changed all that much. That actually surprises me, but again, maybe that's a math perspective. So many of our (non-major) students are just trying to get that C.
ReplyDeleteFor what it is worth: the undergrad which I attended (Reed College: graduated 10 years or so ago) has *not* had serious grade inflation, or at least had not had grade inflation at the time of my graduation. And they adopted at least one of your approaches: they attached slips discussing, in depth, the lack of grade inflation at Reed to *every* grade history mailed out to every grad school, during apps season. This has worked, at least in the sense that Reed has been remarkably effective at placing students. So there's a working test case for your suggestion!
Links:
Reed College slip describing grades: https://www.reed.edu/registrar/pdfs/grades-at-reed.pdf
Reed College grad school placement: https://www.reed.edu/ir/phd.html
I see no disadvantage to having transcripts include two grades for each course, a letter grade and a course rank (e.g., 4/40, meaning 4th in a class of 40), where there can be lots of ties in rank to the extent instructors cannot or do not want to make more fine-grained distinctions. I predict it would lead to grade "dis-inflation", as people start to realize that the inflated letter grades are unable to hide information about how students actually did in the course.
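For concreteness, here is a minimal sketch, in Python, of how such a dual grade could be computed. The scores, the letter-grade cutoffs, and the dual_grades helper are hypothetical, and I'm assuming standard competition ranking so that tied scores share a rank.

```python
# Rough sketch of the dual-grade proposal: report a letter grade plus a
# course rank such as "4/40", with ties sharing the same rank.
# Scores and letter-grade cutoffs are hypothetical.

def letter_grade(score):
    # Hypothetical cutoffs; an instructor would choose their own.
    if score >= 90: return "A"
    if score >= 80: return "B"
    if score >= 70: return "C"
    return "D/F"

def dual_grades(scores):
    """Map each student to (letter, 'rank/class_size'); ties share a rank."""
    n = len(scores)
    results = {}
    for student, score in scores.items():
        # Standard competition ranking: 1 + number of strictly higher scores.
        rank = 1 + sum(1 for s in scores.values() if s > score)
        results[student] = (letter_grade(score), f"{rank}/{n}")
    return results

example = {"Ana": 95, "Ben": 95, "Cam": 88, "Dee": 74}
print(dual_grades(example))
# {'Ana': ('A', '1/4'), 'Ben': ('A', '1/4'), 'Cam': ('B', '3/4'), 'Dee': ('C', '4/4')}
```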
ReplyDeleteThanks for the continuing comments, folks!
Howie: A pessimistic view! I think there's some truth in that, but I hope I'm not actually in the minority. :-)
Kari: I agree that if the higher-level classes keep their standards the same, that puts pressure on the lower-level classes not to lower their standards. It is interesting that the pass rate hasn't changed much while the percentage of As has. Also, of course, community colleges might be quite different!
Anon: Reed is often a class apart!
Eddy: I like that idea!