The Surprising and Disappointing Predictors of Success in UCR's Philosophy PhD Program
Wednesday, February 15, 2012

Well, surprising and disappointing to me at least! You might be neither surprised nor disappointed.
Here's the background: I'm on the UCR Philosophy PhD admissions committee again this year. I'm a fan of treating senior-year GPA in philosophy as a crucially important part of an application. Here's the kind of thing I tend to say to other members of the admissions committee: "For each A-minus, imagine an unwritten letter saying that this student isn't quite top notch and not really ready for a PhD program in philosophy". I'm also a fan of not putting too much weight on the reputation of students' undergrad institutions. I'm cheering for the Cal State underdogs. And the GRE I'm inclined to regard as having no predictive value once GPA, writing sample, and letters of recommendation are factored in. (I compare GRE to the bench press as a predictor of athletic performance. If you knew nothing else, it would be somewhat predictive, but once you've seen the person in the field, it really doesn't matter.)
Being an empirically-minded philosopher, though, I'm not happy with mere armchair plausibility. So I thought I could give extra weight to my arguments by looking at how current UCR grad-student performance relates to undergrad GPA, to undergrad institution of origin, and to GRE scores. The staff in the department office kindly provided me with data for all grad students from the entering class of 2007 to the present. This is only 37 students total (with some missing cells), but I thought I might at least pick up some trends. My two measures of academic success at UCR are GPA in the program (setting 3.5 as a floor because of two outliers with some Fs) and whether the student dropped out of the program.
It looks like everything I thought I knew is wrong.
I couldn't make undergrad GPA predictive. I tried several different ways. I tried including undergrad GPA for all students and I tried excluding those who had done some master's work before coming to UCR. When overall GPA didn't work I asked the staff to recode the data with just senior-year GPA in philosophy courses. Still no correlation. Maybe on a much larger sample GPA would show up as predictive, but on this sample it's not even close, not even close to close.
Two important caveats: First, all the students had excellent undergrad GPAs (except for one who repaired with a Master's degree). We're comparing 3.7's vs. 3.9's here, not 3.0's vs. 3.9's. In philosophy courses, their GPAs are even better: median senior-year philosophy GPA was 3.91. So surely this is a ceiling effect of some sort. Still, in my mind, 50% A-minuses in senior-year philosophy looks very different in a PhD application than does straight A's in senior-year philosophy. I would have thought the second sort of student much more promising overall. Second: The students with relatively lower GPAs who were nonetheless admitted (and thus the only students in this sample) presumably had especially excellent letters and writing samples, to help compensate for their disadvantage in GPA relative to other applicants. So maybe that's the explanation. (See, I just can't abandon my opinion!)
Equally annoyingly, given my biases, the verbal section of the GRE was highly predictive. (Math was not predictive.) Verbal GRE score correlates at .49 with graduate student GPA in Philosophy at UCR (p = .004). Our students' median verbal GRE score is 680. Those who scored above median have a mean UCR GPA of 3.91. Those who scored median or below have a mean UCR GPA of 3.76 (t test, p = .001). (Yes, we mostly give our PhD students grades ranging from B+ to A. If you're getting mostly A-minuses and B-pluses in our program, you're "struggling".) This shows up especially strikingly in a 2x2 split-half analysis. 11/14 (79%) of students with above-median verbal GREs have above-median GPAs in our program, while only 5/18 (28%) of students with at-or-below-median verbal GREs have above-median GPAs in our program (chi-square, p = .004).
There was also a trend for overall GRE score to predict sticking with the program: Dropouts had a mean GRE of 1243. Non-dropouts had a mean GRE of 1385. (This was not statistically significant, partly because the dropout group had much higher GRE variance, messing up straightforward application of the t test; p = .13, p = .01 assuming equal variances.)
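For anyone who wants to run the same tests on their own program's data, here is a minimal sketch using Python's SciPy. The GRE and GPA arrays below are made-up placeholders, not our actual student records; only the 2x2 table reuses the counts reported above.

```python
# A minimal sketch of the tests reported above, using SciPy.
# The gre/gpa arrays are made-up placeholders, NOT the actual UCR data;
# only the 2x2 table reuses the counts reported in the post.
import numpy as np
from scipy import stats

verbal_gre = np.array([720, 650, 680, 710, 590, 760, 640, 700, 670, 730])
grad_gpa = np.array([3.92, 3.75, 3.81, 3.88, 3.70, 3.95, 3.74, 3.90, 3.79, 3.93])

# Pearson correlation (the r = .49 style of result)
r, p = stats.pearsonr(verbal_gre, grad_gpa)
print(f"correlation: r = {r:.2f}, p = {p:.3f}")

# Median split on verbal GRE, then compare mean program GPAs.
# equal_var=False gives Welch's t-test, the safer choice when the
# two groups have very different variances (as in the dropout data).
median = np.median(verbal_gre)
above = grad_gpa[verbal_gre > median]
at_or_below = grad_gpa[verbal_gre <= median]
t, p = stats.ttest_ind(above, at_or_below, equal_var=False)
print(f"median split: t = {t:.2f}, p = {p:.3f}")

# 2x2 split-half table: rows = above vs. at-or-below median GRE,
# columns = above vs. below median program GPA (counts from the post).
table = np.array([[11, 3],
                  [5, 13]])
chi2, p, dof, expected = stats.chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, p = {p:.3f}")
```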
And reputation of undergrad institution was predictive of GPA in our program. Students whose undergrad institution is a US News top-50 National University or top-25 National Liberal Arts College had a mean 3.91 GPA at UCR, while those not from those elite institutions had a mean GPA of 3.77 (t test, p = .01, equal variance not assumed).
The one bright spot for my preconceptions was this: Students with graduate-level training (usually an MA) before entering UCR tended to do well in the program. Their GPA was 3.87, compared to 3.77 for students with no prior graduate-level training (t test, p = .06); and they were much less likely to drop out: 1/18 (6%) vs. 8/19 (42%) (chi-square, p = .01).
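(A statistical side note: with a cell count as small as 1/18, Fisher's exact test is often recommended over the chi-square approximation. Here's a quick sketch with the dropout counts above, in the same SciPy style:)

```python
# Dropout by prior graduate training, using the counts from the post.
# With an expected cell count this small, Fisher's exact test is
# often preferred to the chi-square approximation.
import numpy as np
from scipy import stats

# rows: prior grad training vs. none; columns: dropped out, stayed
table = np.array([[1, 17],
                  [8, 11]])
chi2, p_chi, dof, _ = stats.chi2_contingency(table)
odds_ratio, p_fisher = stats.fisher_exact(table)
print(f"chi-square p = {p_chi:.3f}, Fisher exact p = {p_fisher:.3f}")
```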
I'm tempted to claim the philosopher's prerogative of rejecting empirical evidence I don't like and sticking with my armchair intuitions. Surely there's something in Kant I can use to prove a priori from principles of pure reason that undergrad GPA is more important than GRE! One thought is this: Since we don't take GRE very seriously in admissions and we do take undergrad GPA very seriously, whatever predictiveness GRE has isn't washed out through the admissions process in the way that the predictive value of GPA probably is to some extent washed out (per caveat 2 in the GPA discussion). If we started taking GRE very seriously in admissions, its predictive value among admitted students might vanish, since those with low GREs would have to be all the more excellent in the other dimensions of their application.
I'd be interested to hear whether other professors at PhD programs have similar data, and whether they find similar results.
Posted by Eric Schwitzgebel at 6:00 PM
Labels: advice, professional issues in philosophy
31 comments:
This is great stuff, Eric! I would tend to largely agree with your hypotheses and argue that your results are due to a) small sample size and b) low levels of variance in your variables (i.e., everyone entering has a very high undergrad GPA, and most graduate GPAs vary very little within that B/A- range).
My MA thesis at UCR examined HSGPA, SAT scores, and a personality trait (conscientiousness) as predictors of freshman and senior college GPA. To support your hypothesis, I found that while SAT scores predicted significant variance in freshman GPA, it did little for senior GPA (that is, it basically predicted survival of the first year!). By senior year, conscientiousness was a better predictor than SAT scores, and HSGPA was the best predictor of both freshman and senior college GPA, hands down. The best predictor of future behavior is past behavior.
In terms of elite schools, there are possible confounding factors: grade inflation, SES (i.e., people who go to Ivy League schools probably come from higher income families and have greater educational opportunities and preparation for graduate school).
I'll bet you dollars to donuts if you asked for more data from the Psych Department, they would share it with you, and you could increase your sample size! It wouldn't tell you anything about senior philosophy grades, but it would re: GRE scores, HSGPA, school prestige, and graduate performance.
Finally, I just wanted to say that I *love* your bench-press analogy. Stolen. Even if (currently) empirically undemonstrable in nature. =)
Seth
Thanks for summing up that period of my life so succinctly! If only I had gone to a GRE test prep class, I might not have been overcome by the deep depression and self-doubt that eventually scuttled my academic career.
Great article - I always did appreciate your discomfort in the armchair.
Your premise that GPA in a graduate program is an indicator of success in that program is flawed. A PhD is a research degree, and can only be evaluated in terms of the quality and impact of the research/thesis -- something not easily quantifiable.
As a PhD student myself, I intentionally earn grades near the minimum (3.5 GPA) in order to focus on my research. I see classes as a formality that simply wastes my time, since I have already learned how to find information I need on my own. I could earn a 4.0 with a bit more effort, but either my research would be lower in quality, or I'd take longer to graduate.
I wonder if you are measuring what you think you are measuring. And some of your hesitant remarks and caveats suggest that you feel this worry, too.
What you seem to want to know is whether, among all applicants, some variables are predictive. To what degree should we care about GPA, GRE, and undergraduate institutional status in admitting students in the first place? Or, in other words, would a student with better GPA, GRE, and/or UG rank perform better in graduate school than a student with worse GPA, GRE, and/or UG rank if both were admitted?
But the measurements you can actually make seem directed at a very different question: among those who are admitted, which variables predict success going forward.
Another way to say this: with respect to your question of interest, you should be worried that your sample suffers from serious selection bias.
On another note, did you check to see whether any interaction terms were significant?
I really do think you're too harsh on Kant. Here's one of my favourite comments of his that I would imagine is right down your alley:
“But though it is older than all other sciences, and would survive even if all the rest were swallowed up in the abyss of an all-destroying barbarism, it has not yet had the good fortune to enter upon the secure path of a science. For in it reason is perpetually being brought to a stand, even when the laws into which it is seeking to have, as it professes, an a priori insight are those that are confirmed by our most common experiences. Ever and again we have to retrace our steps, as not leading us in the direction in which we desire to go. So far, too, are the students of metaphysics from exhibiting any kind of unanimity in their contentions, that metaphysics has rather to be regarded as a battleground quite peculiarly suited for those who desire to exercise themselves in mock combats, and in which no participant has ever yet succeeded in gaining even so much as an inch of territory, not at least in such manner as to secure him in its permanent possession. This shows, beyond all questioning, that the procedure of metaphysics has hitherto been a merely random groping, and, what is worst of all, a groping among mere concepts.”
You seem to assume not only that GPA must predict success, but that it must *reliably* indicate anything in the first place. Now, there might be a whole bunch of reasons why you get a given grade in a given class, and none of these reasons can entirely be ruled out when assessing even average GPAs. You clearly cannot assume that how a young student experienced her undergrad years indicates how she's likely to succeed in a wholly different kind of degree.
I appreciate your article and your insight. However, a more interesting comparison for me would be of the "predictors" of graduate success and your (or other professors') anecdotal opinion about that student. Do students, for example, with a higher GRE score and undergraduate GPA seem to provide the most insight in class or write the most 'interesting' papers? Do they seem the best read, or the most capable researchers? etc.
Also, how does their graduate GPA compare to the overall quality of their thesis? And how does that compare with their undergraduate GPA?
Very interesting! I did something similar for our graduate program at Texas in the 1990s, but covering a longer period of time. I came to similar conclusions. GREs were overall the most predictive factor; undergraduate GPA had a very low correlation.
I did find that this varied depending on the specialization of the student. Verbal GRE mattered for students working in continental and non-technical analytic fields. Quantitative mattered for students in technical areas and in ancient philosophy. GPA mattered some for non-technical analytic but not for continental, technical areas, or history.
Thanks for all the comments, folks!
@ Seth: Right. See my FB comments.
@ Anon 7:42: Interesting point! I suspect your situation is somewhat unusual. But you're definitely right that there are weaknesses to GPA as a success measure. I chose it because it's easy to get and to quantify and because I do think it has some validity. My *general* impression is that the stronger students tend to have higher GPAs, but there are definitely exceptions.
@ Jonathan: If I'm correctly understanding your selection bias worry, that's the same worry I express as caveat 2.
Here's another way of thinking about the issue. If the PhD admission market were perfectly efficient in terms of unidimensional overall quality, both of student and of grad program, then one might expect that no variable would be predictive, since students who were relatively strong in one dimension would have to be relatively weak in some other dimension not to be picked up by a higher-ranked school, and students who were relatively weak in one dimension who weren't also relatively strong in another would be rejected by UCR. So part of what I'm doing is thinking about whether there are inefficiencies in the system that I can exploit, Moneyball-style.
I did look for interaction effects in multiple regressions, though with this few data points I'm not sure about the value of multiple regressions. I found a couple marginal interactions and a significant (p = .04) three-way interaction with GRE being more predictive for students from elite undergrad institutions who have Master's training than for other students -- though I'm not sure what to make of that or whether it's just noise given the number of tests and relatively high p value.
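For the curious, here's the shape of the model I mean, sketched on synthetic stand-in data (the real records are confidential, and the column names are invented for illustration). Note how few degrees of freedom are left with ~37 rows and an eight-parameter model -- hence my hesitation about the results.

```python
# A sketch of the interaction test described above, on synthetic data.
# Column names are invented for illustration; the real records are
# confidential. With only ~37 rows, an eight-parameter model like this
# is barely identified -- hence the hesitation about the results.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 37
df = pd.DataFrame({
    "gre_verbal": rng.integers(500, 800, n),
    "elite_ug": rng.integers(0, 2, n),   # elite undergrad institution?
    "prior_ma": rng.integers(0, 2, n),   # graduate training before entry?
})
df["grad_gpa"] = (3.5 + 0.0005 * df["gre_verbal"]
                  + rng.normal(0, 0.05, n))

# 'a * b * c' expands to all main effects plus two- and three-way
# interactions; the gre_verbal:elite_ug:prior_ma term is the one
# discussed above.
model = smf.ols("grad_gpa ~ gre_verbal * elite_ug * prior_ma", data=df).fit()
print(model.summary())
```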
ToneMasterTone: Kant doesn't seem quite as modest about the value of his own work. I'm inclined to think that he thinks of himself as having repaired this situation.
Nicolas: Right. But when an admissions committee has 100-200 applications from students most of whom seem to be strong students with good grades and good letters, what are we to do? All else being equal, shouldn't we choose the 3.9 student over the 3.7 student? Should we just flip a coin? Of course we also look at letters and the sample, and maybe those really are better predictors than GPA. They certainly play a big role in our thinking; they're not minor factors in the decision. I would like to explore the predictive value of letters and sample, but it's difficult to quantify! There are lots of excellent samples and letters, too.
Dissata: Yes, I've thought about polling professors for their opinions about the overall quality of students and the quality of dissertations and the like. But I'm somewhat apprehensive about doing this, since I think if word got out that I was doing this, it could have negative repercussions for the student experience here -- especially because I think students sometimes profit from having a fresh start with a new advisor who doesn't have preconceptions about their ability. I'd prefer there not to be a secret drawer containing professors' overall rankings of student quality.
@ Dan: Interesting! Thanks for that. Yes, if I follow up on this with a larger data set it would be interesting to break down by subfield to see if the predictors play out differently for the different subfields.
Since you hypothesize that variables such as undergrad GPA, letters, etc. are greater predictors than GRE scores, it would be interesting (given a larger sample) to run a multivariate regression model to see if the significant variance explained by GRE scores holds up when controlling for those other factors. It would seem that it may be all of those factors and not just GRE (I hope anyway). Good stuff! Very interesting!
Tony La Russa played this predictability game decades ago with baseball statistics and found some predictability factors, but in particular situations at the plate/mound, they proved of course unreliable, because as Aristotle sagely noted, one cannot generalize over particulars. Many of your stats are obviously unreliable, since GPA differs so widely from school to school, professor to professor. Verbal aptitude seems more promising, since most of philosophy is a mere vocabulary game, inventing alternative meanings for distorting perfectly good words. And "success" ought to take into consideration long-term employment -- so you need a wider metric.
My father was once a professor of statistics and loved the phrase: "garbage in, garbage out". But rather than take his cynical stance to all your efforts, I suggest you find a marker for wisdom, instead.
It's amusing just how much good academics profess to hate standardized tests. Even when numbers (and surely you know ETS has far more informative and high-N tests than this) show that 3-hour fill-in-the-circles quasi-IQ tests are highly informative, the bias of the poster and the comments is to try and find as many loopholes as possible...
Anon: Yes, I messed around with multiple regression, but with so few datapoints it's not really interpretable I think.
David: Yes, a marker for wisdom would be very useful. Perhaps count up the number of moments in which the person stands lost in thought, gazing vaguely into the distance? Socrates would have done well by this metric, I'm told. I agree somewhat with the GIGO observation, but it's interesting to play with the numbers anyway.
Prasad: Yes, I plead guilty. That's my bias. Has ETS looked at philosophy in particular, controlling for GPA? But ETS is also a biased source. As Seth points out in an earlier comment, and as I myself have seen in UCR data I looked at some years ago, SAT is often not very predictive of undergraduate performance when other relevant factors are properly factored in. In fact, in an analysis of UCR GPA I did on all existing UCR students circa 1999, I found that SAT was entirely null as a predictor of performance once one particular version of the HS GPA measure was included in the model (which version, I forget, but there are several versions).
Hi Eric, thanks for another interesting, thought-provoking post! Here's a slightly different way of spinning points in the ballpark of ones you've made:
You started off asking: among students who apply to UCR which features would be most predictive of success? (Where you've taken "success" to be measured, well enough, by GPA in grad school).
You gathered data that would be directly relevant to answering a different question: among the students who were both accepted to and chose to attend UCR, which features are most predictive of success?
Of course, your sample dataset is not at all a random sampling of the population of applicants you initially asked about, not least because the only way somebody could make it into your sample was by having a high GPA and/or by having some other convincing indicator(s) of potential success.
So your sample data set is a quite poor sample to try to use to answer your initial question, as this sample can't detect any valid signs of success that your committee already used to determine who would be in the sample.
However, your sample might still help answer other questions in the ballpark, especially questions about whether your admissions committee under- or over-valued any cues.
If your committee were doing its job perfectly (assuming there were abundant candidates of every quality), then you would expect there to have been no reliable signs of which students would do better. (If there were such signs, your committee should have used them to reject students who predictably would do poorly, and instead accept wait-listees who would predictably do better.) So, if your study found *no* correlations, that would be heartening evidence that your committee hasn't been missing out on valuable informational cues. So, e.g., it seems that your admissions committees have been valuing GPA close to right.
When you *do* find correlations, this indicates that your committee was probably using suboptimal criteria -- had it made better use of these cues, it could have admitted more successful students. So, e.g., it seems that your committees must have been undervaluing the relevance of Verbal GRE scores, and would probably have done better had they rejected some low-GRE candidates and admitted some higher-GRE wait-listees instead.
This doesn't mean that you were wrong to think GPA is more relevant than GRE at predicting how *applicants* will do -- it just means that your committee already correctly values GPA, but not GRE.
Justin, that is very nicely put. Thanks for helping me conceptualize it more clearly!
Two minor caveats: (1.) I tend to be the senior-year-philosophy-GPA pusher on the committee, so my antecedent opinion would be that the average admissions committee would still be undervaluing that as an indicator. But maybe not. (2.) Your background assumption is (as you acknowledge) pretty idealized, and to that extent I would still have expected GPA to be predictive.
Regarding (1): The evidence fails to support your claim that skewing the committee more towards valuing GPA would improve the success of your admitted candidates. On the contrary, your data suggest that, so long as the committee retains its relative valuations of other parameters, increasing their valuation of GPA would tend to drive their selection closer to whatever the mean quality of applicants was (which is presumably *not* the direction they want to go). Your data *do*, however, justify pushing for your committee to take GRE scores more seriously, which entails at least sometimes preferring high-GRE applicants who would (barely) have been wait-listed over low-GRE people who would (barely) have been accepted by the committee's old standards.
I'm not sure I understood your (2). Was your thought something like the following? "I know we've admitted a quite diverse group of students with quite little regard to their GPA. So, this sample should have been random enough that, if GPA were a good predictor for applicants in general, then it should still have been correlated with success even in this not-really-random sampling of applicants." If you're confident of that, then, yes, this data should make you question your earlier high valuation of GPAs. But I'm not sure how you *could* be confident of that. Surely your committee has been taking GPA into account, along with other factors, right? If so, then it should be possible that both (a) GPA is correlated with potential success among applicants, and yet (b) your committee has been doing well enough at reading and combining various correlates of success that GPA is *not* correlated with success among accepted students. So I don't think your failure to observe a correlation in (b) is evidence that there wasn't a correlation in (a).
The caveat *I* had thought about adding to my earlier argument was this: if a study like yours reveals no correlations, that really just means that the committee has been consistently reading all the cues of a certain level of quality, but this in no way guarantees that they've been consistently reading all the cues of the top level of quality. Data of non-correlation are even compatible with the claim that your committee was consistently choosing the *worst* students in the applicant pool! (Of course, we have background theoretical reasons to think that the sorts of criteria they used probably were correlated with *high* quality, but that is not in any way confirmed or disconfirmed by non-correlations in the data you gathered.)
I take the upshot to be this: When you see that feature X was correlated with success among people you chose to admit, then you should start to value X more. If you don't see any correlations, then that means you've been exhausting the informational value of all the cues of whatever level of quality it is that you've been getting. If you're not happy with that level of quality, then you'll probably have to tweak multiple valuations at once. Or you could spend a few years admitting a truly random sampling to serve as an unbiased dataset for empirically figuring out what valuations to use in the future.
Justin: I don't entirely disagree with your final upshot, and I think my thinking on that is clearer now than it was when I started this project. However, I continue to think that GPA should have been predictive even if we are weighting it exactly right in admissions, because of the large failures in the efficient markets interpretation of grad school admissions.
Here's one assumption of the efficient-markets hypothesis: that students will always choose to go to the most selective school to which they are admitted. Suppose that we admit 20 students across the range of expected quality from minimum-for-UCR to Princeton/NYU quality. Any student who turns down a more selective school for UCR would then be expected to perform better than others in the entering class. Predictive factors like GPA should then be expected to differentiate performance. And while it is unusual for students to turn down top-10 schools for UCR, students often do turn down somewhat more prestigious schools for UCR either for reasons of fit with the strengths of our program or for personal reasons. Maybe this is especially true for UCR given our unusual profile of faculty strengths, blending analytic and Continental.
Another idealization failure of the rational-markets model is this: We consider area of interest as an important factor in admissions. If there are a dozen top-notch applicants in one area, we can't let them all in and so we start to apply stricter criteria. Also, since it is sometimes hard to compare writing samples across radically different subareas (e.g., technical analytic metaphysics vs. Nietzsche interpretation), a tempting approach is to say something like, "well, let's admit three students of broadly this sort, and two of broadly this sort, and..." and then make comparisons within those subgroups. Differential performance between the subgroups would be likely to follow, and once again factors like GPA should start to re-emerge as predictors.
One would also expect imperfection and noise and luck in the process -- this seems very hard to deny once one has been in the admissions room! -- so any noise that results in what should have been below-cutoff candidates being admitted should also result in inefficiencies that reveal the power of the predictors. And I'm inclined to think that the noise in the process is pretty large.
My argument didn't actually depend on efficient-market assumptions. When I was first drafting my initial comment, I thought I would need such assumptions (as you had suggested them earlier), but then I talked myself out of this. Here's how.
Suppose the market is inefficient: i.e., suppose other schools often opt for suboptimal candidates and candidates often opt for suboptimal schools. Still, there will be a certain subset of applicants that you could attract to UCR, and there would be certain statistical cues as to their potential success. If your committee exhausts the informational value of those cues, then there will be no remaining correlates of success among the students who actually come. If your committee doesn't exhaust the informational value of those cues, then there will be remaining correlates, and whatever those correlates are, your committee would have been better served to place more value on them. None of this depends on the rest of the market being efficient. (There are of course the standard Humean uniformity-of-nature assumptions that underlie any inductive argument, and also your assumption that GPA at UCR is a good measure of "success", but, as far as I can tell, there are no strong idealizing assumptions about market efficiency.)
I take your points about noise and practical incommensurability across subfields to be akin to the reasoning I suggested above: you think your enrolled students are a *close-enough-to-random* sampling of your applicants that, if senior-GPA was correlated with potential success among applicants, you'd also expect it to be a correlate in this semi-random sample.
I'm still not convinced. Here's another way of explaining why.
Suppose senior-GPA really is a fairly strong correlate of potential success. But suppose your admissions committee, perhaps in response to your urging, has actually been *over-valuing* senior-GPA. Then you would expect your committee to have admitted some comparatively poor students on the basis of their deceptively high GPAs, so, among your students, there would actually be a *negative* correlation between senior-GPA and success in grad school. Depending on the degree to which your committee overvalued GPA, you might expect this negative correlation to show up in your medium-sized sample even if there was quite a bit of noise in your selection process. This example shows that, even in a noisy selection process, there's no strong reason to expect that the positive correlates of success among applicants will also be positive correlates of success among the students who actually come to UCR, especially when you already know that the admissions committee bases its decisions (in part) on those correlates.
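(If you want to see this effect without any algebra, here's a toy simulation -- all numbers invented -- in which GPA genuinely predicts success in the full applicant pool, but a committee that overweights GPA admits a class within which the correlation shrinks to roughly zero, and can flip negative with heavier overweighting:)

```python
# Toy simulation: GPA genuinely predicts success in the applicant
# pool, but a committee that overweights GPA admits students among
# whom the GPA/success correlation shrinks or even flips sign.
# All numbers are invented for illustration.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
gpa = rng.normal(0, 1, n)       # standardized senior-year GPA
other = rng.normal(0, 1, n)     # letters, writing sample, etc.
success = 0.5 * gpa + 0.5 * other + rng.normal(0, 1, n)

# The committee's score overweights GPA (2:1) and admits the top 5%.
score = 2.0 * gpa + other
admitted = score > np.quantile(score, 0.95)

print("corr(GPA, success), all applicants:",
      round(float(np.corrcoef(gpa, success)[0, 1]), 2))
print("corr(GPA, success), admitted only:",
      round(float(np.corrcoef(gpa[admitted], success[admitted])[0, 1]), 2))
```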
I like the idea of the coin flip. Two pills, red and blue, could solve the admissions question. Big luck follows lucky grads. Unlucky applicants are really not so unlucky: 50% of philosophy PhD students drop out. Suddenly it doesn't depend on their GPAs, GREs, essays, statements, recommendations, etc.
Why would you use graduate GPA as your measure of "success" in a program? Why not something more telling of philosophical promise, like publications and/or job acquisition? Do you really just want to admit PhD students who will just be really good at getting all A's in graduate seminars?
Thanks for posting this, Eric, it's really interesting. I have done admissions for the past couple of years for our MA program, and there are some important differences both between us and a PhD program in terms of our admissions criteria and how to operationalize "success in the program" -- but without having done any real statistical analysis, informally crunching the data I've found that verbal GRE percentile correlates much more strongly with success than does quantitative, analytic, or overall GRE percentile. Which is to say that it does correlate to some apparently significant extent, whereas those other numbers don't appear to do so at all. That was a surprise to me, since I had an armchair theory that quantitative GRE would be a better predictor of success, and it turns out in our case that it's not at all, so I'm interested to learn that you didn't find so either.
One additional worry about the GPA measure -- at our university, there are no official "minus" and "plus" grades (I believe that's set to change soon, though not sure when). I don't know how many other institutions are like that, but if there are enough of them we'd expect that to munge up any significance in the difference between a 3.7 and a 3.9.
Thanks, Geoff, for that confirmatory data about verbal GRE!
In my admissions experience, most U.S. universities do use pluses and minuses, but there are a minority that don't (maybe 10-15% among our applicant pool). I find it quite annoying when they don't, since I think of the A vs. A-minus difference as very significant as an indicator of PhD-program readiness!
>I've found that verbal GRE percentile correlates much more strongly with success than does quantitative, analytic, or overall GRE percentile
I believe it works for MA students only. Verbal is how many words you know. Quantitative is how you operate with these words. In relation to sports, an MA is a sprint. A PhD is a marathon.
It is certainly possible that the verbal GRE predicts performance in the first couple of years of coursework better than it predicts completing an excellent dissertation. Right now, I have no real quantitative data on the latter.
I call shenanigans on Anon 2/25 @ 11 pm saying, re the GRE, "Verbal is how many words you know. Quantitative is how you operate with these words."
Quantitative is nothing more than high school geometry and algebra. If you've memorized the relevant set of formulas (area of a circle, volume of a sphere, the Pythagorean theorem, etc.), you're golden. If you haven't, you're not.
The only thing, though, that makes verbal something other than "how many words you know" is the reading passage sections. Those typically require you to be able to detect basic implications and to draw basic inferences.
As for senior year GPA, that seems highly questionable to me. But then again I failed several of my senior year courses because I had a Non-Academic-Life-Episode (thus dragging my overall GPA down to 3.36; I like to claim that I must be the only member of Phi Beta Kappa with a 3.3 GPA). Yet the two times I've taken the GRE I've been in the 97-98th percentile in verbal (combined score in the 1400s on the old scale; so, mid-600 quant scores).
Interesting findings!
But did your study account for the fact that one major difference between the Quantitative and Verbal sections of the GRE is that the Quantitative section is not really something one can “study” for in the same way that one can study for the Verbal section?
For example, many people study an enormous number of words for months before taking the GRE. It is undeniable that this will help them on the Verbal section of the exam. Now, certain questions on both the old and the new GRE are not really oriented toward critical thinking -- and indeed critical thinking often won't make any difference in getting such questions right! For example, suppose you are given a fill-in-the-blank question in the Verbal section where you have to pick two out of six words such that filling in the blank with either word results in the same overall meaning for the sentence. Now, if one doesn't know any of the words on the list, then critical thinking will be of little use for getting the question right. However, if one happened to have studied, say, all six words on the list for a month before taking the exam, then one would likely get the question right.
In contrast, the Quantitative section tests one’s ability to apply general mathematical principles in a wide variety of situations. And often, getting a question right turns on "seeing" the problem in the correct way. I suppose one could practice a bunch of questions to learn the "tricks of the trade," but if one doesn't have a certain level of preexisting mathematical competence, I'm not sure there is much that endless studying can do.
What does this mean for the predictive value of the GRE's Verbal section score for success in UCR’s philosophy graduate program? Your particular findings are one possible conclusion. But another is that people who do well on the Verbal section might also be the very test-takers who prepared the most for the GRE, which might simply mean that they are people who prepare the most for things in general, including, of course, for their exams and coursework in UCR’s philosophy program. If this is true, then the Verbal GRE might be a more reliable indicator for one’s ability to prepare well for challenging situations, rather than for one’s innate intelligence or cleverness.
It seems to me that there is a limit as to how much creativity can be required of students in their philosophy midterm exams! If so, then the ability to prepare well for the tests is probably a better predictor of success on these exams than one's overall ability to be philosophically creative!
Could it then be the case that the people who don’t do well at UCR and also have low GRE Verbal scores simply are not good at preparation? If properly taught, I wonder how they would fare in the graduate program.
One final thought. Some people clam up when taking timed exams. This is true for some people when taking both the GRE and also exams at university. Even on a section such as the GRE Verbal section where time is less of an issue than it is for the Quantitative section, a distraught mind can really wreak havoc on one's scores! It would be interesting to see how these students fare AFTER they are done taking classes and done with their qualifying exams.
As always Professor, a fascinating post! Thanks for sharing your results!
Anon Dec 18: Interesting thoughts, thanks! Is there publicly available evidence for what you say about there being less time pressure on the verbal section and studying being more profitable in improving one's verbal than one's quantitative scores? I can see how both might be relevant factors if the differences between the sections are large in these respects.
Sounds like maybe you should let in a couple of 3.0-GPA students with high GREs and see what happens!
C: I have been collecting some data on how well GRE vs. GPA predict success in our program, so we'll see!