Comments on The Splintered Mind: "The Surprising and Disappointing Predictors of Success in UCR's Philosophy PhD Program"

Eric Schwitzgebel (2016-01-12): C: I have been collecting some data on how well GRE vs. GPA predicts success in our program, so we'll see!

C (2016-01-09): Sounds like maybe you should let in a couple of 3.0-GPA students with high GRE scores and see what happens!

Eric Schwitzgebel (2012-12-20): Anon Dec 18: Interesting thoughts, thanks! Is there publicly available evidence for your claims that there is less time pressure on the verbal section and that studying improves one's verbal scores more than one's quantitative scores? I can see how both might be relevant factors if the differences between the sections are large in these respects.

Anonymous (2012-12-18): Interesting findings!
But did your study account for the fact that one major difference between the Quantitative and Verbal sections of the GRE is that the Quantitative section is not really something one can "study" for in the way one can study for the Verbal section?

For example, many people study an enormous number of words for months before taking the GRE. It is undeniable that this helps them on the Verbal section of the exam. Now, certain questions on both the old and the new GRE are not really oriented toward critical thinking, and indeed critical thinking often makes no difference in getting such questions right. For example, suppose a fill-in-the-blank question in the Verbal section asks you to pick two of six words such that filling in the blank with either word yields the same overall meaning for the sentence. If one doesn't know any of the words on the list, then critical thinking will be of little use in getting the question right. But if one happened to have studied all six words on the list in the month before taking the exam, then one would likely get it right.

In contrast, the Quantitative section tests one's ability to apply general mathematical principles in a wide variety of situations, and getting a question right often turns on "seeing" the problem in the correct way. I suppose one could practice a bunch of questions to learn the tricks of the trade, but without a certain level of preexisting mathematical competence, I'm not sure there is much that endless studying can do.

What does this mean for the predictive value of the GRE Verbal score for success in UCR's philosophy graduate program? Your particular findings are one possible conclusion.
But another is that people who do well on the Verbal section might also be the test-takers who prepared the most for the GRE, which might simply mean that they are people who prepare the most for things in general, including, of course, their exams and coursework in UCR's philosophy program. If this is true, then the Verbal GRE might be a more reliable indicator of one's ability to prepare well for challenging situations than of one's innate intelligence or cleverness.

It seems to me that there is a limit to how much creativity can be required of students on their philosophy midterm exams! If so, then the ability to prepare well for tests is probably a better predictor of success on these exams than one's overall philosophical creativity!

Could it then be that the people who don't do well at UCR and also have low GRE Verbal scores are simply not good at preparation? If properly taught, I wonder how they would fare in the graduate program.

One final thought. Some people clam up when taking timed exams, on the GRE and on university exams alike. Even on a section such as the GRE Verbal, where time is less of an issue than on the Quantitative section, a distraught mind can really wreak havoc on one's scores! It would be interesting to see how these students fare *after* they are done taking classes and done with their qualifying exams.

As always, Professor, a fascinating post! Thanks for sharing your results!

Anonymous (2012-03-17): I call shenanigans on Anon 2/25 @ 11 pm saying, re the GRE, "Verbal is how many words you know.
Quantitative is how you operate with these words."

Quantitative is nothing more than high-school geometry and algebra. If you've memorized the relevant formulas (area of a circle, volume of a sphere, the Pythagorean theorem, etc.), you're golden. If you haven't, you're not.

The only thing that makes Verbal something other than "how many words you know" is the reading-passage sections. Those typically require you to detect basic implications and to draw basic inferences.

As for senior-year GPA, that seems highly questionable to me. But then again, I failed several of my senior-year courses because I had a Non-Academic Life Episode (dragging my overall GPA down to 3.36; I like to claim that I must be the only member of Phi Beta Kappa with a 3.3 GPA). Yet both times I've taken the GRE I've scored in the 97th-98th percentile in verbal (combined score in the 1400s on the old scale; so, mid-600s quant scores).

Eric Schwitzgebel (2012-02-26): It is certainly possible that the verbal GRE predicts performance in the first couple of years of coursework better than it predicts completing an excellent dissertation. Right now, I have no real quantitative data on the latter.

Anonymous (2012-02-25): > I've found that verbal GRE percentile correlates much more strongly with success than does quantitative, analytic, or overall GRE percentile

I believe this works for MA students only. Verbal is how many words you know; Quantitative is how you operate with those words. To use a sports analogy, an MA is a sprint.
A PhD is a marathon.

Eric Schwitzgebel (2012-02-22): Thanks, Geoff, for that confirmatory data about verbal GRE!

In my admissions experience, most U.S. universities do use pluses and minuses, but a minority don't (maybe 10-15% of our applicant pool). I find it quite annoying when they don't, since I think of the difference between an A and an A-minus as a very significant indicator of PhD-program readiness!

Geoff (2012-02-22): Thanks for posting this, Eric; it's really interesting. I have done admissions for the past couple of years for our MA program, and there are some important differences between us and a PhD program, both in our admissions criteria and in how to operationalize "success in the program." But without having done any real statistical analysis, informally crunching the data I've found that verbal GRE percentile correlates much more strongly with success than does quantitative, analytic, or overall GRE percentile. Which is to say that verbal does correlate to some apparently significant extent, whereas those other numbers don't appear to correlate at all. That was a surprise to me, since I had an armchair theory that quantitative GRE would be the better predictor of success; in our case it turns out not to be a predictor at all, so I'm interested to learn that you didn't find it to be one either.

One additional worry about the GPA measure: at our university, there are no official "minus" and "plus" grades (I believe that's set to change soon, though I'm not sure when).
I don't know how many other institutions are like that, but if there are enough of them, we'd expect that to munge up any significance in the difference between a 3.7 and a 3.9.

Anonymous (2012-02-20): Why would you use graduate GPA as your measure of "success" in a program? Why not something more telling of philosophical promise, like publications and/or job placement? Do you really just want to admit PhD students who will just be really good at getting all As in graduate seminars?

Anonymous (2012-02-19): I like the idea of flipping a coin: a red pill and a blue pill to settle the admissions question. Big luck follows the lucky grads, and the unlucky applicants are really not so unlucky: 50% of philosophy PhD students drop out. Suddenly it doesn't depend on their GPAs, GREs, essays, statements, recommendations, etc.

Justin Fisher (2012-02-17): My argument didn't actually depend on efficient-market assumptions. When I was first drafting my initial comment, I thought I would need such assumptions (as you had suggested them earlier), but then I talked myself out of this. Here's how.

Suppose the market is inefficient: i.e., suppose other schools often opt for suboptimal candidates and candidates often opt for suboptimal schools. Still, there will be a certain subset of applicants that you could attract to UCR, and there would be certain statistical cues as to their potential success.
If your committee exhausts the informational value of those cues, then there will be no remaining correlates of success among the students who actually come. If your committee doesn't exhaust the informational value of those cues, then there will be remaining correlates, and whatever those correlates are, your committee would have been better served to place more value on them. None of this depends on the rest of the market being efficient. (There are, of course, the standard Humean uniformity-of-nature assumptions that underlie any inductive argument, and also your assumption that GPA at UCR is a good measure of "success," but, as far as I can tell, there are no strong idealizing assumptions about market efficiency.)

I take your points about noise and practical incommensurability across subfields to be akin to the reasoning I suggested above: you think your enrolled students are a *close-enough-to-random* sampling of your applicants that, if senior GPA were correlated with potential success among applicants, you'd also expect it to be a correlate in this semi-random sample.

I'm still not convinced. Here's another way of explaining why.

Suppose senior GPA really is a fairly strong correlate of potential success. But suppose your admissions committee, perhaps in response to your urging, has actually been *over-valuing* senior GPA. Then you would expect your committee to have admitted some comparatively poor students on the basis of their deceptively high GPAs, so, among your students, there would actually be a *negative* correlation between senior GPA and success in grad school. Depending on the degree to which your committee overvalued GPA, you might expect this negative correlation to show up in your medium-sized sample even if there was quite a bit of noise in your selection process.
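The selection effect at issue here is easy to see in a quick simulation (a hypothetical sketch with invented numbers, not anything computed from UCR's data): if GPA is a noisy sign of ability, and the committee admits the top slice ranked on GPA, the GPA-success correlation among admittees shrinks sharply even though it is robust among applicants as a whole.

```python
import random
import statistics

random.seed(0)

def corr(xs, ys):
    """Pearson correlation of two equal-length lists."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Each applicant has an unobserved ability; GPA and eventual grad-school
# success are both noisy readings of it.
applicants = []
for _ in range(20000):
    ability = random.gauss(0, 1)
    gpa = ability + random.gauss(0, 1)
    success = ability + random.gauss(0, 1)
    applicants.append((gpa, success))

# The committee admits the top 5%, ranked on GPA alone.
admitted = sorted(applicants, key=lambda a: a[0], reverse=True)[:1000]

r_all = corr([g for g, s in applicants], [s for g, s in applicants])
r_adm = corr([g for g, s in admitted], [s for g, s in admitted])
print(f"corr(GPA, success), all applicants: {r_all:.2f}")
print(f"corr(GPA, success), admitted only:  {r_adm:.2f}")
```

The correlation among admittees lands much closer to zero than the correlation among applicants, which is why the enrolled sample cannot straightforwardly answer questions about which cues predict success in the applicant pool.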
This example shows that, even in a noisy selection process, there's no strong reason to expect that the positive correlates of success among applicants will also be positive correlates of success among the students who actually come to UCR, especially when you already know that the admissions committee bases its decisions (in part) on those correlates.

Eric Schwitzgebel (2012-02-17): Justin: I don't entirely disagree with your final upshot, and I think my thinking on it is clearer now than when I started this project. However, I continue to think that GPA should have been predictive even if we are weighting it exactly right in admissions, because of the large failures of the efficient-markets interpretation of grad-school admissions.

Here's one assumption of the efficient-markets hypothesis: that students will always choose to go to the most selective school to which they are admitted. Suppose that we admit 20 students across the range of expected quality from minimum-for-UCR to Princeton/NYU quality. Any student who turns down a more selective school for UCR would then be expected to perform better than others in the entering class. Predictive factors like GPA should then be expected to differentiate performance. And while it is unusual for students to turn down top-10 schools for UCR, students often do turn down somewhat more prestigious schools for UCR, either for reasons of fit with the strengths of our program or for personal reasons. Maybe this is especially true for UCR, given our unusual profile of faculty strengths, blending analytic and Continental.

Another idealization failure of the rational-markets model is this: we consider area of interest an important factor in admissions.
If there are a dozen top-notch applicants in one area, we can't let them all in, so we start to apply stricter criteria. Also, since it is sometimes hard to compare writing samples across radically different subareas (e.g., technical analytic metaphysics vs. Nietzsche interpretation), a tempting approach is to say something like, "Well, let's admit three students of broadly this sort, and two of broadly this sort, and..." and then make comparisons within those subgroups. Differential performance between the subgroups would be likely to follow, and once again factors like GPA should start to re-emerge as predictors.

One would also expect imperfection, noise, and luck in the process -- this seems very hard to deny once one has been in the admissions room! -- so any noise that results in what should have been below-cutoff candidates being admitted should also produce inefficiencies that reveal the power of the predictors. And I'm inclined to think that the noise in the process is pretty large.

Justin Fisher (2012-02-16): Regarding (1): The evidence fails to support your claim that skewing the committee more toward valuing GPA would improve the success of your admitted candidates. On the contrary, your data suggest that, so long as the committee retains its relative valuations of the other parameters, increasing its valuation of GPA would tend to drive its selections closer to whatever the mean quality of applicants is (which is presumably *not* the direction it wants to go).
Your data *do*, however, justify pushing your committee to take GRE scores more seriously, which entails at least sometimes preferring high-GRE applicants who would (barely) have been wait-listed over low-GRE applicants who would (barely) have been accepted by the committee's old standards.

I'm not sure I understood your (2). Was your thought something like the following? "I know we've admitted a quite diverse group of students with quite little regard to their GPA. So this sample should have been random enough that, if GPA were a good predictor for applicants in general, it should still have been correlated with success even in this not-really-random sampling of applicants." If you're confident of that, then, yes, this data should make you question your earlier high valuation of GPAs. But I'm not sure how you *could* be confident of that. Surely your committee has been taking GPA into account, along with other factors, right? If so, then it should be possible both that (a) GPA is correlated with potential success *among applicants*, and yet (b) your committee has been doing well enough at reading and combining various correlates of success that GPA is *not* correlated with success *among accepted students*. So I don't think your failure to observe a correlation in (b) is evidence that there wasn't a correlation in (a).

The caveat *I* had thought about adding to my earlier argument is this: if a study like yours reveals no correlations, that really just means that the committee has been consistently reading all the cues *of a certain level of quality*; it in no way guarantees that they've been consistently reading all the cues of *the top level of quality*. Data of non-correlation are even compatible with the claim that your committee was consistently choosing the *worst* students in the applicant pool!
(Of course, we have background theoretical reasons to think that the sorts of criteria they used probably were correlated with *high* quality, but that is not in any way confirmed or disconfirmed by non-correlations in the data you gathered.)

I take the upshot to be this: when you see that feature X was correlated with success among the people you chose to admit, you should start to value X more. If you don't see any correlations, that means you've been exhausting the informational value of all the cues of whatever level of quality you've been getting. If you're not happy with that level of quality, you'll probably have to tweak multiple valuations at once. Or you could spend a few years admitting a truly random sample to serve as an unbiased dataset for empirically figuring out what valuations to use in the future.

Eric Schwitzgebel (2012-02-16): Justin, that is very nicely put. Thanks for helping me conceptualize it more clearly!

Two minor caveats: (1.) I tend to be the senior-year-philosophy-GPA pusher on the committee, so my antecedent opinion would be that the average admissions committee would still be undervaluing it as an indicator. But maybe not. (2.)
Your background assumption is (as you acknowledge) pretty idealized, and to that extent I would still have expected GPA to be predictive.

Justin Fisher (2012-02-16): Hi Eric, thanks for another interesting, thought-provoking post! Here's a slightly different way of spinning points in the ballpark of ones you've made.

You started off asking: *among students who apply to UCR*, which features would be most predictive of success? (Where you've taken "success" to be measured, well enough, by GPA in grad school.)

You gathered data that would be directly relevant to answering a different question: *among the students who were both accepted to and chose to attend UCR*, which features are most predictive of success?

Of course, your sample is not at all a random sampling of the population of applicants you initially asked about, not least because the only way somebody could make it into your sample was by having a high GPA and/or some other convincing indicator(s) of potential success.

So your sample is a quite poor one for answering your initial question, since it can't detect any valid signs of success that your committee already used to determine who would be in the sample.

However, your sample might still help answer other questions in the ballpark, especially questions about whether your admissions committee under- or over-valued any cues.

If your committee were doing its job perfectly (assuming there were abundant candidates of every quality), then you would expect there to be no reliable signs of which students would do better.
(If there were such signs, your committee should have used them to reject students who predictably would do poorly, and instead accept wait-listees who would predictably do better.) So, if your study found *no* correlations, that would be heartening evidence that your committee hasn't been missing out on valuable informational cues. It seems, for example, that your admissions committees have been valuing GPA close to right.

When you *do* find correlations, this indicates that your committee was probably using suboptimal criteria: had it made better use of these cues, it could have admitted more successful students. So, for example, it seems that your committees must have been undervaluing the relevance of verbal GRE scores, and would probably have done better to reject some low-GRE candidates and admit some higher-GRE wait-listees instead.

This doesn't mean that you were wrong to think GPA is more relevant than GRE at predicting how *applicants* will do -- it just means that your committee already correctly values GPA, but not GRE.

Eric Schwitzgebel (2012-02-16): Anon: Yes, I messed around with multiple regression, but with so few datapoints it's not really interpretable, I think.

David: Yes, a marker for wisdom would be very useful. Perhaps count up the number of moments in which the person stands lost in thought, gazing vaguely into the distance? Socrates would have done well by this metric, I'm told. I agree somewhat with the GIGO observation, but it's interesting to play with the numbers anyway.

Prasad: Yes, I plead guilty. That's my bias. Has ETS looked at philosophy in particular, controlling for GPA? But ETS is also a biased source.
As Seth points out in an earlier comment, and as I have seen myself in UCR data I looked at some years ago, the SAT is often not very predictive of undergraduate performance when other relevant factors are properly factored in. In fact, in an analysis of UCR GPA that I did on all then-enrolled UCR students circa 1999, I found that SAT was entirely null as a predictor of performance once one particular version of the high-school GPA measure was included in the model (which version, I forget, but there are several).

prasad (2012-02-16): It's amusing just how much good academics profess to hate standardized tests. Even when the numbers (and surely you know ETS has far more informative and higher-N tests than this one) show that three-hour fill-in-the-circles quasi-IQ tests are highly informative, the bias of the poster and the comments is to try to find as many loopholes as possible...

David Glidden (2012-02-16): Tony La Russa played this predictability game decades ago with baseball statistics and found some predictive factors, but in particular situations at the plate or on the mound they proved, of course, unreliable, because, as Aristotle sagely noted, one cannot generalize over particulars. Many of your stats are obviously unreliable, since GPA differs so widely from school to school and professor to professor. Verbal aptitude seems more promising, since most of philosophy is a mere vocabulary game, inventing alternative meanings and distorting perfectly good words.
And "success" ought to take long-term employment into consideration -- so you need a wider metric.

My father was once a professor of statistics and loved the phrase "garbage in, garbage out." But rather than take his cynical stance toward all your efforts, I suggest you find a marker for wisdom instead.

Anonymous (2012-02-16): Since you hypothesize that variables such as undergrad GPA, letters, etc. are greater predictors than GRE scores, it would be interesting (given a larger sample) to run a multivariate regression model to see if the significant variance explained by GRE scores holds up when controlling for those other factors. It may be all of those factors together and not just GRE (I hope, anyway). Good stuff! Very interesting!

Eric Schwitzgebel (2012-02-16): @ Dan: Interesting! Thanks for that. Yes, if I follow up on this with a larger dataset, it would be interesting to break the results down by subfield to see if the predictors play out differently in different subfields.

Eric Schwitzgebel (2012-02-16): ToneMasterTone: Kant doesn't seem quite as modest about the value of his own work. I'm inclined to think that he sees himself as having repaired this situation.

Nicolas: Right. But when an admissions committee has 100-200 applications from students, most of whom seem to be strong students with good grades and good letters, what are we to do?
All else being equal, shouldn't we choose the 3.9 student over the 3.7 student? Should we just flip a coin? Of course we also look at the letters and the writing sample, and maybe those really are better predictors than GPA. They certainly play a big role in our thinking; they're not minor factors in the decision. I would like to explore the predictive value of letters and samples, but it's difficult to quantify! There are lots of excellent samples and letters, too.

Dissata: Yes, I've thought about polling professors for their opinions about the overall quality of students, the quality of dissertations, and the like. But I'm somewhat apprehensive about doing this, since if word got out that I was doing it, it could have negative repercussions for the student experience here -- especially because I think students sometimes profit from a fresh start with a new advisor who doesn't have preconceptions about their ability. I'd prefer there not to be a secret drawer containing professors' overall rankings of student quality.

Eric Schwitzgebel (2012-02-16): Thanks for all the comments, folks!
@ Seth: Right. See my FB comments.

@ Anon 7:42: Interesting point! I suspect your situation is somewhat unusual. But you're definitely right that there are weaknesses in GPA as a success measure. I chose it because it's easy to get and to quantify, and because I do think it has some validity. My *general* impression is that the stronger students tend to have higher GPAs, but there are definitely exceptions.

@ Jonathan: If I'm correctly understanding your selection-bias worry, it's the same worry I express as caveat 2.

Here's another way of thinking about the issue. If the PhD admissions market were perfectly efficient in terms of unidimensional overall quality, both of students and of grad programs, then one might expect no variable to be predictive: students relatively strong in one dimension would have to be relatively weak in some other dimension, or else they would be picked up by a higher-ranked school, and students relatively weak in one dimension who weren't also relatively strong in another would be rejected by UCR. So part of what I'm doing is looking for inefficiencies in the system that I can exploit, Moneyball-style.

I did look for interaction effects in multiple regressions, though with this few data points I'm not sure about the value of multiple regressions.
I found a couple of marginal interactions and one significant (p = .04) three-way interaction, with GRE being more predictive for students from elite undergrad institutions who have Master's training than for other students -- though I'm not sure what to make of that, or whether it's just noise, given the number of tests and the relatively high p-value.

Dan Bonevac (2012-02-16): Very interesting! I did something similar for our graduate program at Texas in the 1990s, but covering a longer period of time, and I came to similar conclusions. GREs were overall the most predictive factor; undergraduate GPA had a very low correlation.

I did find that this varied with the student's specialization. Verbal GRE mattered for students working in continental and non-technical analytic fields. Quantitative mattered for students in technical areas and in ancient philosophy. GPA mattered some for non-technical analytic work, but not for continental, technical areas, or history.

dissata (2012-02-16): I appreciate your article and your insight. However, a more interesting comparison for me would be between the "predictors" of graduate success and your (or other professors') anecdotal opinions of a student. Do students with a higher GRE score and undergraduate GPA, for example, seem to provide the most insight in class or write the most interesting papers? Do they seem the best read, or the most capable researchers? Etc.

Also, how does their graduate GPA compare to the overall quality of their thesis?
And how does that compare with their undergraduate GPA?
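A recurring theme in the thread is how little a correlation computed from a PhD-program-sized sample can be trusted. The point can be sketched with a quick simulation (the population correlation of 0.3 and the sample size of 30 are invented for illustration; none of these numbers come from UCR's data):

```python
import random
import statistics

random.seed(1)

def corr(xs, ys):
    """Pearson correlation of two equal-length lists."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

TRUE_R, N, TRIALS = 0.3, 30, 2000

rs = []
for _ in range(TRIALS):
    xs = [random.gauss(0, 1) for _ in range(N)]
    # Construct y to share just enough variance with x that the
    # population correlation equals TRUE_R.
    ys = [TRUE_R * x + (1 - TRUE_R**2) ** 0.5 * random.gauss(0, 1) for x in xs]
    rs.append(corr(xs, ys))

rs.sort()
lo, hi = rs[len(rs) // 40], rs[-len(rs) // 40]
print(f"middle 95% of observed r values: {lo:.2f} to {hi:.2f}")
```

Repeated samples of 30 from the same population yield observed correlations ranging from roughly zero up to roughly twice the true value, which is why a predictor can look significant in one cohort and vanish in the next.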