National Research Council Releases Decennial Ranking of Ph.D. Programs

By DAVID HAGMANN

Published: October 20, 2010

While Fordham’s undergraduate colleges are rising in rank and renown, its graduate programs fared poorly in the National Research Council’s (NRC) ranking of Ph.D. programs.

Even in the best-case reading of a complicated methodology, only the theology department managed to break into the top third of evaluated programs. Biology, economics, English, history and sociology ended up in the bottom quarter.

The previous NRC evaluation was released 15 years ago and provided a straightforward ranking similar to the U.S. News & World Report ranking for undergraduate programs. This time, however, the NRC chose a different methodology, which is explained in a nearly 200-page addendum with 21 variables gathered for each program.

As one might expect, many universities are unhappy with their relative place and have taken issue with the methodology. Any evaluation is bound to have minor flaws, but critics say there were substantial issues with the system. The data collection process was so complicated, claims the University of Washington’s computer science & engineering department, that staff made mistakes when filling out the forms, leading to a worse placement for the program.

Dominick Salvatore, Ph.D., director of the Ph.D. program in economics at Fordham, defended the economics department, which scored near the very bottom: out of the 118 programs evaluated, it ranked 97-115. Salvatore said, “Over one hundred [economics programs] refused to participate in the evaluation. Do you think those over 100 that chose not to be evaluated are the strong departments or the weak departments?”

Participation in the NRC evaluation is voluntary, but gives programs an idea of where they stand and what they must do to improve in order to remain competitive with other programs.

Two different methods were used to determine the weighting for each variable. In the survey-based method, the NRC asked faculty in the field what they thought the most important variables were. Surveys to collect data were given to the institution, each department, the faculty and Ph.D. students. The surveys added up to 100 pages, and faculty were asked, for example, how many committees they had served on between 2001 and 2006.

The second method was regression-based: faculty were asked to rank a sample of programs, and the NRC then calculated the weights that would most closely reproduce those rankings. Once the weights were estimated, the NRC ran a computer simulation that produced 500 rankings. The 25 best and 25 worst runs for each program were discarded, and the remaining 450 ranks provide the range for a program. If, for example, a department’s best remaining rank was 3 and its worst was 12, its ranking range would be 3-12. This ranking system, while not as succinct as the ranking of undergraduate programs, reflects the uncertainty inherent in any such evaluation.
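The trimming step described above can be illustrated with a short sketch. This is not the NRC’s actual code; the simulated ranks here are made up, and only the trim-and-report logic mirrors the description (500 runs, drop the 25 best and 25 worst, report the span of the remaining 450):

```python
import random

def trimmed_rank_range(ranks, n_discard=25):
    """Drop the n_discard best and n_discard worst ranks, return the span of the rest."""
    ordered = sorted(ranks)                        # best (lowest) rank first
    kept = ordered[n_discard:len(ordered) - n_discard]
    return kept[0], kept[-1]                       # e.g. (3, 12) reads as "3-12"

# Fake 500 simulation runs for one hypothetical program.
random.seed(0)
simulated_ranks = [random.randint(1, 20) for _ in range(500)]
low, high = trimmed_rank_range(simulated_ranks)
print(f"ranking range: {low}-{high}")
```

Reporting a trimmed range rather than a single number is what makes the results look less tidy than a U.S. News-style list, but it keeps outlier simulation runs from defining a program’s placement.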

Other variables that were taken into consideration for the ranking are: publications per faculty, citations per publication, percent faculty with grants, percent faculty that is interdisciplinary, percent non-Asian minority faculty, percent female faculty, awards per faculty, average GRE score, percent of first-year students with full support, percent of first-year students with external funding, percent non-Asian minority students, percent female students, percent international students, average number of Ph.D.s granted, percent who complete their Ph.D. within six years, average time to degree, percent of students who find employment in academic positions, whether the university provides student work space and health insurance, and the number of student activities offered.

The low ranking is “primarily because of [a lack of] publications,” Salvatore said. He argues that the methodology’s weighting of factors does not adequately evaluate the program in its entirety. “The problem is that publications and citations together account for almost half of the ranking,” Salvatore said. The other 19 variables combined make up the other half. The weighting of factors for each discipline was determined by the faculty who responded to the surveys, so it reflects the view of the profession at large.

Salvatore said the program at Fordham does a good job in applied fields of helping graduates find employment outside of academia, where, according to Salvatore, they keep up with or do better than the graduates of higher-ranked schools. “We want to do what we do well. And so we feel that we should be here and we deserve to be here,” Salvatore said.

Salvatore said, “We know what we are, we want to improve, and we are not afraid to be evaluated.” If the graduate school wants to imitate the rise of the undergraduate program, the departments have plenty of time for that, as the next ranking is unlikely to be released before 2020.