One of the main criticisms of international rankings is that they measure research quality rather than teaching quality. This is especially the case for the Shanghai Jiao Tong Ranking. The THES Ranking uses proxies such as employer surveys, student–staff ratios and the number of international students to indicate education quality. The best-known national university ranking is probably that of the US News and World Report. However, its proxies for educational quality (such as selectivity) cannot be applied in a standardised global setting.
The most ambitious project to date to rank universities on education quality is the OECD's plan to rank them according to learning outcomes. Andreas Schleicher, the OECD's head of education research, explained this in the Economist in November last year:
“Rather than assuming that because a university spends more it must be better, or using other proxy measures for quality, we will look at learning outcomes”
Just as the OECD assesses primary and secondary education in its PISA assessment, it will sample university students to see what they have learned. Once enough universities are taking part, it may publish league tables showing where each country stands, just as it now does for compulsory education. This is of course a very ambitious project, if not over-ambitious. At the same time, the OECD is probably one of the few international organisations with the capacity and experience to assess educational outcomes at a (near) global level. Or not?
We at CCAP have long complained that most rankings of colleges are largely based on inputs used in providing services, such as the faculty–student ratio or the average SAT score of entering students. Better would be to evaluate schools on either consumer satisfaction (the way we evaluate most other things) or on the post-graduate achievements of the products of the education: the alumni.
The data for measuring consumer satisfaction come from the popular website ratemyprofessors.com. Today in their blog, CCAP research associates show what the ranking would look like if it were based solely on the data from this site:
rankings are calculated by taking a weighted average of all faculty members at a university in the categories of: overall quality, average easiness (with ease treated as a negative quality), and average “hotness.”
And what is the result for the US national universities?
1. Boston College
4. California Tech
7. U. of Chicago
9. Wake Forest
10. Brigham Young University (BYU)
13. U. of Pennsylvania
What surprised me about the list is that it consists only of private universities. None of the prestigious public universities such as Berkeley, Michigan, Wisconsin, Virginia, North Carolina, etc., appear in the list. In addition to the ranking for national universities, they also present a top 15 of liberal arts colleges. Interestingly, these show higher consumer satisfaction than the research universities.
Of course, ratemyprofessors.com has received a lot of criticism, and using it for a ranking raises many problems. Nevertheless, the idea of including student satisfaction as a measure of quality is not that strange. Student surveys have been used by the Dutch magazine Elsevier in its rankings and also form part of the basis of the CHE/Die Zeit rankings in Germany. And after all, student satisfaction has been applied – albeit indirectly – as a measure since the very beginnings of the university:
One of the oldest universities in the world, the University of Bologna, chartered in 1158 by Frederick I Barbarossa, was designed to cater to student desires. Students collectively hired professors, set tuition rates, and evaluated and even dismissed low-quality instructors. While we have moved away from that model (perhaps somewhat for the better), student instruction remains the primary function of a university. Faculty research and other things distract from this goal. Our finding that students at liberal arts colleges are more satisfied with their professors than those at national research institutions is not surprising. Perhaps it is time that our national research universities shifted priorities more toward satisfying high-paying customers. Since students are the main consumers of the university product, any ranking of schools should include a student satisfaction variable.
It will be interesting to see how the inclusion of teaching and learning in international rankings develops. And in the longer term, it will be even more interesting to see how this changes universities. Will it indeed cause a shift in priorities toward satisfying (high-paying) customers?