Rankings and Reality

Summer holidays are over. In the global field of higher education, this also means that it is ranking season. Last month the Shanghai ranking appeared, this week the QS World University Rankings were revealed, and in two weeks the all-new Times Higher Education (THE) ranking will be published. Ranking season also means discussions about the value of rankings and about their methodologies. Two points of critique are addressed here: the volatility of (some) rankings and the overemphasis on research in assessing universities’ performance.

Volatility and stability in international rankings 

This year’s discussion has gotten extra fierce (and nasty now and then) because of THE’s decision to part with consultancy agency QS and to collaborate with Thomson Reuters, a global research-data specialist. The previous joint THE/QS rankings usually received considerable media attention. This was not just because their methodology was heavily criticized (and rightly so), but also because this disputed methodology led to enormous fluctuations in the league tables from year to year. The critique prompted THE to join forces with Thomson Reuters, while QS continues its own ranking.

Although the various rankings differed in their methodology, they all seemed to agree on two things: the hegemony of US universities and the undisputed leadership of Harvard. This week’s QS rankings again showed the volatility of their methodology. For the first time Cambridge beat Harvard, and for the first time the top ten is not dominated by US universities: it is now occupied by five US universities and five UK universities.

The Shanghai ranking, on the other hand, shows much less fluctuation. This probably reflects reality better, but it also makes the ranking less sensational and therefore less attractive for wide media coverage. The two graphs below clearly show the difference between the stable Shanghai rankings and the volatile QS rankings for a selection of Dutch universities.

[Figure: Shanghai and QS ranking positions of four Dutch universities over the past six years]

The graphs show the positions over the past six years for the four Dutch universities that appear in the top 100 of the Shanghai and/or QS ranking. To put the absolute positions in perspective: the Shanghai ranking only lists individual positions up to rank 100 and groups institutions beyond that (which also explains the relatively steep drop of Erasmus University in the 2006 ranking). Although Amsterdam has remained fairly stable in the rankings, Leiden and Utrecht show considerable fluctuation, much more than their real quality would justify.

And for those who think this is volatile: it can get much worse. In a 2007 paper, Simon Marginson lists dozens of cases where drops or rises of more than 50 positions (sometimes even up to 150 positions) occurred within a single year. A case in point is the Universiti Malaya, which went from “close to world class” to “a national shame” in only two years…

It will be interesting to see in the coming years how the new THE/Thomson Reuters methodology works out in this respect. The Times Higher published its methodology this week. While the QS ranking bases its listing on only 6 indicators (with a 50% weighting going to reputational surveys), the new THE ranking takes into account 13 indicators (grouped in five categories). Considering this higher number of indicators, and considering that the weight of reputational surveys is significantly lower, it is likely that fluctuations will also be smaller than in the QS ranking. Time will tell…
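To illustrate why the weight given to a noisy reputational survey matters for volatility, here is a minimal simulation sketch in Python. Everything in it is hypothetical (the universities, the noise levels, the exact weights); it is not the QS or THE methodology, only a toy model of “heavy survey weight” versus “weight spread over many indicators”.

```python
import random

# Illustrative sketch only: hypothetical universities, indicator noise levels and
# weights; this is not the actual QS or THE data or methodology.
random.seed(1)

N_UNIS = 200
# Each university gets a stable underlying quality that does not change between years.
qualities = [random.uniform(40, 100) for _ in range(N_UNIS)]

def composite_scores(survey_weight, n_other_indicators):
    """Composite scores for one 'ranking year': one noisy reputational survey plus
    several much less noisy indicators, with weights summing to 1."""
    other_weight = (1.0 - survey_weight) / n_other_indicators
    scores = []
    for quality in qualities:
        survey = quality + random.gauss(0, 15)   # reputational surveys fluctuate a lot
        others = sum(quality + random.gauss(0, 3) for _ in range(n_other_indicators))
        scores.append(survey_weight * survey + other_weight * others)
    return scores

def ranks(scores):
    """Map university index -> rank (1 = highest composite score)."""
    order = sorted(range(len(scores)), key=scores.__getitem__, reverse=True)
    return {uni: pos + 1 for pos, uni in enumerate(order)}

for label, survey_weight, n_other in [
    ("survey-heavy (50% survey, 5 other indicators)", 0.50, 5),
    ("survey-light (15% survey, 12 other indicators)", 0.15, 12),
]:
    year1 = ranks(composite_scores(survey_weight, n_other))
    year2 = ranks(composite_scores(survey_weight, n_other))
    avg_shift = sum(abs(year1[u] - year2[u]) for u in year1) / N_UNIS
    print(f"{label}: average year-to-year rank shift = {avg_shift:.1f} positions")
```

Under these made-up assumptions, the survey-heavy composite reshuffles the ranks between two “years” considerably more than the survey-light one, which is essentially the intuition behind expecting the new methodology to be more stable.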

Are international rankings assessing teaching quality?  

Another frequently voiced criticism of the existing international rankings is that they put too much emphasis on assessing research and neglect the teaching function of the university. Since the new THE ranking more than doubled the number of indicators, it is likely that the assessment will correspond better with the complex mission of universities.

If we look at the new methodology, this indeed seems to be the case. The teaching function now constitutes 30% of the overall score and is based on 5 indicators. In the QS ranking, it was based on only 2 indicators (an employer survey and the staff/student ratio).

The 5 indicators are:

  • Reputational survey on teaching (15%)
  • PhD awards per academic
  • Undergraduates admitted per academic (4.5%)
  • Income per academic (2.25%)
  • PhD and Bachelor awards
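To make the weighting concrete, here is a minimal sketch of how such a teaching pillar could be combined into a single score. The 15%, 4.5% and 2.25% weights are the ones listed above; the two weights that are not listed are placeholders chosen so that the pillar adds up to the stated 30%, and the example institution is entirely made up.

```python
# Minimal sketch of a weighted 'teaching' pillar computed from normalised
# indicator scores (0-100). The 15%, 4.5% and 2.25% weights come from the list
# above; the two marked as placeholders are assumptions that make the pillar
# total the stated 30% of the overall score.
TEACHING_WEIGHTS = {
    "reputational_survey_teaching": 0.15,
    "phd_awards_per_academic": 0.06,          # placeholder weight (not given above)
    "undergrads_admitted_per_academic": 0.045,
    "income_per_academic": 0.0225,
    "phd_and_bachelor_awards": 0.0225,        # placeholder weight (not given above)
}

def teaching_pillar(indicator_scores: dict) -> float:
    """Weighted sum of normalised indicator scores; contributes up to 30 points
    of a 100-point overall composite."""
    return sum(weight * indicator_scores[name]
               for name, weight in TEACHING_WEIGHTS.items())

# Hypothetical institution: a strong research reputation can lift the teaching
# survey score even when the other teaching indicators are middling.
example = {
    "reputational_survey_teaching": 90.0,
    "phd_awards_per_academic": 85.0,
    "undergrads_admitted_per_academic": 60.0,
    "income_per_academic": 55.0,
    "phd_and_bachelor_awards": 70.0,
}
print(f"teaching pillar contribution: {teaching_pillar(example):.1f} / 30")
```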

A closer look at these 5 indicators, however, raises the question of how much they are actually related to teaching.

  1. First of all, one can wonder whether a reputational survey really measures the quality of teaching or whether it is in reality just another proxy for research reputation. Colleagues and peers around the world often do have some idea of the quality of research at other institutions, but is it likely that they can seriously evaluate the teaching there? Apart from the institutions where they graduated or worked themselves, it is unlikely that they can give a fair judgment of teaching quality, particularly at institutions abroad.
  2. Two other questionable indicators of teaching quality are the number of PhDs awarded and the number of PhD awards per academic. In the Netherlands, as in many other countries in continental Europe and elsewhere, this says much more about the research quality and research intensity of an institution than about its teaching quality.
  3. The indicator ‘Undergraduates admitted per academic’ seems to be the same as the old student/staff ratio indicator. Assuming that a lower number is better, this again benefits research-intensive institutions more than others. Research-intensive institutions employ relatively many academics, but many of them will have research-only contracts. Yet under this indicator they will still lead to a higher score on teaching quality.
  4. ‘Income per academic’ is also a dubious indicator. Assuming this refers to the average annual income of academics, there is no reason to believe that higher salaries benefit the quality of teaching in particular. It could be argued that salaries nowadays are related more to research quality and productivity than to teaching quality. If income per academic instead refers to the external financial resources that an academic attracts, it would be even more of an indicator of research intensity.

Although the new THE ranking methodology seems to put more emphasis on teaching, on closer inspection this is rather misleading. All this again shows how difficult it is to measure teaching quality. But as long as teaching quality is not addressed adequately in the international rankings, they cannot fulfill their function as a transparency instrument for international students.
