Counting what is measured or measuring what counts?

The Higher Education Funding Council for England (HEFCE) published a report on the impact of rankings in the United Kingdom. It is probably one of the most extensive studies on rankings to date. The study was conducted by the Centre for Higher Education Research and Information (CHERI) and Hobsons Research, and is based on a survey of 91 higher education institutions in the UK and six institutional case studies.

The researchers looked at five rankings in particular: three national ones (the Sunday Times Good University Guide, The Times Good University Guide and The Guardian University Guide) and two international ones (the Shanghai Ranking and the Times QS Ranking). The report itself and the background data are all available on HEFCE’s website.

Roughly, the study is divided into three parts. The first looks at rankings and their shortcomings in general. The second examines the impact of rankings on universities in the UK. And the final part discusses alternative ranking methods such as the CHE ranking.

One of the most interesting questions posed in the first part is actually the same as the title of the report: counting what is measured or measuring what counts? In other words, are the criteria in these league tables used because they are the most important determinants of quality, or because those indicators are simply the ones that are (most easily) measurable? Not surprisingly, they find that:

The measures used by the compilers are largely determined by the data available rather than by clear and coherent concepts of, for example, ‘excellence’ or ‘a world class university’. Also the weightings applied do not always seem to have the desired effect on the overall scores for institutions. This brings into question the validity of the overall tables.

Several other points of critique – many of which have been discussed before, also on this blog – are confirmed in this part of the study. But the real value of the study is that it doesn’t stop here. It continues with an analysis of the survey and case studies to identify the ways in which these rankings actually shape policies. They find that institutions are indeed strongly influenced by league tables. One finding that confirmed my expectations (see here and here) was about the link – and often contradiction – between league table criteria and other missions of the university:

League tables may conflict with other priorities. There is perceived tension between league table performance and institutional and governmental policies and concerns (e.g. on academic standards, widening participation, community engagement and the provision of socially-valued subjects). Institutions are having to manage such tensions with great care.

These are just a few quick observations. Read the full report! I will, and will probably post more about it at a later stage.

University rankings and customer satisfaction

One of the main criticisms of international rankings is that they measure research quality rather than teaching quality. This is especially the case for the Shanghai Jiao Tong Ranking. The THES Ranking uses proxies like employer surveys, student–staff ratios and the number of international students to indicate education quality. The best-known national university ranking is probably that of the US News and World Report. However, its proxies for educational quality (such as selectivity) cannot be applied in a standardised global setting.

The most ambitious project to date to rank universities on education quality is the OECD’s plan to rank according to learning outcomes. Andreas Schleicher, the OECD’s head of education research, explained this in the Economist in November last year:

“Rather than assuming that because a university spends more it must be better, or using other proxy measures for quality, we will look at learning outcomes”

Just as the OECD assesses primary and secondary education in its PISA assessment, it will sample university students to see what they have learned. Once enough universities are taking part, it may publish league tables showing where each country stands, just as it now does for compulsory education. This, of course, is a very ambitious project, if not over-ambitious. But at the same time, the OECD is probably one of the few international organisations that have the capacity and experience to assess educational outcomes at a (near) global level. Or not?

The Center for College Affordability and Productivity (CCAP) at Ohio University recently proposed an alternative ranking of US colleges and universities:
