Counting what is measured or measuring what counts?

The Higher Education Funding Council for England (HEFCE) has published a report on the impact of rankings in the United Kingdom. It is probably one of the most extensive studies on rankings to date. The study was conducted by the Centre for Higher Education Research and Information (CHERI) and Hobsons Research and is based on a survey of 91 higher education institutions in the UK and six institutional case studies.

The researchers looked at five rankings in particular: three national ones (the Sunday Times Good University Guide, The Times Good University Guide and The Guardian University Guide) and two international ones (the Shanghai ranking and the Times–QS ranking). The report itself and the background data are all available on HEFCE’s website.

Roughly, the study is divided into three parts. The first looks at rankings and their shortcomings in general, the second at the impact of rankings on universities in the UK, and the final part discusses alternative ranking methods such as the CHE ranking.

One of the most interesting questions posed in the first part is actually the same as the title of the report: counting what is measured or measuring what counts? In other words, are the criteria in these league tables used because they are the most important determinants of quality, or simply because they are the indicators that are most easily measurable? Not surprisingly, the researchers find that:

The measures used by the compilers are largely determined by the data available rather than by clear and coherent concepts of, for example, ‘excellence’ or ‘a world class university’. Also the weightings applied do not always seem to have the desired effect on the overall scores for institutions. This brings into question the validity of the overall tables.

Several other points of critique – many of which have been discussed before, including on this blog – are confirmed in this part of the study. But the real value of the study is that it doesn’t stop there. It continues with an analysis of the survey and case studies to identify the ways in which these rankings actually shape policies. The researchers find that institutions are indeed strongly influenced by league tables. One finding that confirmed my expectations (see here and here) concerns the link – and often contradiction – between league table criteria and other missions of the university:

League tables may conflict with other priorities. There is perceived tension between league table performance and institutional and governmental policies and concerns (e.g. on academic standards, widening participation, community engagement and the provision of socially-valued subjects). Institutions are having to manage such tensions with great care.

These are just a few quick observations. Read the full report! I will, and I will probably post more about it at a later stage.
