In the latest issue of the American Journal of Sociology, Wendy Nelson Espeland (Northwestern University) and Michael Sauder (University of Iowa) present an impressive paper, Rankings and Reactivity: How Public Measures Recreate Social Worlds. The paper shows how the rise of public measures changes social behaviour by examining the law school rankings of the US News and World Report (USNWR). It struck me how many of their findings and arguments can be applied to international rankings as well. In some cases, their arguments might even be stronger for international rankings and expose additional complications.

Through processes of what the authors call ‘reactivity’, the independence between the measures and the social world they target is threatened. Rankings thus do not just measure the current situation; they also define it by changing behaviour. The authors identify two mechanisms of reactivity that are important in this respect: ‘self-fulfilling prophecies’ and ‘commensuration’. Here, I will discuss the self-fulfilling prophecy mechanism and attempt to ‘translate’ it to the level of global higher education and global rankings.

The authors define self-fulfilling prophecies as processes by which reactions to social measures confirm the expectations or predictions that are embedded in the measures, or which increase the validity of a measure by encouraging behaviour that conforms to it (p.11). These processes shape the reactivity of rankings in different ways:

(i) First of all, rankings have an effect on external audiences. Rankings magnify otherwise statistically insignificant differences between law schools, and the distinctions produced by these rankings become taken for granted. As one of the interviewees puts it (p.12):

“rankings create inequality among schools that are rather hard to distinguish. They create artificial lines that then have the danger of becoming real”

This is because even small differences have an effect on the quantity and quality of applications that a school receives in the future. Almost all admissions directors in the study reported that students’ decisions correlate with rankings.

To my knowledge, such correlations have not yet been demonstrated for student choices at the international level. However, considering that quality is even less transparent at the international level than at the national level, it is likely that many students looking for international opportunities are guided by international rankings such as the Times Higher Education Supplement ranking (THES), the Shanghai Jiao Tong ranking (SJT) and, in the case of MBAs, the Financial Times Global MBA Rankings (FT). The number of people arriving at my blog (especially from developing countries) searching for terms like ‘university ranking institution X’ or ‘prestigious university country Y’ supports this.

One problem here is of course that these international rankings don’t have much to say about education. This especially goes for the SJT ranking, which is based entirely on research performance. One criterion for educational quality is used, but it is measured by the number of alumni of an institution winning Nobel Prizes and Fields Medals, and it accounts for only 10% of the total score.

(ii) A second mechanism for self-fulfilling prophecies emerges through the influence of prior rankings on survey responses. The USNWR uses two types of surveys: one for academics, one for practitioners. The interviews in the paper show that many academics are not sufficiently informed about other law schools in the US and therefore base their judgements on… previous rankings. And the same is probably true for the practitioners, as this quote shows:

“Well, hell, I get the rankings and I get 184 schools to rank. I know about [this school], something about [that school], I know about [my school] obviously, and I’ve got some buddies here and there so I feel like I know something more about some of those schools. But beyond that, guess what, I’m basing these decisions on the rankings; it’s a self-fulfilling prophecy. If partners [at a law firm] do anything to prepare themselves for this [reputational survey], they probably go out and get a copy of past USN reports and fill it out from that.”

The THES rankings also use peer surveys and recruiter surveys as part of their rankings. For the THES ranking these are a crucial part, with academic peer surveys determining 40% and recruiter surveys determining 10% of the overall score. But even though surveys determine 50% of the overall ranking, they are conducted in a very poor manner and therefore lack any credibility (see the discussion here).
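As a back-of-the-envelope illustration of how such a weighting scheme works, the sketch below computes a composite score for two invented universities. The weights follow the survey shares reported above (40% peer, 10% recruiter), but the component scores and the lumping of all remaining indicators into one 50% block are my own simplification, not real THES data or methodology:

```python
# Hypothetical weighting scheme: survey shares as reported for THES,
# all other indicators collapsed into one block for illustration.
WEIGHTS = {
    "peer_survey": 0.40,
    "recruiter_survey": 0.10,
    "other_indicators": 0.50,
}

def composite_score(scores):
    """Weighted sum of normalised component scores (each 0-100)."""
    return sum(WEIGHTS[k] * scores[k] for k in WEIGHTS)

# Two invented universities with identical 'hard' indicators: only their
# survey-driven reputation differs, yet that alone separates them by ten
# points overall - which is exactly where reputational feedback enters.
u_a = {"peer_survey": 90, "recruiter_survey": 85, "other_indicators": 70}
u_b = {"peer_survey": 70, "recruiter_survey": 65, "other_indicators": 70}

print(composite_score(u_a))  # 79.5
print(composite_score(u_b))  # 69.5
```

Since half of the composite depends on reputation surveys, any bias in who answers those surveys (and on what basis) flows directly into half of the final score.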

Espeland and Sauder’s argument that a lack of information about other schools or universities makes prior rankings a crucial determinant of future rankings is of course even more valid in the international domain. For the THES rankings, each peer was asked in which area of academic life they are expert, and then asked to name up to 30 universities they regard as the top institutions in their area. I expect that each peer will list a few – let’s say 10 – universities with which they cooperate. Those are of course the institutions they will have some knowledge of. The other 20 are probably based on reputation, and thus… previous rankings. We might think we know something about Stanford or Cambridge and therefore include them in our personal top 30. But why not Uppsala, Utrecht or Iowa? Because they are not in the top 20 of the rankings?

The fact that responses to the THES surveys were heavily biased towards Asian and Anglo-Saxon respondents, together with the relatively high scores of Asian, American, UK and Australian universities, confirms this point. For the recruiter surveys, one can expect the same self-fulfilling prophecy effects.

(iii) A third mechanism of the self-fulfilling prophecy is that resources are distributed on the basis of rankings. In the case of US law schools, these are basically internal university decisions, so law schools are competing with other programmes. As the authors note, some administrators use rankings as a heuristic to help allocate resources because other benchmarks are lacking. And if resources are allocated by rank or by the potential for rank improvement, rankings reproduce and intensify the stratification they are designed to measure (p.14).

This is of course a tricky issue, especially in the case of international rankings. Does a university receive more money from its government (because in most countries that is still where much of the money is allocated) because of high rankings, or because it performs better? The crucial question here is of course: better according to what, and according to whom? Assuming that this is a reactive process, the criteria used in important rankings become indicators for policies and for resource allocation.

I think that the USNWR rankings have a more explicit effect on higher education in the US than international rankings have on universities elsewhere. However, in my own experience and interviews, it has become apparent that international rankings play a role in setting institutional priorities, albeit in a more implicit manner. This is likely to have an effect on the allocation of resources as well.

What we need to keep in mind here is that rankings are developed in specific contexts, while national needs emerge from other contexts. If ranking criteria are prioritised at the cost of specific national or regional needs, this ‘objective quality’ might compromise the real functions of higher education. This is also related to the next mechanism.

(iv) A final mechanism is what the authors call ‘realizing embedded assumptions’: rankings create self-fulfilling prophecies by encouraging schools to become more like what rankings measure, which reinforces the validity of the measure. They impose universal definitions of what a school or a university should look like and what it is supposed to do. As a result, schools may feel pressure to abandon missions that are not measured in rankings.

From an international perspective, the use of international students as one of the criteria confirms this point. International rankings that give a score for the number of international students do so on the assumption that it is good to have an international campus and that it is a measure of quality if students from around the world want to attend a university. For a large part this assumption is valid, but not if we look at the outliers. On the one hand there are universities, such as in Australia, where the number of international students has become so substantial that one might ask whether this is an indicator of quality or of commercial interests trumping academic ones. The quality of the international students, for instance, is not measured in any way.

On the other side, there are many countries with an unmet demand from their national population. Here, a delicate issue arises in relation to international rankings. If ranking criteria become policy indicators or performance measures, one runs the risk of policies becoming detached from the needs of a specific country or university. Some time ago, I pointed to a middle-income country that increased its inflow of international students in order to appear higher in the rankings. This was a country with a high and unmet national demand for higher education…

Other measures, such as the number of Nobel Prize winners or highly cited academics, might give an indication of the quality of research at already established institutions, but for the bulk of the world’s universities this should not be a priority at all. For instance, a university in that same country recently attracted a well-known ‘academic superstar’ for a newly established chair. One can ask whether this person’s annual appearance (probably for a day or two each year) adds anything to the research quality of such an institution.

This is not just an issue in lower- and middle-income countries. For instance, developments in knowledge transfer are important in developed as well as developing countries, but they are not recorded in any ranking. In many Western European countries, the inclusion of second-generation immigrants in the higher education sector is a policy priority. Again, this is not recorded in any of the international rankings. An over-emphasis on rankings might push such important policies and missions to the background.

In my opinion, assumptions about what a good university is (and even what a so-called ‘world class university’ is) are very much dependent on national circumstances. Or at least they should be. If these assumptions become embedded in rankings, this national context is totally overlooked. Through the processes of reactivity, rankings might then have serious negative effects on the national (and regional) missions of universities.


Clearly, the self-fulfilling prophecy effect of rankings is a serious one. One of the interviewees, a law professor, even called it a ‘self-fulfilling nightmare’. Rankings can have very positive effects in creating awareness of the quality of education and research within institutions. However, the Espeland and Sauder article shows that they can have detrimental effects as well, especially if we extrapolate the findings to international rankings and the global higher education landscape.

For the organisations that develop the rankings, this means that they need to be very precise in ranking universities worldwide (this message clearly goes out to Quacquarelli Symonds, the company behind the THES rankings and the Fortune business school ranking). The survey method in particular, even if conducted in a methodologically correct way, is very sensitive to the self-fulfilling prophecy mechanism. Since universities and governments react to rankings in a very real way, namely by conforming to their criteria (either explicitly or implicitly), the rankers have a responsibility to be accurate and to choose the right criteria. And the right criteria are not always those that are easily measured! Nor are they always those that are important in the American, British or even OECD context.

The main message, however, goes out to policy makers and university administrators. Since these rankings are developed in a particular setting, they will not always correspond to national or regional circumstances and societal and economic needs. The most irresponsible thing to do is to conform to rankings in the short term and thereby compromise the real missions and policy objectives in the long term.

This article has 6 comments

  1. Andrew

    Thanks for this detailed and considered reflection on the Espeland & Sauder paper – and for your effort to expand the implications of their argument to the international context. Just some off-the-cuff thoughts in response:

    I’m not convinced myself that the role of rankings at the international level is precisely analogous to the role that the USNWR rankings have come to play in the U.S., even though there are emerging overlaps that are worth monitoring. There’s undeniably an increasingly globalized economy of prestige and reputation, but under conditions of globalization and increasing affluence, the thought that individuals would choose to remain bound to local or national universities that meet ‘national needs’ rather than pursue an educational career that enhances individual social and economic mobility is becoming outdated. From the perspective of an individual student and/or his family, why not pursue education at a ‘name’ school or a degree that will give your resume the semblance of ‘international experience’? If going abroad will promote your career prospects and life chances, why not do so? Universities, in turn, are turning away from their mission of training national/local elites for national/local leadership; rather, seeing the emerging network of trans- and international affiliations among individuals across borders, it makes some degree of sense that they too would attempt to ‘internationalize’ in order to develop their own networks of support and influence, regardless of where they may be. Rankings of course feed into this, but it’s only part and parcel of a larger system of distributing reputation and prestige along increasingly complicated and dense social networks.

    The curious outcome of this, I think, is increasing emphasis on English-language instruction. Having worked as an international student advisor before becoming a graduate student in education, I’ve noticed the tremendous boom in demand for English-language instruction and education from English-language countries. It’s not the rankings that drive this so much as a determination that the transmission of knowledge – and prestige – is going to be done on the basis of English, and that any individual who wants to enhance employment and social mobility prospects ought to establish some fluency in that language. Of course, universities understand this, but risk overplaying their card when they invite more students than they can sustain without compromising the appearance of quality, as has happened at various institutions.

  2. Mohammad

    Hi Eric, Just to say I really enjoy reading your thoughtful posts.
    Mohammad, GSSSP RMIT University

  3. Eric

    Apologies for my late replies…

    @ Mohammad: thanks for that!

    @ Andrew: thanks for your thoughts on the issue. Don’t get me wrong. I’m not advocating a ‘nationalist turn’ in higher education. And yes, from an individual perspective, I can totally understand the choice for a ‘name school’. It’s just worrying that this reinforces the prestige game… On the other hand, I do recognise that this prestige game has a lot of benefits. Cross-national and cross-organisational learning can contribute to quality.

    I just wonder whether international rankings (because of their simplification and quantitative reduction of the notion of quality) perform this function. I think the obsession with rankings and so-called ‘world class universities’ creates a risk of neglecting some of the functions of universities in the national or regional domain (functions which are usually not included in ranking criteria). Some of these issues are explored in Altbach’s book on world class universities in Asia and Latin America, and also by Steiner-Khamsi (also from TC Columbia, if I’m correct) in her book on educational lending and borrowing.

    B.t.w.: great blog you started! Interesting posts…

  4. Andrew

    Eric, thanks for your reply. I share your worries and concerns about the impact of rankings; what the E&S paper provokes is speculation about when rankings stop being a ‘distortion’ of true quality measures and become a structural part of the reality itself. Interesting questions.

    Thanks for the nice words about my blog, but it’s so hard to keep up! But let me return the compliment – your posts are really great reading, very thought provoking.

  5. Pingback: THES Ranking 2007 by Country | Beerkens' Blog

  6. Pingback: Counting what is measured or measuring what counts? | Beerkens' Blog
