Goodbye to AHELO?

This article was written for ‘HE – Policy and markets in higher education‘ from Research Fortnight and was published there on 24 June.

“It is hard to improve what isn’t measured.” So said Andreas Schleicher, who runs a programme to identify the abilities of school children in different countries on behalf of the OECD. Eight years ago, a similar measurement for higher education was proposed. The Assessment of Higher Education Learning Outcomes (AHELO), as it is known, was due to be discussed this week at the sixth annual international symposium on university rankings and quality assurance in Brussels, but has been dropped from the programme. Where did AHELO go?

A clue could be found in where it came from. The idea was first mooted in 2006 when more than 30 ministers of education gathered in Athens to discuss higher education reforms worldwide. In his opening speech, Angel Gurria, the secretary general of the OECD, suggested undertaking “a comparable survey of the skills and abilities of graduates that might measure learning outcomes in higher education and help families, businesses and governments develop an evidence-based understanding of where and how higher education systems are meeting their expectations of quality and where they are not”. The OECD had developed the methodology to do it, he said. All they needed was a mandate. They got it.

This needs to be seen in the context of discussions about rankings that were taking place at the time. In the early 2000s, international rankings in higher education were a new phenomenon. In 2003 the first edition of the Academic Ranking of World Universities was published by Shanghai Jiao Tong University. In 2004 the British magazine Times Higher Education published the first edition of the Times Higher Education-QS World University Rankings, which was later split into the THE World University Rankings and the QS World University Rankings. These were followed by the Taiwan Ranking, the Leiden CWTS Ranking and those of a plethora of newspapers.

The international higher education community was critical, to say the least. It disliked what these rankings measured, how they measured it and the possible consequences. There was too much focus on research and, in particular, on those aspects of research that could be quantified and counted: publications, citations and Nobel prizes. The resulting rankings and scores were presented as quality indicators for all activities in universities, including teaching, and an institution’s position in these rankings appeared to be important in the decision-making processes of international students. As a result, rankers searched for indicators that could express the quality of education, preferably in a single, globally comparable number—something that most experts and practitioners in education knew was an impossible goal.

Those education indicators that were included were poor proxies for the quality of teaching and learning. Teachers’ highest degrees or the ratios of staff to students merely said something about the input, not necessarily about quality. Other indicators, such as reputation or employer surveys as a proxy for teaching quality, were highly controversial. Highly selective institutions—which tended to lead the international rankings—might have many graduates in leading positions, but was that due to the institution’s efforts or the quality of the students it admitted? Is the best school the one that delivers the best graduates or the one that adds most value through the quality of its teaching?

Another problem was the homogenising tendency of international rankings. Having a limited set of indicators could cause institutions to focus on them, instead of the underlying objectives. This could mean specific local contexts would be neglected and local demands would not be met, which could lead to decreasing diversity. The only benchmark in the global knowledge economy—according to the international rankings—seemed to be the world-class, research intensive university.

AHELO—together with transparency tools such as U-Map, which looks at which activities individual institutions across Europe are involved in, and U-Multirank, which examines how well each institution does in these activities—emerged as the answer. The aim was to restore the teaching and learning mission of the university to a central position, look at the added value of institutions rather than their graduates, and show that a range of institutions—not just the so-called world-class research universities—could add value. Although the OECD repeatedly stressed that AHELO was never envisaged as a ranking tool, the rankings issue was frequently mentioned in its favour.

But can AHELO really provide an alternative to international rankings? It seems unlikely. Shortly after Schleicher addressed the conference in March 2013, the OECD published the results of a feasibility study. A staggering three volumes containing more than 500 pages concluded: “It is feasible to develop instruments with reliable and valid results across different countries, languages, cultures and institutional settings.” After that came a deafening silence. The feasibility study may have proved that it is scientifically possible, but it is probably not financially or politically practical—or desirable.

According to the New York Times, the OECD spent about $13 million on the study. How much time and money was spent by the 17 governments and 248 universities involved is unknown—not great for an organisation that values and advocates transparency.

And AHELO does not solve the problem that rankings only look at input and output rather than the value added. In the feasibility study, the OECD notes that value-added analysis is possible but the challenges are considerable. Furthermore, it claims that the difficulties inherent in making comparisons across jurisdictions mean the main use of the results of any value-added analysis would be for institutional self-study and comparisons between programmes within a jurisdiction. Although this might be helpful for a university’s internal quality assessment, it still only assesses learning outcomes after three or four years of study. As has been shown with other instruments, this might be more an indicator of the quality of the admitted students and the quality of secondary education than of the quality of the teaching and learning process within the university.

Finally, as critics have suggested, AHELO risks being at least as homogenising as other ranking systems, while standardisation of learning outcomes could lead to conformity and stifle innovation in teaching even more than the existing rankings do. A one-size-fits-all approach to learning outcome assessments, one that originates largely from the Euro-American world, could undervalue local knowledge.

Schleicher may well have been right about the difficulties of improving what isn’t measured. But being able to measure something is no guarantee of improvement and measurement is not worth pursuing at all costs.

The end of the university? Not likely

This article was first published in University World News.

This year has frequently seen apocalyptic headlines about the end of the university as we know it. Three main drivers have been and still are fuelling these predictions: the worldwide massification of higher education; the increasing use of information and communication technology in teaching and the delivery of education; and the ongoing globalisation of higher education. These developments will make the traditional university obsolete by 2038. At least, that’s what some want us to believe.

The massification of higher education worldwide – even more than the massification in Western Europe, the United States and Japan in the post-war period – demands new and more efficient types of delivery. The acceleration in the demand for higher education, especially in China and other parts of South and East Asia, has made it nearly impossible for governments to keep pace. The increase in demand, together with decreased funding due to the financial crisis, has put pressure on traditional modes of university education.

Innovations in ICT have expanded the possibilities for delivering education and have led to new teaching instruments. Massive open online courses, or MOOCs, which emerged in 2012, combined new technologies to reach a massive audience. These developments are intensified through the ongoing globalisation of higher education.

Because of the globalisation process, students’ options for where to study have expanded, ranging from attending universities abroad to taking online courses.

The concept of ‘the university’ is gone

The conjunction of these developments has led many to believe that the centuries-old model of the university is coming to an end. If we believe them, the higher education landscape of 2038 will be completely different from the current one. I would argue that these predictions show both a lack of knowledge about the contemporary landscape of higher education and a lack of historical understanding of the development of universities.

The time when the concept of the ‘university’ was clear-cut, referring to a single organisational and educational model, is long gone. Especially since the massification of higher education in the post-war period, this single model has been accompanied by a wide variety of other higher education institutions. More vocationally oriented institutions were established, such as community colleges. Very large distance-education institutions emerged in many Western countries and beyond. What’s more, when the organisational boundaries of the traditional university were reached, new activities and new organisations appeared. One thing is for sure: in not one country in the world is the traditional university model representative of the entire higher education system any more.

But even if the proclaimers of the end of the university are only referring to the traditional model (whatever that is), they will be proven wrong in 2038, and long after that. The traditional university has been one of the most enduring institutions in the modern world. Granted, university research and university teaching have adapted constantly to changes in the economy and society. This process of adaptation might be too slow, according to many, but it is a constant process in the university. Despite this continual change and adaptation, the model of the university as we know it has changed very little.

The organisation of faculties, schools and departments around disciplines, accountability in the form of peer review, comparable tenure and promotion systems, the connection between education and research, the responsibility of academic staff for both education and research and for both graduate and undergraduate education, the primacy of face-to-face instruction, etc – these are all characteristics that can be found in universities throughout the world and that have existed for many, many decades – if not centuries.

My bet is they will still be there in 2038. It would be rather naive to think that a financial crisis or even a new type of delivery, like MOOCs, will profoundly change these enduring structures and beliefs.

Universities’ DNA

In the words of Clayton Christensen and Henry Eyring, authors of The Innovative University, we are talking about the ‘DNA’ of the university – and that DNA does not change easily. They argue that university DNA is not only similar across institutions but is also highly stable, having evolved over hundreds of years. Replication of that DNA occurs continually, as each retiring employee or graduating student is replaced by someone screened against the same criteria applied to his or her predecessor. The way things are done is determined not by individual preference but by institutional procedure, written into the ‘genetic code’.

New technologies will enable new forms of education and delivery. In the coming 25 years, we will see the emergence of new institutions focusing on specific target groups and we will witness traditional institutions employing these new technologies. But will this make the university as we know it obsolete? No, it will not, because the function of the university as we know it is much more comprehensive than ‘just’ the production and transfer of knowledge.

Students attend universities not simply to ‘consume’ knowledge in the form of a collection of courses. They go there for an academic experience, and for a degree that will provide them with an entry ticket to the labour market and give them a certain status. Does the fact that I do not see any substantial changes in 2038 mean that there should be none? The fact that structures and beliefs endure does not always mean they still serve the functions they used to.

This is also the case with many of the traditional structures and beliefs in the university. Holding on to these practices is not an end in itself. At least, it should not be. Yet in making policy and in making predictions, it is good to take into account the stabilising character of these structures and beliefs.

25 years from now

Because of the university DNA, there is rarely a revolution of the type so frequently predicted by politics, business and consultants. This steadiness is a major source of universities’ value to a fickle, fad-prone society; it is also why one cannot make the university more responsive to modern economic and social realities merely by regulating its behaviour. A university cannot be made more efficient by simply cutting its operating budget, nor can universities be made by legislative fiat to perform functions for which they are not expressly designed. Another reason why the university as we know it will still be there in 2038!

Many say that the best way to predict the situation in 25 years is to look back 25 years and see what has changed since then. I was first introduced to university life 25 years ago, in what you could call a traditional university. In the past 25 years I have studied and worked at four universities in and outside the Netherlands. At the time of writing, I work at Leiden University, another traditional university.

Comparing the university of 1988 with the university of 2013, it is remarkable how little these organisations have changed. Of course, the university has adapted to societal, political and economic changes, but at its core the traditional university has remained very much the same. I can safely say that the DNA of the traditional university has not changed in the past 25 years and I can safely predict that it will not change in the coming 25 years. And essentially, that is a good thing.

Internationalisation: more than labour market policy

This article was first published (in Dutch) in Transfer, the internationalisation magazine of Nuffic.

It is now two years since Ferdinand Mertens, a former senior official at the education ministry OCW, questioned the costs of ‘the German student’ in Transfer. I thought he had a point, and I am glad that this has led in recent years to a broad discussion and a thorough analysis of the costs and benefits of international mobility. Via the CPB and Agentschap NL, this finally resulted earlier this month in Make it in the Netherlands, a draft advisory report by the Social and Economic Council (SER) on binding international students to the Netherlands.

The SER has said sensible things about higher education and internationalisation before. In its report on labour migration, in its advisory report on Europe 2020 and in its response to the Strategic Agenda of the Ministry of OCW, the council repeatedly stressed the importance of internationalisation and identified attracting (more) international students as a way to counter labour market shortages and to strengthen the position of the Netherlands and Europe in the world.

The recent SER report can therefore hardly be called a surprise. Internationalisation can contribute to solutions both for the current and future labour market and for the quality of higher education. According to the council, the influx of international students generates welfare gains, raises the quality of education through peer effects and produces a high-quality international knowledge infrastructure. All this, in turn, makes the Netherlands more attractive as an international business location. Although the risks of displacement and brain drain are also mentioned, the balance is clearly positive. I expect that many Transfer readers will endorse and even embrace the report.

But how can we actually achieve those positive results? The SER recommends increasing the influx of foreign students, especially in sectors with labour market shortages, and raising the stay rate. Business, government and the education institutions all share responsibility here. The recommendations to the institutions concern their educational portfolio and the content of their programmes, but also, emphatically, their supporting infrastructure: practical and administrative services, housing, social integration, language courses, alumni relations and career support.

I am convinced that there is still a great deal to be gained here. For international students, these matters play a major role in choosing a programme and a study destination, and they strongly shape students’ final verdict on their study experience. Good support services are the best conceivable international advertising, both for individual institutions and for Dutch higher education as a whole. They lead to a more positive study experience, higher stay rates and – perhaps even more important in the long run – a more positive image of the Netherlands as an open and hospitable knowledge country.

This supporting infrastructure must therefore become on a par, internationally, with that of other major student destinations. National and local governments can streamline many more processes, and institutions can further professionalise their support. For the institutions, however, this means a considerable investment in staff and facilities – for which the ‘Berenschot overhead benchmarks’ leave little room. And, after all, we do not want to shift all of this onto the academic staff either.

What worries me, despite my positive impression of the report, is not so much the advice given as the question behind it. The cabinet asked the SER above all to examine which labour market sectors have the greatest need for international graduates and what the various players can do to bind them to the Netherlands. A legitimate question, but at the same time an illustration of the growing emphasis on short-term economic benefits.

In The Hague, the justified discussion about costs and benefits increasingly seems to result in the other, often less definable and measurable, long-term effects being ignored. Internationalisation of higher education is more than labour market policy. As the SER itself indicates, it is also quality policy. But it has many other dimensions as well: political, cultural, scientific, and so on. In that respect, the definition of binding that is used is very narrow. According to the SER, the aim is to “retain foreign students (temporarily) for the Dutch labour market after their studies”.

Binding, however, should be about more than that. It is about connecting international graduates to the Netherlands, whether they stay in the country or not. It is about future trade relations, diplomatic and political ties, scientific links, cultural relations and, of course, personal relationships.

Even a country like Australia, where internationalisation is approached primarily from an economic perspective – attracting fee-paying students and filling skills shortages – now seems to attach more importance to broadening and deepening its knowledge of, and its relationships in, the Asian region. In the Netherlands, let us at least make sure that the short-term discussion about costs and benefits is not the only conversation we have about the internationalisation of education.

Global University Models

This post was first published in University World News.

Universities throughout the world are becoming part of a global community, not just of scholars or students, but also of leaders, managers and administrators. This is sometimes referred to as Westernisation or Americanisation, or even ‘McDonaldisation’, and is frequently seen as a negative phenomenon. This homogenisation is linked to the debate on the knowledge-based economy and knowledge society, which has provided new models for teaching and research and also for the governance of institutions. Both education and research in the university are increasingly determined by external criteria rather than internal academic dialogue. These new models primarily exist to serve the economy and society and can be described as open, relevant and responsive.

One might argue that universities, especially public ones, have always existed to serve national economic and social interests. However, these services are becoming the raisons d’être of the university, partly displacing its function as an institution for personal academic and intellectual enrichment (the ‘ivory tower’ model). International organisations such as the World Bank and the Organisation for Economic Co-operation and Development (OECD), but also consultants, policy advisers and academics, have played a major role in advocating the ‘service-oriented research university’ model, with the current debate on global university models creating pressures for universities to conform and converge towards ‘prescribed’ types of university. But is this greater international homogenisation necessarily a bad thing?

Service-oriented research universities

A focus on applied research which is economically relevant, graduate labour market demands, the recruitment of new target groups, organisational autonomy and the incorporation of external stakeholders are all elements of this global model. Academic models such as the ‘triple helix’ approach [university-industry-government relationships], ‘mode 2’ knowledge production or the ‘entrepreneurial university’ have been picked up by political or opinion leaders, often in distorted and simplified forms. Countries such as Finland and regions such as Silicon Valley in California or Bangalore in India have become models for other countries and regions, such as the Bandung High Tech Valley in Indonesia. Prestigious universities in the US or UK have become models for other universities throughout the world (the ‘Harvard of country A’ or an ‘Oxford in city B’).

This service-oriented model of the national research university appeared throughout the world in the 1990s, spreading mainly from the Anglo-Saxon countries, and was also adopted and adapted in Malaysia and Indonesia. It affects the core functions of the university – education and research – as well as its governance and management. Local versions can now be observed in Southeast Asia. Institutions such as the University of Malaya and Universiti Sains Malaysia (Malaysia) and Universitas Gadjah Mada and Institut Teknologi Bandung (Indonesia) have clearly adopted the discourse of openness, relevance and responsiveness in their policies and strategies. However, although the four universities in Malaysia and Indonesia now have technology transfer offices, incubators, courses and centres promoting entrepreneurialism, professional training opportunities, external boards of trustees and so on, they function and perform very differently in the different countries and universities.

Westernisation?

Critics often speak of Westernisation or Americanisation and ‘new imperialism’ to describe the homogenising forces in higher education. This may be the case when certain models are imposed on countries, for instance by colonial powers or global institutions such as the World Bank. But in Southeast Asia – and in particular in Malaysia, Singapore and Indonesia – the adoption of global university models appears more organic or voluntary. University leaders have become more and more socialised into a global policy community characterised by shared ideas about higher education’s main challenges and their solutions.

Deliberate cross-national learning takes place when policy-makers emulate or imitate practices in other countries or when they adopt best practices developed elsewhere. This process of policy learning or policy borrowing has become increasingly common. Through visiting foreign campuses or inviting international advisers or consultants, universities borrow or learn from other countries. For instance, former Malaysian prime minister Mahathir Mohamad’s knowledge of Harvard’s Kennedy School of Government inspired the founding of INPUMA (the International Institute of Public Policy and Management) at the University of Malaya; and the idea of a Silicon Valley-like region between Jakarta and Bandung was first proposed by the global consultancy firm McKinsey.

Many of Southeast Asia’s vice-chancellors, rectors and presidents have been trained abroad, often in the UK or US. Some have worked in foreign universities. All attend international conferences and seminars and all of them – and their staff – receive much of their professional information from international journals, reports and websites.
Many are in direct contact with university leaders abroad, face-to-face, via email or through other new forms of communication. Many will also read University World News or other global higher education media on a regular basis and this also frames their way of thinking about university management.

Global organisations such as the OECD play an active role in the diffusion of best practice, for instance on university management or the university’s role in a region. Media and other ranking organisations have given the creation of global exemplars – the so-called ‘world-class university’ – a massive boost.

The local context

Whether practices or policies from abroad are adopted successfully depends on whether they are interpreted correctly. Many examples become over-simplified in the process of borrowing or learning, resulting in sub-optimal results. But as long as the adoption of certain models is based on a rational assessment, an improvement of systems and universities can follow.

For example, in both Malaysia and Indonesia national research universities were given autonomous status. In Malaysia, however, this has been somewhat symbolic, given the historical, culturally determined close connections between the government and universities. In Indonesia, on the other hand, autonomy has mostly been interpreted as financial autonomy: the first universities were given autonomous status just after the Asian economic crisis of 1997, preceding a severe cut in government funding. In both cases there is a discrepancy between the global model and the local interpretation. The success of policy changes, particularly new practices, depends on how well they fit the local cultural, legal, institutional or political context that has evolved over decades or centuries.
Another example is the drive towards the commercial application of research results and the stimulation of an entrepreneurial attitude among academics in the service-oriented research university. This drive stems from the fact that universities used to be too focused on ‘mode 1’ research (academic, investigator-initiated and discipline-based). In countries such as Malaysia and Indonesia ‘mode 2’ type research (context-driven, problem-focused and interdisciplinary) has also been adopted, despite the lack of a history of thorough ‘mode 1’ research.

A one-way street?

In my opinion the homogenisation that results from the international debate about university models is not negative in itself. The global diffusion of certain models can have positive effects because the knowledge about such models and their effectiveness becomes shared knowledge and one country learns from or is inspired by another. However, this learning process has to be well informed and needs to incorporate cross-national differences.

That said, in reality this diffusion process frequently follows a West-to-East or North-to-South pattern. This may be not so much a reflection of power relations as of a willingness to learn from others. In this respect, universities in the West should be more open to developments in other parts of the world and more willing to learn from them.

Despite the claims of Westernisation or McDonaldisation of higher education, we can still detect specific varieties of higher education in different parts of the world, each with their own particularities, strengths and weaknesses. In 2010 Kishore Mahbubani, dean of the Lee Kuan Yew School of Public Policy at the National University of Singapore, warned the US that “the time has come for American higher education to think the unthinkable: that it can learn lessons from Asia”. The reaction of European policy-makers after the recent publication of the OECD’s PISA results, comparing 15-year-olds’ performance, echoed this warning. Already, many of them are seeking ways to learn from Asia’s performance in maths and science education.

In the same way, universities in other parts of the world, especially in East Asia (Singapore, Korea and China), might prove important sources of inspiration for universities in the US and Europe.

Mobility Stats: Mapping Mobility & Open Doors

Two international education organisations, Nuffic from the Netherlands and the Washington-based Institute of International Education (IIE), published their international student mobility statistics this week. While Open Doors has been published by IIE since 1948, the Nuffic publication – Mapping Mobility – appeared for the first time in 2010. Although Nuffic has published international education statistics before, this is its first publication solely focused on higher education.

Growth

One finding of the Open Doors report was that the influx of international students into the US continued to grow modestly. Compared to the year before, there were 3% more international students coming to the US for the purpose of study (the vast majority for a full degree). The number of foreign students studying for a full undergraduate or graduate degree in the US (excluding non-degree students) in 2009/10 was 568,316, almost 3% of the total student population.

The Netherlands witnessed slightly higher growth. In 2009/10 there were 47,226 international degree students in the Netherlands, up 6.3% compared to the year before. Because the total student population in the Netherlands also increased, the percentage of foreign students remained stable at 7.4%.

If we compare the growth rates of the US and the Netherlands over the past five years, we observe growth of over 40% in the Netherlands since 2005-06, against 15% in the US (data based on Table D in the Open Doors fast facts and Diagram 06 in Mapping Mobility).
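
As a rough sanity check on these figures, the sketch below (Python) derives the implied 2005-06 baselines from the 2009/10 totals and the stated five-year growth rates. This is illustrative arithmetic only: the reported growth rates may be based on slightly different totals than the degree-student counts quoted above, so the reports themselves remain the authoritative source.

```python
# Back-of-the-envelope check: derive the implied 2005-06 baselines from the
# 2009/10 totals and the five-year growth rates reported above.
totals_2009_10 = {"US": 568_316, "NL": 47_226}   # international degree students
growth_since_2005_06 = {"US": 0.15, "NL": 0.40}  # reported five-year growth

for country, total in totals_2009_10.items():
    baseline = total / (1 + growth_since_2005_06[country])
    print(f"{country}: 2009/10 total {total:,}; implied 2005-06 baseline ~{baseline:,.0f}")
```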

Countries of Origin

Other interesting dynamics are revealed if we look at the countries of origin. We can conclude that the growth in the US in the past year was caused almost solely by the Chinese international student population: the number of Chinese students in the US increased by almost 30%, and they now account for almost a fifth of all international students in the US. The Netherlands, however, is much more dependent on a single country. Germany remains the main source country for foreign students in the Netherlands, now accounting for 44% of all international students. The table below shows the main source countries of the US and the Netherlands.

[Table: main source countries of international students in the US and the Netherlands]

Destinations

Not surprisingly, the main destinations of these students are institutions in the Dutch border region with Germany. The University of Maastricht tops the list, followed by four universities of applied sciences in the southern, central and northern provinces bordering Germany. In the US, the pattern is obviously much more dispersed. The most internationalised institutions there are the University of Southern California, the University of Illinois (Urbana-Champaign), New York University, Purdue and Columbia.

Framing International Education

Ten days ago or so, I was in Sydney for the annual Australian International Education Conference. I saw some very interesting presentations there, some real eye-openers. I’ll discuss some specific sessions later (I’ll wait until the presentations are available on the website). For now, I just want to share some general impressions.

Most remarkable for me was that the economic framing of international education now seems to be widely accepted. When I lived in Sydney some years ago, my perception was that the government and parts of university management occasionally dropped terms like the ‘education industry’ and ‘higher education exports’. This was really the language of the marketeers and the recruiters.

Nowadays this language has spread throughout the universities and even the international educators themselves have adopted the language. Should we perceive this as conscious, strategic behavior on their part? Is the framing in economic terms an attempt to convince governmental leaders to invest more in higher education because of its strategic economic importance?

In the Netherlands, the national government explicitly frames international education as a quality issue: international education is to be pursued because it improves the quality of Dutch higher education. On the other hand, the income from full fee-paying international students has now become a necessary resource for Dutch institutions as well (and especially for some departments or programmes).

Does it matter how we frame it? Or is it always about the bottom line anyway? I think it does matter. When international education is framed as an export product, as an economic commodity, the recruitment of students becomes the dominant concern. And indeed, recruitment and the image of Australia as an education provider have become the dominant issues in Australian international education. But of course, we all know there is so much more to international education…

Dutch universities & the ranking season

Ranking season is over. Yesterday, the Times Higher published its new ranking, and that also marked the end of this year’s ranking season. After the Shanghai Jiao Tong ranking, the Leiden ranking, the QS ranking and the Taiwan ranking, this was the fifth attempt to illustrate the differences in quality between the world’s universities. Whether they succeeded remains a matter of debate.

Although there are quite a few differences between the results of the rankings, some common observations can be made. First of all, it is clear that the United States is still home to the best universities. In all rankings the US universities are dominant and Harvard is the undisputed leader. Only in the QS ranking did a non-US university – Cambridge – top the list.

Another observation is that none of the rankings manages to sufficiently capture the quality of teaching in its assessment. The THE ranking made an attempt to do so, but most of its indicators still reflect research quality and prestige more than the quality of teaching. The Shanghai, Leiden and Taiwan rankings put most emphasis on research.

Even though the rankings predominantly assess research – although in different ways – their results are very different. To illustrate this point, I have mapped the results of the Dutch research universities in the different rankings. The results are shown in the graph below.

[Graph: positions of the Dutch research universities in the five international rankings]

The results for the twelve universities (the thirteenth, Tilburg University, somehow doesn’t appear in the rankings) show substantial variation across the board. For universities like Eindhoven, Twente and Maastricht, the variation seems exceptionally large. Eindhoven, for instance, was ranked as the best Dutch university in the THE ranking while performing worst in the Shanghai ranking. Leiden shows the least variation, but even here the difference between its rank in the Shanghai ranking (70th) and the THE ranking (124th) is enormous.
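
To make that variation concrete, a simple spread measure – the difference between an institution’s worst and best position across the rankings – already tells the story. A minimal sketch, using only the two Leiden positions quoted above (the other universities’ positions would have to be read from the underlying data):

```python
def rank_spread(positions: dict[str, int]) -> int:
    """Difference between the worst (highest) and best (lowest) rank."""
    return max(positions.values()) - min(positions.values())

# Leiden's positions as quoted in the text; the other rankings are omitted here.
leiden = {"Shanghai": 70, "THE": 124}
print(rank_spread(leiden))  # -> 54 places, for the *least* variable university
```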

Earlier this week at the OECD/IMHE conference, Charles Reed, chancellor of California State University, criticised the rankings (“rankings are a disease”) and argued that all universities add value. I guess he’s right. And the value measured by one ranking seems to be quite different from the value measured by another…

Regulating recruitment agencies

Study abroad for a full degree has developed from an elite into a mass phenomenon. Parallel to this development, we have witnessed a commercialization of international higher education in which many institutions have become financially dependent on full fee-paying international students. To operate in this (global) market, institutions – and especially the lesser-known ones – now frequently turn to agents and recruiters in order to attract prospective students. Many point to the risks of these third-party agents and plead for more regulation or even abolition.

Abolish or regulate? In Inside Higher Ed, Philip Altbach, the director of the Center for International Higher Education, sheds light on this issue. His viewpoint is clear and unambiguous: “Agents and recruiters are impairing academic standards and integrity — and it’s time for colleges and universities to stop using them.”

These agents recruit prospective students and provide general information, but – according to Altbach – in reality they are also making offers to students or actually admitting students, often based on murky qualifications (even though the colleges that hire them say that they still control admissions).

Some initiatives have appeared in the United States with the objective of regulating this ‘new profession’, but these organisations lack the power to monitor compliance or discipline violators. The solution Altbach provides is simple: abolish them! After all, he argues, they have no legitimate role in higher education.

Dutch self-regulation

In the Netherlands, the institutions have chosen self-regulation as the prime instrument for managing (the excesses of) international recruitment. The sector-wide ‘code of conduct’ sets out standards for Dutch higher education institutions in their dealings with international students. One chapter in this code of conduct (pdf) deals with the use of agents. The provisions in this chapter stipulate that agents have to act in the spirit of the code, and they clarify the responsibilities of the agents and those of the higher education institutions. One of the starting points is that admission remains the responsibility of the institutions and that institutions have to take immediate action in cases of unethical behavior.

This way of dealing with the risks of student recruitment (in an increasingly commercialized market) is somewhat comparable to the method of self-accreditation or ‘accreditation lite’ in the United States. Altbach criticized this method because of its lack of enforcement powers in cases of non-compliance.

Other solutions?

In some of the comments below the Inside Higher Ed article, Altbach’s view is portrayed as elitist. Prestigious American schools like Boston College might not need such recruiting agencies. But what about less prestigious universities? What about the ones that are not part of the Ivy League, the Russell Group or the Group of 8? Maybe these institutions do need professional assistance in reaching prospective international students.

For these institutions, abolition might not be acceptable. In addition, the question remains whether all of these agents are rogue operators. Is Altbach’s opinion also valid for agents in other parts of the world? Or is this phenomenon only apparent in the more commercialised higher education sectors (Altbach is mainly referring to the USA, Australia and the UK)?

Either way, even if the number of malicious operators is small, some form of regulation might be necessary to protect the numerous international students who are about to invest a lot of money in their future. Should we let the market do its work, or does this sector need government protection? Or is there enough trust in the higher education institutions (and in the majority of the agents) to rely on soft instruments like codes of conduct and other forms of self-regulation?

Rankings and Reality

Summer holidays are over. In the global field of higher education, this also means that it is ranking season. Last month it was the Shanghai ranking; this week the QS World University Rankings were revealed; and in two weeks the all-new Times Higher Education (THE) ranking will be published. Ranking season also means discussions about the value of rankings and about their methodologies. Two points of critique are addressed here: the volatility of (some) rankings and the overemphasis on research in assessing universities’ performance.

Volatility and stability in international rankings 

This year’s discussion has become extra fierce (and nasty now and then) because of THE’s decision to part with consultancy agency QS and to collaborate with Thomson Reuters, a global research-data specialist. The previous joint THE/QS rankings usually received quite some media attention. This was not just because their methodology was heavily criticized (and rightly so) but also because this disputed methodology led to enormous fluctuations in the league tables from year to year. The criticism prompted THE to join forces with Thomson Reuters, while QS continues its own ranking.

Although the various rankings differed in their methodology, they all seemed to agree on two things: the hegemony of US universities and the undisputed leadership of Harvard. This week’s QS rankings again showed the volatility of their methodology. For the first time Cambridge beat Harvard, and for the first time the top ten is not dominated by US universities: it is now split between five US universities and five UK universities.

The Shanghai ranking, on the other hand, shows much less fluctuation. This probably reflects reality better, but it makes the ranking less sensational and thereby less attractive for wide media coverage. The two graphs below clearly show the difference between the stable Shanghai rankings and the volatile QS rankings for a selection of Dutch universities.

[Graphs: positions of four Dutch universities in the Shanghai and QS rankings over the past six years]

The graphs show the positions over the past six years of the four Dutch universities that are in the top 100 of the Shanghai and/or QS ranking. To illustrate the relative meaning of the absolute positions: the Shanghai ranking only groups institutions ranked outside the top 100 into bands (which also explains the relatively steep drop of Erasmus University in the 2006 ranking). Although Amsterdam has remained fairly stable in the rankings, Leiden and Utrecht show quite some fluctuation – much more than their real quality would justify.

And if you think this is volatile: it can be much worse. Simon Marginson, in a 2007 paper, lists dozens of cases where drops or rises of more than 50 positions (sometimes even up to 150 positions) occur within a year. A case in point is the Universiti Malaya, which went from “close to world class” to “a national shame” in only two years…

It will be interesting to see in the coming years how the new THE/Thomson Reuters methodology works out in this respect. The Times Higher published its methodology this week. While the QS ranking based its listing on only 6 indicators (with a 50% weighting going to reputational surveys), the new THE ranking takes into account 13 indicators (grouped in five categories). Considering this higher number of indicators, and considering that the weight of reputational surveys is significantly lower, it is likely that fluctuation will be lower than in the QS ranking. Time will tell…

Are international rankings assessing teaching quality?  

Another frequently mentioned criticism of the existing international rankings is that they put too much emphasis on assessing research and neglect the teaching function of the university. Since the new THE ranking more than doubled the number of indicators, it is likely that the assessment will correspond better with the complex mission of universities.

If we look at the new methodology, this indeed seems to be the case. The teaching function now constitutes 30% of the whole score and is based on 5 indicators; in the QS ranking, it was based on only 2 indicators (an employer survey and the staff-to-student ratio). How such a weighted composite works mechanically is sketched after the list below.

The 5 indicators are:

  • Reputational survey on teaching (15%)
  • PhD awards per academic
  • Undergraduates admitted per academic (4.5%)
  • Income per academic (2.25%)
  • PhD and Bachelor awards
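
Mechanically, such a pillar is just a weighted sum of normalised indicator scores. The sketch below illustrates this under two stated assumptions: each indicator is normalised to a 0-100 scale, and the two weights the article does not list are explicit placeholders, chosen only so that the pillar adds up to the stated 30%.

```python
# A minimal sketch of a weighted composite pillar score, assuming each
# indicator is normalised to a 0-100 scale. Three weights come from the
# text; the two marked PLACEHOLDER are not given in the article and were
# chosen only so that the pillar sums to the stated 30%.
teaching_weights = {
    "reputational survey on teaching": 15.0,
    "PhD awards per academic": 4.0,           # PLACEHOLDER weight
    "undergraduates admitted per academic": 4.5,
    "income per academic": 2.25,
    "PhD and bachelor awards": 4.25,          # PLACEHOLDER weight
}
assert sum(teaching_weights.values()) == 30.0  # teaching pillar = 30% of total

def pillar_score(scores: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted average of indicator scores, scaled back to 0-100."""
    return sum(scores[name] * w for name, w in weights.items()) / sum(weights.values())

# Hypothetical institution scoring 60 on every teaching indicator:
example_scores = {name: 60.0 for name in teaching_weights}
print(pillar_score(example_scores, teaching_weights))  # -> 60.0
```

Whatever the exact weights, the critique that follows concerns what the five underlying indicators actually measure, not this arithmetic.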

A closer look at these 5 indicators, however, raises the question of how much they are actually related to teaching.

  1. First of all, one can wonder whether a reputational survey really measures the quality of teaching or whether it is in reality another proxy for research reputation. Colleagues and peers around the world often do have some idea of the quality of research in other institutions, but is it likely that they can seriously evaluate the teaching there? Apart from the institutions where they graduated or worked themselves, it is unlikely that they can give a fair judgment about teaching quality elsewhere, in particular in institutions abroad.
  2. Two other questionable indicators for the quality of teaching are the number of PhDs awarded and the number of PhD awards per academic. In the Netherlands, and in many other countries in continental Europe and elsewhere, these say much more about the research quality and research intensity of an institution than about its teaching quality.
  3. The indicator ‘undergraduates admitted per academic’ seems to be the same as the old student-to-staff ratio indicator. Assuming here that a lower number is better, this again benefits research-intensive institutions more than other institutions. Research-intensive institutions employ relatively many academics, but many of them will have research-only contracts. Yet in this indicator they will still produce a higher score on teaching quality.
  4. ‘Income per academic’ is also a dubious indicator. Assuming this concerns the average annual income of academics, there is no reason to believe that higher salaries benefit the quality of teaching in particular; it could be argued that salaries are nowadays more related to research quality and productivity than to teaching quality. And if income per academic refers to the external financial resources that an academic attracts, it would be even more an indicator of research intensity.

Although the new THE ranking methodology seems to put more emphasis on teaching, on closer inspection this is rather misleading. All this again shows how difficult it is to measure teaching quality. But as long as the international rankings do not address teaching quality sufficiently, they cannot fulfil their function as a transparency instrument for international students.