Classifying European Institutions for Higher Education

I’m on my way back to The Hague, returning from the EAIR conference in Copenhagen. Although lots of interesting new studies and findings were presented there (some of which I’ll discuss in later posts), I actually want to talk about a conference I attended last July in Berlin.

This conference (Transparency in Diversity – Towards a European Classification of Higher Education Institutions) presented the results from the second stage of the project Classifying European Institutions for Higher Education, a project that might turn out to have a major impact on European higher education policy. This project was initiated in 2005 (see this previous post) and is now supported by, among others, the European Commission (DG Education) and the German Hochschulrektorenkonferenz. It’s run by an international team led by Frans van Vught.

The project can be seen as a response to two trends (at least, that’s my interpretation). First of all, there is the emergence of the European higher education area, the objective of the Bologna process. If there’s one space, we need to know what types of institutions are occupying that space, and hence we need a classification or typology.

Secondly, there is the proliferation of rankings and league tables. As I’ve discussed many times before, these rankings present a very uni-dimensional view of the contemporary higher education institution. Basically, they only look at the – science-heavy – traditional research university. In doing so, they neglect the quality of a very wide range of other institutions which might be very good at the things they are supposed to do. Here one can think of mono-disciplinary institutions (e.g. colleges of fine arts; schools of economics and business), teaching-oriented institutions (like the American liberal arts colleges) or more professionally and vocationally oriented institutions (like the German and Austrian Fachhochschulen, the Dutch Hogescholen, etc.).

A multidimensional classification of European higher education institutions can, on the one hand, create more transparency in European higher education and, on the other, clarify which institutions can be compared with each other (so we can compare apples with apples and pears with pears). If you are interested in how they intend to do this, I suggest you have a look at the presentations from the conference. See Frans van Vught’s presentation (PDF) to get a better idea of the background of the project and have a look at Frans Kaiser’s presentation (PDF) for the technical aspects of such a multidimensional classification.

What exactly the classification will look like is not yet clear. If it remains limited to the web tool and the resulting radar graphs, I expect the effects to be rather limited. The question is whether the various stakeholders related to the project will ultimately define real categories of institutions (like the old Carnegie classification did). This, however, might give the project a more political character. Even though the project team stresses that it will not create a hierarchical classification, it will be interesting to see whether some categories come to be perceived as more prestigious than others.

Nevertheless, the classification project seems to be widely supported by institutions throughout Europe and their representative organisations. The feeling that Europe needs to create more transparency is widely shared and at the same time, many institutions are looking for benchmarking opportunities with like-minded institutions. After all, comparisons with Harvard, Oxford and Yale are not very useful for most higher education institutions in Europe…

Interactive Higher Education Policy [or HigherEd 2.0]

Both the Department of Education, Employment and Workplace Relations (DEEWR) of the Australian Commonwealth Government and the Department for Innovation, Universities and Skills (DIUS) of the British Government are looking for new ways to organise and coordinate their higher education sectors. To this end, both have started similar initiatives. Both rely heavily on input from the field and the broader society to get new ideas, and probably to gain more support for their future policies. Yet, there are some differences as well.

In its Review of Higher Education, the Australian government has asked a small expert panel to write a Higher Education Discussion Paper. This Discussion Paper (PDF, 4 MB) was released in June and addresses a wide range of questions structured around nine key challenges and issues for higher education in Australia over the coming decades.

· Meeting labour market and industry needs
· Opportunities to participate in higher education
· The student experience of higher education
· Connecting with other education and training sectors
· Higher education’s role in the national innovation system
· Australia’s higher education sector in the international arena
· HE’s contribution to Australia’s economic, social and cultural capital
· Resourcing the system
· Governance and regulation

After this release, the Expert Panel invited the community to react to the paper and send in submissions before 31 July. This has led to 300 submissions responding to the discussion paper. Responses have been submitted by interested individuals, Vice-Chancellors, leaders of intermediary organisations, student unions, etc. A range of HE experts and researchers also submitted their reactions, and even some HE bloggers (who of course are also experts; for instance Andrew Norton (submission 91) and Steven Schwartz (submission 66)). The Review Panel will deliver its report on priority actions by the end of October 2008, and its final report by the end of the year. I’ll keep an eye on it…

In the UK, the Secretary of State for DIUS, John Denham, claimed that the UK needs to decide what a world-class HE system of the future should look like and what it should seek to achieve. He too is asking the public to participate in this Higher Education Debate. Denham first asked eight experts to present their advice and opinions on eight different themes:

· Part-time studies in Higher Education
· Demographic challenge facing Higher Education
· Teaching and student experience
· International issues in Higher Education
· Intellectual property and research benefits
· Academia and public policy making
· Research careers
· Understanding institutional performance

These contributions will lead to a formal public consultation on a policy framework for HE in the autumn. They also form the input for discussions on these eight topics with the wider public. And the discussions are conducted… yes, on a blog. On the Future of Higher Education Blog, readers have the opportunity to comment on the opinions of the experts.

The Australian example has shown that there are plenty of HE stakeholders and experts willing to spend some time drafting future HE plans (I feel sorry for all the staff at DEEWR that have to go through them all). In some ways their process resembles the consultation processes of the European Commission (for instance here, for the EIT).

What the input of the English public will be remains to be seen. So far, comments on the blog are few – and not always very constructive. However, the discussion opportunity has only been online since July.

Even though the outcomes of these processes are not yet clear, I welcome these new ways of policy making. While such initiatives would fit well in the (consensus-oriented) Dutch political culture, the use of the Internet in policy making and formulation is – to my knowledge – still rare there. Maybe an idea for Dutch higher education…?

Weird Science: the genetic map of Europe

Correlation between Genetic and Geographic Structure in Europe

by Oscar Lao et al. (2008)

Full Text Available in Current Biology; See also this article in the IHT

Maybe not that weird, but definitely interesting. Biologists from Erasmus University Rotterdam and elsewhere have constructed a genetic map of Europe. They investigated genotype data from 2,514 individuals belonging to 23 different subpopulations, widely spread over Europe. Although they found only a low level of genetic differentiation between subpopulations, the existing differences were characterized by a strong continent-wide correlation between geographic and genetic distance. This resulted in the following genetic map of Europe.

Genetic Map of Europe

The IHT explains: the genetic map of Europe bears a clear structural similarity to the geographic map. The major genetic differences are between populations of the north and south (the vertical axis of the map shows north-south differences, the horizontal axis those of east-west). The area assigned to each population reflects the amount of genetic variation in it.

The map also identifies the existence of two genetic barriers within Europe. One is between the Finns (light blue, upper right) and other Europeans. It arose because the Finnish population was at one time very small and then expanded, bearing the atypical genetics of its few founders. The other is between Italians (yellow, bottom center) and the rest. This may reflect the role of the Alps in impeding free flow of people between Italy and the rest of Europe.

But the study provides more than just an interesting picture. The authors explain that understanding the genetic structure of the European population is important, not only from a historical perspective, but also for the appropriate design and interpretation of genetic epidemiological studies.
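For those curious about the mechanics: the “correlation between geographic and genetic distance” the authors report can be illustrated with a toy computation. Everything below is invented for the sketch – three fictional subpopulations, crude Euclidean “geographic” distances and a one-dimensional “genetic” score – whereas the actual study worked with genome-wide genotype data and proper distance measures.

```python
import math
from itertools import combinations

# Toy data: (latitude, longitude) plus a fake one-dimensional "genetic
# score" per subpopulation -- all numbers invented for illustration.
populations = {
    "North":  ((60.0, 25.0), 0.9),
    "Centre": ((50.0, 10.0), 0.5),
    "South":  ((41.0, 12.0), 0.1),
}

def geo_distance(p, q):
    # Plain Euclidean distance in degrees -- a crude stand-in for
    # great-circle distance, good enough for a sketch.
    return math.dist(p, q)

def pearson(xs, ys):
    # Pearson correlation coefficient between two equal-length lists.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Build the two pairwise-distance lists and correlate them.
geo, gen = [], []
for a, b in combinations(populations, 2):
    (pa, ga), (pb, gb) = populations[a], populations[b]
    geo.append(geo_distance(pa, pb))
    gen.append(abs(ga - gb))

r = pearson(geo, gen)  # positive: farther apart -> more different
```

With these made-up numbers the correlation comes out clearly positive, which is the qualitative pattern the study found across the whole continent.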

More rankings: Shanghai Jiao Tong, Forbes (& AHELO?)

Tomorrow, the new 2008 Academic Ranking of World Universities will be officially published. Not surprisingly, it’s an almost all-American affair. It’s rather interesting that the publication of the Shanghai Jiao Tong rankings goes almost unnoticed, especially if you compare it to the publication of the Times Higher Education Supplement/QS World University Rankings (the THES-QS rankings 2008 will be published on 9 October).

This is exactly the strength of the SJT ranking. After all, universities are robust organisations and don’t change a lot in a year’s time. I guess it therefore corresponds with reality that the top 10 of 2008 is exactly the same as that of 2007. Actually, not much has changed at all (although I of course did notice that the University of Sydney – my former employer – entered the top 100; the top 500 list is here).

2008(2007) University
1 (1)   Harvard University
2 (2)   Stanford University
3 (3)   University of California – Berkeley
4 (4)   University of Cambridge
5 (5)   Massachusetts Inst Tech (MIT)
6 (6)   California Inst Tech
7 (7)   Columbia University
8 (8)   Princeton University
9 (9)   University of Chicago
10 (10)   University of Oxford

The main critique of the SJT rankings is that they only give an indication of a university’s research quality. They have only one proxy for teaching quality, and that one doesn’t say much about teaching quality at all. I have already pointed to some alternatives to these research-biased rankings and league tables, for instance the new ranking being developed by CCAP (the Center for College Affordability and Productivity).

This last one has now been published by Forbes Magazine. And yes… the criteria are very different from the ones we are used to:

  1. Listing of Alumni in the 2008 Who’s Who in America (25%)
  2. Student Evaluations of Professors from RateMyProfessors.com (25%)
  3. Four-Year Graduation Rates (16 2/3%)
  4. Enrollment-adjusted numbers of students and faculty receiving nationally competitive awards (16 2/3%)
  5. Average four-year accumulated student debt of those borrowing money (16 2/3%)
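The arithmetic behind such a weighted ranking is simple to sketch. The snippet below is a hypothetical illustration – the component scores for the fictional institution are invented, and the key names are mine – showing only how the five weights above combine into a single composite score.

```python
# Weights as published (three criteria at 16 2/3% each).
WEIGHTS = {
    "whos_who_alumni":      0.25,
    "student_evaluations":  0.25,
    "four_year_graduation": 1 / 6,
    "competitive_awards":   1 / 6,
    "low_student_debt":     1 / 6,
}

def composite_score(scores: dict) -> float:
    """Weighted sum of component scores, each on a 0-100 scale."""
    return sum(WEIGHTS[k] * scores[k] for k in WEIGHTS)

# A fictional institution with made-up component scores:
example = {
    "whos_who_alumni": 90, "student_evaluations": 80,
    "four_year_graduation": 95, "competitive_awards": 85,
    "low_student_debt": 70,
}

score = composite_score(example)  # 0.25*90 + 0.25*80 + (95+85+70)/6
```

The weights sum to 100%, so the composite stays on the same 0–100 scale as the components – which is what lets Forbes turn five very different criteria into one ordered list.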

And what’s the result?

2008 University
1 Princeton University
2 California Institute of Technology
3 Harvard University
4 Swarthmore College
5 Williams College
6 United States Military Academy
7 Amherst College
8 Wellesley College
9 Yale University
10 Columbia University

Compared with the SJT rankings, it is especially the liberal arts colleges and the military academies that stand out in the Forbes ranking. The high-quality liberal arts colleges in the US (and elsewhere) are unfortunately lacking in nearly all international rankings. The reason for this is, of course, again that these rankings are so research-biased.

Another thing that I noticed after looking through the rest of the list is the relatively low standing of the public research universities: the University of Virginia is the first one at 43, the University of North Carolina at Chapel Hill is at 66 and UC Berkeley at 73. This is probably due to another flaw in most rankings, namely that they measure the quality of the graduates without looking at the quality of the inputs. For more criticism of this ranking, see the comments on Vedder’s article in Inside Higher Ed and the critical contribution of Patricia McGuire.

This challenge of actually measuring the added value provided by the university is taken up by the OECD’s AHELO project: assessing learning outcomes in higher education (sometimes referred to as the PISA for higher education). This exercise is still in its early stages; the OECD is currently studying the feasibility of such an exercise. And although the OECD explicitly does not want to promote it as a ranking, it might provide an alternative to the league tables.