European Institute of Innovation and Technology: Go!

"Excellence needs flagships! That is why Europe must have a strong European Institute of Technology, bringing together the best brains and companies and disseminating the results throughout Europe." That is how José Manuel Durão Barroso introduced the European Institute of Technology about two and a half years ago. Today was the inaugural meeting of the first Governing Board of the EIT.

The Board’s 18 high-level members, coming from the worlds of business, higher education and research all have a track record in top-level innovation and are fully independent in their decision-making. The Board will be responsible for steering the EIT’s strategic orientation and for the selection, monitoring and evaluation of the Knowledge and Innovation Communities (KICs).

After discussions on whether the European version of MIT would become a virtual institute, a brick-and-mortar institution or something in between… After a study claimed that a European Institute of Technology was actually not necessary… After feasibility studies had been neglected…

After the decision for the establishment of the EIT was formally taken and published in the Official Journal of the European Union in April earlier this year… After its name was changed to European Institute of Innovation and Technology… After beautiful Budapest won the race and became the official location of the EIT in June… And after the EIT’s first Governing Board was officially appointed on 30 July 2008…

It is now time to get to work!

The only thing still missing is a real logo. As long as there is none, I’ll just keep on using the one I have been using for the past few years. Looks familiar, doesn’t it?

Education at a Glance 2008

Today the Organisation for Economic Co-operation and Development published its annual report ‘Education at a Glance’. Education at a Glance presents data and analysis on education; it provides a rich and up-to-date range of indicators on education systems in the OECD’s 30 member countries and in a number of partner economies. This year’s highlights are:

Meeting a rapidly rising demand for more and better education is creating intense pressures to raise spending on education and improve its efficiency. Recent years have already seen considerable increases in spending levels, both in absolute terms and as a share of public budgets: The total amount of public spending on educational institutions rose in all OECD countries over the last decade, on average by 19% between 2000 and 2005 alone, and in Greece, Hungary, Iceland, Ireland and Korea by more than twice that amount.

Another visible indication of the efforts governments are making can be seen in the fact that, over the last decade, the share of public budgets devoted to education grew by more than one percentage point – from 11.9% in 1995 to 13.2% in 2005.

The full report and links to the statistics can be found at the EAG 2008 website.

Can institutions be compared using standardised tests?

At the EAIR conference in Copenhagen last month I attended an interesting presentation by Trudy Banta, a professor of higher education and vice chancellor for planning and institutional improvement at Indiana University-Purdue University. Her question was clear: Can institutions really be compared using standardised tests?

Policymakers seem determined to assess the quality of HEIs using standardised tests of student learning outcomes. Yet Dr. Banta claims that such tests do not provide data for valid comparisons and, on top of that, measure things other than institutional performance:

Comparing test scores sounds easy, but are today’s standardised tests of generic skills capable of yielding data for valid comparisons? Twenty years of research conducted in the US using these tests indicates they are not.

It was, however, not the use of standardised tests as such that Banta criticised, but their use to compare institutions. Research in the US showed that scores on such tests were highly correlated with SAT scores (correlations up to 0.9). In other words, 81% of the variance between institutions could be explained by previous schooling. The residual 19 percent is explained by a whole range of other factors (e.g. motivation, family situation, etc.), only one of which is institutional performance!
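The step from a correlation of 0.9 to 81% of variance explained is just the coefficient of determination: the share of variance a predictor accounts for is the square of the correlation coefficient. A minimal sketch of the arithmetic (the 0.9 figure comes from the US research cited above; everything else here is illustrative):

```python
# Variance explained by a predictor is the square of the correlation (r^2).
r = 0.9                    # reported correlation between test scores and SAT scores
explained = r ** 2         # share of between-institution variance due to prior schooling
residual = 1 - explained   # share left over for ALL other factors combined

print(f"explained: {explained:.0%}, residual: {residual:.0%}")
# → explained: 81%, residual: 19%
```

Institutional performance is only one of the factors hiding inside that residual 19%, which is why Banta argues the tests cannot isolate it.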

Banta therefore concludes that:

standardized tests of generic intellectual skills do not provide valid evidence of institutional differences in the quality of education provided to students.

Moreover, we see no virtue in attempting to compare institutions, since by design they are pursuing diverse missions and thus attracting students with different interests, abilities, levels of motivation, and career aspirations.

This provides food for thought for many national policy makers, but also for some international actors. I’ve written a few times about the OECD AHELO project. In this project, the OECD tries to differentiate between institutions on the basis of an assessment of the learning outcomes.

AHELO focuses on an assessment of students’ knowledge and skills towards the end of a three or four-year degree programme. The assessment will be based on a written test of the competencies of students, and will be computer delivered.

The feasibility study is expected to demonstrate the feasibility – or otherwise – of comparing HEIs’ performance from the perspective of student learning rather than relying upon research-based measures which are currently being used across the globe as overall proxies of institutional quality.

AHELO can thus partly be seen as a response to the research-biased rankings and league tables. The OECD is presently working on a feasibility study. Whatever the result, one thing is certain: such a (near-)global assessment is going to be an enormously complex exercise. And therefore a very expensive one…

It’s reasonable to expect that results here will also correlate strongly with prior learning, just as was the case in the US. PISA results might therefore explain AHELO results better than institutional performance does. If the AHELO assessment explains only a few percent of the variance between institutions, comparing higher education institutions will be impossible, and all that money might better be spent otherwise. I would hope the OECD takes these American research findings into account in the feasibility study.