Can institutions be compared using standardised tests?

At the EAIR conference in Copenhagen last month, I attended an interesting presentation by Trudy Banta, a professor of higher education and vice chancellor for planning and institutional improvement at Indiana University-Purdue University Indianapolis (IUPUI). Her question was clear: can institutions really be compared using standardised tests?

Policymakers seem determined to assess the quality of HEIs using standardised tests of student learning outcomes. Yet Dr. Banta claims that such tests do not provide data for valid comparisons and, moreover, that they largely measure things other than institutional performance:

Comparing test scores sounds easy, but are today’s standardised tests of generic skills capable of yielding data for valid comparisons? Twenty years of research conducted in the US using these tests indicates they are not.

It is, however, not the use of standardised tests as such that Banta criticised, but their use to compare institutions. Research in the US showed that scores on such tests were highly correlated with SAT scores (correlations up to 0.9). In other words, 81% of the variance between institutions could be explained by previous schooling. This means that the residual 19% is explained by a whole range of other factors (e.g. motivation, family situation), only one of them being institutional performance!
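To make the arithmetic explicit: the share of between-institution variance accounted for by prior attainment is the squared correlation, so the 0.9 correlation translates directly into the 81% figure:

$$ r^2 = 0.9^2 = 0.81, \qquad 1 - r^2 = 0.19 $$

leaving at most 19% of the variance for everything else, of which institutional performance is only one component.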

Banta therefore concludes that:

standardized tests of generic intellectual skills do not provide valid evidence of institutional differences in the quality of education provided to students.

Moreover, we see no virtue in attempting to compare institutions, since by design they are pursuing diverse missions and thus attracting students with different interests, abilities, levels of motivation, and career aspirations.

This provides food for thought for many national policymakers, but also for some international actors. I’ve written a few times about the OECD AHELO project, in which the OECD tries to differentiate between institutions on the basis of an assessment of student learning outcomes.

AHELO focuses on an assessment of students’ knowledge and skills towards the end of a three- or four-year degree programme. The assessment will be based on a written test of students’ competencies and will be computer-delivered.

The feasibility study is expected to demonstrate the feasibility – or otherwise – of comparing HEIs’ performance from the perspective of student learning rather than relying upon research-based measures which are currently being used across the globe as overall proxies of institutional quality.

AHELO can thus partly be seen as a response to the research-biased rankings and league tables. The OECD is presently working on a feasibility study. Whatever the result of this, one thing is certain: such a (near-)global assessment is going to be an enormously complex exercise. And therefore a very expensive one…

It’s reasonable to expect that AHELO results will also correlate strongly with prior learning, just as was the case in the US. PISA results might therefore explain AHELO results better than institutional performance does. If institutional performance explains only a few per cent of the variance between institutions, comparing higher education institutions on that basis will be impossible, and all that money might be better spent elsewhere. I would hope the OECD takes these American research findings into account in the feasibility study.
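A minimal sketch of the kind of check one would hope to see in the feasibility study, using invented numbers (none of this is AHELO or PISA data; all variable names and magnitudes are assumptions for illustration): if exit scores are dominated by entry scores, prior attainment absorbs most of the between-institution variance and the institutional effect accounts for only a sliver.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical institution-level data: mean entry scores (PISA-like) and
# mean exit scores (AHELO-like). All magnitudes are invented for illustration.
n = 200
entry = rng.normal(500, 50, n)       # prior attainment at entry
inst_effect = rng.normal(0, 10, n)   # institutional "value added"
other = rng.normal(0, 15, n)         # motivation, family situation, ...
exit_score = 0.9 * entry + inst_effect + other

# Share of between-institution variance in exit scores explained by prior
# attainment: the squared correlation coefficient.
r = np.corrcoef(entry, exit_score)[0, 1]
print(f"r = {r:.2f}; prior attainment explains {r**2:.0%} of the variance")

# The institutional effect itself accounts for only a few per cent.
share_inst = np.var(inst_effect) / np.var(exit_score)
print(f"institutional effect accounts for roughly {share_inst:.0%}")
```

Under these assumed magnitudes, entry scores absorb well over 80% of the between-institution variance, echoing the US finding that prior schooling, not institutional performance, drives the comparisons.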
