Last week, the Dutch newspaper de Volkskrant reported on an interesting study of the distribution of research funding by the Netherlands Research Council (NWO). Loet Leydesdorff (one of the researchers who introduced the Triple Helix concept) and Peter van den Besselaar – both of the Amsterdam School of Communications Research at the University of Amsterdam – conducted a study of the Council's grant allocation decisions in the Humanities and Social Sciences.
Van den Besselaar and Leydesdorff tested whether the grant decisions correlate with the applicants' past performance in terms of publications and citations, and with the results of the peer review process organized by the Council.
In their paper they show that the Council successfully distinguishes grant applicants with above-average performance from those with below-average performance, but within the former group no correlation could be found between past performance and receiving a grant. In fact, when the best-performing researchers who were denied funding are compared with those who received it, the rejected researchers significantly outperformed the funded ones. Within the top half of the distribution, neither the review outcomes nor past performance measures correlate positively with the Council's decisions.
The authors conclude with some questions for further research. They suggest a network analysis of applicants, reviewers, committee members, and Council board members. This might answer the question whether funding correlates with the visibility of the applicants within these networks. After all, in the social process of awarding grants, many factors besides scholarly quality play a role: bias, old-boys' networks and other types of social networks, bureaucratic competencies, dominant paradigms, and so on, all shape selection outcomes.
If my reading of the paper is correct, it may also point to a discrepancy between the grant decision makers and the international academic community. Given that metrics (past performance) and peer review largely emerge from international networks, and the grant distributors make decisions that contradict both, what does that tell us about the Council members' involvement in these international networks?
The paper will be published later this year in the journal Research Evaluation.