Critics say a government review of scientists and research institutions coordinated by the National Agency for the Evaluation of Universities and Research Institutes (ANVUR) is using flawed criteria and will do little to reward the best Italian scientists.
For researchers in fields where good bibliometric data are available, ANVUR uses three criteria: the number of papers published in the past 10 years, the number of citations, and the h-index, a measure that captures both output and impact (it is the largest number h such that h of a researcher's papers have each been cited at least h times). The law governing ANVUR prescribes that only applicants who score above the national median in their field on at least two of the three criteria can advance to the next stage of evaluation, a more qualitative scrutiny by committees. Many scientific groups have protested against what they see as a mindless and unjust application of numbers.
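The screening rule reduces to a short computation. The following is a minimal Python sketch, not ANVUR's actual implementation: the function names (h_index, passes_bibliometric_screen) and all numeric values are hypothetical, chosen only to illustrate the h-index definition and the two-of-three median rule described above.

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    h = 0
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

def passes_bibliometric_screen(scores, medians):
    """Two-of-three rule: advance if the applicant exceeds the national
    median in their field on at least two of the three indicators."""
    indicators = ("papers", "citations", "h_index")
    above = sum(scores[k] > medians[k] for k in indicators)
    return above >= 2

# Purely illustrative numbers, not real ANVUR data.
citations = [42, 18, 11, 9, 7, 5, 3, 1]  # per-paper citation counts
applicant = {
    "papers": 35,
    "citations": sum(citations),
    "h_index": h_index(citations),
}
field_medians = {"papers": 30, "citations": 80, "h_index": 8}

print(h_index(citations))                                    # 5
print(passes_bibliometric_screen(applicant, field_medians))  # True (2 of 3)
```

In this hypothetical case the applicant falls below the median h-index but clears the other two thresholds, so the rule would still admit them to the committee stage.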
ANVUR President Stefano Fantoni states that the criteria will be used only as indicators, and that the committees can still pass researchers who fail to meet quantitative criteria—although they will have to justify their decision. But others say the law does not provide that escape, and it’s not clear what criteria the committees would use.
For the humanities and social sciences, which aren't adequately covered by bibliometric databases, ANVUR compiled lists of 16,000 journals whose papers count toward the evaluation. Those lists have drawn heavy criticism because they include around 200 titles of questionable scientific standing, among them glossy publications like Yacht Capital, religious journals, magazines about food and drink, a trade journal for pig breeders, and supplements of broadsheet newspapers.
ANVUR's other arm, which evaluates universities and research institutions, faces similar criticism over its methodology and transparency. Next year, that evaluation will produce a ranking that will partly determine how Italy allocates public research funding.
Not everyone thinks ANVUR's methods are so bad, but some say ANVUR should look abroad for models, such as the United Kingdom's Research Excellence Framework, or rely on expert evaluators rather than metrics. "We need to look at what people are doing in those countries that have a long evaluation tradition, such as the U.K. and the U.S., if we want to set up clear and effective rules the majority of scientists will be prepared to accept and share," says Francesca Pasinelli, the director-general of Telethon, a nonprofit foundation that screens about 450 research proposals in medicine and biology every year.
See ScienceInsider for the full article by Laura Margottini.