Content of review 1, reviewed on June 01, 2021

Overall statement of the article

The paper presented by Abramo and D'Angelo at the 14th International Society of Scientometrics and Informetrics Conference, held in Vienna, 15–19 July 2013, and subsequently published in Scientometrics (2014) 101: 1129–1144, is very interesting. The objective of this paper was to operationalize the concept of productivity for the specific context of research activity and to propose a measurable form of productivity. The authors presented an indicator, Fractional Scientific Strength (FSS), which in our view is thus far the best approximation of a measure of productivity. They also illustrated the methodology for measuring FSS in the evaluation of performance at various levels of analysis: individual, field, discipline, department, institution, region and nation. Finally, they compared Italian university ranking lists under two definitions of productivity: FSS and the average number of publications per researcher. The paper contains 4 keywords: Research productivity, FSS, Research evaluation, University rankings.

The work is divided into the following parts: (i) Abstract, (ii) Introduction, (iii) Productivity in research activities, (iv) Total factor research productivity, (v) Labor productivity in research activity and the FSS, (vi) Labor productivity at the individual level, (vii) Labor productivity in a specific field, (viii) Labor productivity of multi-field units, (ix) Labor productivity of multi-field units based on FSS_R, (x) Labor productivity of multi-field units based on FSS_S, (xi) Comparison of university ranking lists based on different research productivity measures, (xii) Discussion and conclusions, and (xiii) Conclusions. The paper has three tables and no figures. There are 39 references, dating from 1926 to 2013. The references are internationally evaluated and published in peer-reviewed journals with important impact factors.

In conclusion, the authors maintain: “For the large part of the objectives and contexts where evaluation of research performance is conducted, productivity is either the most important or the only indicator that should inform policy, strategy and operational decisions. We thus issue a two-fold call to the scholars in the subject: first, to focus their knowledge and skills on further refining the measurement of the FSS indicator in contexts of real use; second, to refrain from distribution of institutions’ performance ranking lists based on invalid indicators, which could have negative consequences when used by policy-makers and research administrators.” Specifically, the authors advance in their conclusion that what bibliometrics has so far proposed as indicators and methods to measure research performance at the microeconomic level is not appropriate. In their view, the h-index fails to take proper account of scientists' citations, the number of co-authors and their order of signature.
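
For orientation, here is a minimal sketch of the researcher-level form of the indicator, FSS_R, as it is commonly presented in the authors' related work; the notation below is a paraphrase introduced for illustration and should be checked against the paper itself:

\[
FSS_R \;=\; \frac{1}{w_R}\cdot\frac{1}{t}\sum_{i=1}^{N}\frac{c_i}{\bar{c}}\, f_i
\]

where w_R is the average yearly cost (salary) of researcher R, t the number of years of work in the period observed, N the number of publications of R in that period, c_i the citations received by publication i, \bar{c} the average citations of all cited publications of the same year and subject category (field normalization), and f_i the fractional contribution of R to publication i (the co-authorship share). This sketch captures why the authors regard FSS as superior to a simple publications-per-researcher count: output is normalized by its cost and by field citation behavior, and each author is credited only fractionally.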

Overall strengths of the article

The aim of the paper is very clear. The title is informative and relevant. The keywords included in the article are appropriate. The topic of the work is of general interest, and the article reflects the present state of knowledge, with a literature that is sufficiently critical and internationally evaluated. The authors have used 39 references dating from 1926 to 2013. The authors described the general aspects of their study very well. The methodology is very well developed. The variables have been well defined. The paper has very interesting results. It is a very informative paper.

First of all, the authors start from the traditional method used to evaluate the work of a researcher and then move to a new form of productivity evaluation; in this sense, what is already known is made clear. The research question was clearly defined on the basis of evaluations done previously. Considering what is already known about the evaluation of a scientist's output, the research question was justified against the established method based on the mean normalized citation score. This article, built around the research question of how research productivity is defined and measured, presents well-structured tables and has no typos, missing references or apparent inconsistencies. In the conclusion, the authors cite references in support of their point of view.

This study has limitations: the great majority of the most popular indicators, and the rankings based on their use, present two fundamental limits, namely the lack of normalization of the value of output and the lack of classification of scientists by field of research. In closing, the authors suggest that beyond the indicator of the productivity of research units, decision-makers could also draw on indicators of unproductive researchers, of highly cited publications and, lastly, of the dispersion of performance within and between research units.

Overall statement

For the past two decades, the study of the determinants of the scientific production of professors and researchers has been particularly prolific in the literature. The paper would be more impactful if the authors broadened their references to include more authors, which would reduce the weight of their own previous work in the paper. Indeed, of the 39 references cited in the paper, the authors' previous work represents a third, or 13 references. They relied solely on the number of publications to support their thesis on research productivity. The reviewer thinks it would, at the very least, also be interesting to take into account several other indicators, including the number of patents.

According to the authors, “To date in fact there is no international standard for classification of scientists and, we are further unaware of any nations that classify their scientists by field at domestic level, apart from Italy.” While it is true that there is as yet no international convention or universal resolution defining the quantitative indicators for measuring the scientific productivity of researchers, and thereby facilitating their classification, there is nevertheless, despite criticism, a certain convergence on the number of publications, the number of citations and the h-index for classifying researchers. In addition, France, through its National Council of Universities, has set up a whole system to classify its researchers. It seems that the thesis advanced here by the authors requires revision in light of a re-reading of national classification systems.
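
As a point of reference for this convergence, the standard definition of the h-index (due to Hirsch) can be stated as follows; the set notation is ours, introduced only for illustration:

\[
h(R) \;=\; \max\bigl\{\, h \in \mathbb{N} \;:\; \bigl|\{\, p \in P_R : c_p \ge h \,\}\bigr| \ge h \,\bigr\}
\]

where P_R is the set of publications of researcher R and c_p the number of citations received by publication p. The authors' criticism is visible directly in this definition: nothing in it normalizes for field, co-authorship or cost of labor.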

Source

    © 2021 the Reviewer.

References

    Abramo, G., D'Angelo, C. A. 2014. How do you define and measure research productivity? Scientometrics, 101, 1129–1144.