Content of review 1, reviewed on July 19, 2017

In this paper the authors propose the “R-index as a simple way to quantify a scientist's efforts as a reviewer”. The authors believe “it encourages strong reviews for leading journals within one's field and allows editors to manage and measure the reviewers they use”. The R-index is based on the number of papers reviewed and the total number of words written in each review, multiplied by the square root of the journal's impact factor. This product is weighted by the editor's feedback on each individual review, given as a score of excellence ranging from 0 (poor quality) to 1 (exceptionally good quality); a sketch of this calculation, as I read it, is given after the references below. The authors use the word count as a proxy for the time spent on each review, and the impact factor as a proxy for the impact of the prospective paper as well as for the reviewer's prestige and standing in the field.

Not only does an index based on the flawed impact factor (Sevinc, 2004; Yu and Wang, 2007; Falagas and Alexiou, 2008; Martin, 2016) make absolutely no sense, but the idea of putting editors in the position of giving poor quality marks to those who have generously agreed to contribute to the advancement of science would have very negative consequences for the peer review process, because it would increase the number of reviewers who decline to take part in reviewing. This is especially true because one of the aspects evaluated by the index is punctuality (whether the review is returned within or beyond the deadline set by the editor). The authors seem to forget that academics already have too many daily duties to fulfill, so they would be very annoyed by the idea of receiving negative ratings for an activity done “pro bono”. That suggestion would only make sense if reviewers were paid for the time spent on reviewing.

References cited in this review:

Falagas, M. E., & Alexiou, V. G. (2008). The top-ten in journal impact factor manipulation. Archivum Immunologiae et Therapiae Experimentalis, 56(4), 223-226.
Martin, B. R. (2016). Editors' JIF-boosting stratagems – Which are appropriate and which not? Research Policy, 45(1), 1-7.
Sevinc, A. (2004). Manipulating impact factor: an unethical issue or an Editor's choice? Swiss Medical Weekly, 134(27-28), 410.
Yu, G., & Wang, L. (2007). The self-cited rate of scientific journals and the manipulation of their impact factors. Scientometrics, 73(3), 321-330.
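For concreteness, the description above suggests an index of roughly the following form (this is my reading of the proposal as summarized here, not necessarily the exact formulation, which is given in the paper under review):

    R = \sum_{i=1}^{n} q_i \, w_i \, \sqrt{IF_i}

where n is the number of manuscripts reviewed, w_i is the word count of review i (the proxy for time spent), IF_i is the impact factor of the journal that requested review i, and q_i is the editor's score of excellence for that review, between 0 (poor) and 1 (exceptionally good). The objections raised above apply to any index of this form: the flawed impact factor enters as a multiplicative driver of the score, and the q_i term puts editors in the position of grading unpaid volunteers.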

Source

    © 2017 the Reviewer (CC BY 4.0).

References

    Cantor, M., Gero, S. 2015. The missing metric: quantifying contributions of reviewers. Royal Society Open Science, 2: 140540.