Content of review 1, reviewed on August 04, 2017

The paper uses the acceptance rate as a proxy for the quality of reviewers and cites Callaham et al. (1998) and Kurihara and Colletti (2013) in this respect. However, the results of the first cited reference show that “Reviewer average quality ratings correlated poorly with the rate of recommendation for acceptance”. Moreover, since Publons considers a manuscript accepted once it has at least been published online (a DOI has been assigned), it is not possible to establish whether the report of a given reviewer was the cause of a paper's rejection. Therefore, it is not possible to reliably establish the acceptance/rejection rate of a reviewer. For that to be possible, papers would have to be reviewed by a single reviewer and the editor would always have to follow the reviewer's recommendation.

The acceptance/rejection rate may say something about the quality of the rejected papers, or even about the journals that rejected them, because some journals have very high rejection rates, but it says little about the quality of the reviewers who made such rejections. As a consequence of these serious limitations, some of the conclusions of this paper, such as the statement that “this study encourages journal editors the recruitment of young and women scholars because these researchers are more committed with the peer-review process”, do not seem to be based on sound science.

Source

    © 2017 the Reviewer (CC BY 4.0).

References

    Ortega, J. L. 2017. Are peer-review activities related to reviewer bibliometric performance? A scientometric analysis of Publons. Scientometrics.