
Retraction: Average Journal Impact Factor metric

Some weeks ago we added a controversial new metric to the statistics on our author profiles: the average Impact Factor of the journals reviewed for by the author in question. Response to this metric was varied but predominantly negative; many interpreted it to suggest that one's quality as a reviewer is inherently tied to the prestige of the journals one reviews for. Add to this the widespread opinion that the Impact Factor fails even to do what it claims (provide a reliable proxy for journal quality) and you have a recipe for impassioned comments across Twitter and the blogosphere at large.

In the face of reasonable complaint we must concede that this 'Average Journal Impact Factor' is too easily misconstrued and potentially detrimental to reviewer behaviour. The true relevance of the average is difficult to evaluate because it depends on more context than a single number can convey (such as the author's field of research). As of the publication of this blog post, this Impact Factor-derived statistic has been removed from our author profiles.

We would like to thank all those who contributed to the discussion of this statistic, both on Twitter and on our blog. Publons ultimately exists to serve the needs of the global peer-review community, so we are more than happy to take your advice on board where we can. The online discussion of review metrics has produced many interesting suggestions, and we look forward to developing more and better indicators of peer-review quality and quantity.

So what makes a good metric? We believe its calculation should be transparent and reproducible where possible. This is difficult for us to achieve because we handle sensitive data (viz. the details of blind peer review). The confidential nature of this information is in fact a large part of what prompted us to create the metrics we display on author profiles: they give an overview of an author's reviewing activity without disclosing which papers they have reviewed. Where possible we are happy to provide data in a format that allows for reuse but does not compromise our relationship with our users.
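To illustrate what we mean by transparent and reproducible, a profile statistic can be defined by a short, openly documented calculation that anyone holding the same records could rerun. The sketch below is purely illustrative (the Review structure and its fields are hypothetical, not our production code):

```python
from dataclasses import dataclass

@dataclass
class Review:
    journal: str    # journal reviewed for (never published per review)
    verified: bool  # whether the review passed one of our checks
    year: int

def verified_review_count(reviews: list[Review]) -> int:
    """A transparent, reproducible statistic: the number of verified reviews.

    Anyone holding the same review records can recompute this value,
    and it discloses nothing about which papers were reviewed.
    """
    return sum(1 for r in reviews if r.verified)
```

A statistic defined this way can be audited without ever exposing the underlying confidential reviews.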

We're also very aware that we must prevent people from 'gaming' any metrics we provide. The metrics we currently provide are most easily manipulated by adding fraudulent reviews, and we protect against this by encouraging our users to verify their reviews where possible. Currently around 90% of reviews on Publons have been verified by i) official integrations, ii) human processing of 'review receipts' submitted through our automated channel, or iii) confirmation from the editor who commissioned the review. These channels allow us to have faith in our data and our users.
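To make the gaming protection concrete, one simple way to model these channels and gate a statistic on them might look like the sketch below. The names here are hypothetical illustrations, not a description of our internal systems:

```python
from enum import Enum

class VerificationChannel(Enum):
    INTEGRATION = "official integration"
    REVIEW_RECEIPT = "review receipt processed by staff"
    EDITOR_CONFIRMED = "confirmed by commissioning editor"
    UNVERIFIED = "unverified"

def counts_toward_metrics(channel: VerificationChannel) -> bool:
    # Only reviews verified through one of the three trusted channels
    # contribute to profile statistics, which raises the cost of
    # inflating a metric with fraudulent entries.
    return channel is not VerificationChannel.UNVERIFIED
```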

Furthermore, we recognise that peer review is not a standardised activity: it differs between journals, disciplines, and individuals. Where necessary we will highlight the effect these differences may have on any metrics Publons provides.
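One way such differences could be surfaced is to report a number alongside a field-level baseline rather than on its own. The following sketch is a hypothetical illustration of that idea, not a committed design:

```python
from statistics import median

def with_field_context(value: float, field_values: list[float]) -> str:
    """Report a metric next to the median for the author's field,
    so readers can see how much of the number is a field effect."""
    baseline = median(field_values)
    return f"{value:.1f} (field median: {baseline:.1f})"

# e.g. with_field_context(12.0, [3.0, 5.0, 14.0, 20.0])
# -> "12.0 (field median: 9.5)"
```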
