Predictably, peer review fraud has hit the news again. This time Springer has announced the retraction of 64 articles from 10 of its journals after it was discovered that the peer review process for those articles was "compromised". Another retraction announcement from SAGE followed quickly thereafter, adding a further 17 articles to the growing list of retractions caused by fabricated review reports submitted via fake or fraudulent email addresses.
Why does this keep happening?
There are two key issues at play here. The first is that editors struggle to find enough qualified reviewers. At its core, this is a symptom of the lack of incentives facing peer reviewers. In today's publish-or-perish world, it's unsustainable to continually ask time-pressured researchers to put aside their own research to do peer review when that review work goes unrewarded. Giving credit for peer review is just the first step in that regard: it is also up to funding agencies and universities to value peer review contributions in their assessments. Until that time, peer reviewing will always struggle to compete with the other priorities of a researcher.
The result is a high review-invitation rejection rate - a constant source of frustration for academic editors. Editors often have to find ten or more qualified potential reviewers just to secure a couple of reviews. An editor's personal network can only stretch so far, leading busy editors to gamble on unknown and potentially unsuitable reviewers. Some editors are lucky enough to be able to treat the list of author-suggested reviewers as a list of reviewers not to invite; for others, those suggestions are the strongest lead to finding suitable reviewers.
These cases of reviewer fraud are the more unscrupulous among us taking advantage of busy editors who are struggling to find suitable reviewers willing to say 'yes' to a review invitation. The playbook is simple: provide a real researcher's name with a fake email address under your control, wait for the editor to send a review invitation to that address, and promptly write a glowing review of the obviously-great science you've done.
So what can journals actually do about it?
That's the second major issue - the lack of tools available to help busy editors detect fraud. It's easy to chastise the editors for not doing their job after the fact, but how are they actually meant to detect when an email address is fake? That was unlikely to be the subject of their PhD.
The changes that are typically announced after the discovery of peer review fraud are that journals will adopt a higher level of vetting of potential reviewers, will stop accepting author-suggested reviewers, and/or will discourage reviewers from using non-institutional email addresses.
Each of these solutions has limitations. More vetting of course takes more resources, and unless the editor can establish the validity of an email address via Google (or unless they select reviewers exclusively from their journal's database and personal networks), they are still at risk of fraudulent reviewers bringing their journal into disrepute. Without author-suggested reviewers, the job of finding the necessary number of potential reviewers is even harder, especially for less-experienced editors and particularly niche topics. Finally, demanding an institutional email address is inconvenient for reviewers: based on Publons data, about 1 in 3 reviewers use personal email addresses (e.g. Gmail, Yahoo) when they review. Unlike personal email addresses, institutional/work email addresses can quickly fall out of date.
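To see why the "institutional addresses only" rule is such a blunt instrument, consider a minimal sketch of that screen. Everything here is hypothetical - the domain list and function name are illustrative, not any journal's actual screening logic:

```python
# Illustrative only: a naive screen that rejects reviewers using
# free webmail domains. The domain list is a hypothetical sample.
FREE_MAIL_DOMAINS = {"gmail.com", "yahoo.com", "hotmail.com", "outlook.com"}

def blocked_by_policy(email: str) -> bool:
    """Return True if an 'institutional addresses only' policy
    would reject this address."""
    domain = email.rsplit("@", 1)[-1].lower()
    return domain in FREE_MAIL_DOMAINS

# Roughly 1 in 3 legitimate reviewers would trip this check...
print(blocked_by_policy("reviewer@gmail.com"))       # True
# ...while a fraudster who registers a plausible-looking
# institutional domain sails straight through.
print(blocked_by_policy("j.smith@uni-example.edu"))  # False
```

The sketch makes the asymmetry plain: the policy inconveniences honest reviewers on personal addresses while doing nothing to verify that an institutional-looking address actually belongs to the named researcher.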
A stronger solution - what we at Publons are working towards - is to provide editors with reviewer vetting tools based on a publisher-independent database of verified reviewers (and their verified email addresses). These tools allow editors to, for instance, search for and contact reviewers based on their verified review history and the email addresses used for those reviews, as well as check a particular reviewer name and email address against the database to be alerted to any inconsistencies. Importantly, editors can use reviewers who are not yet in their own journal's reviewer database with peace of mind.
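The consistency check described above might look something like the following sketch. The data, names, and return values are hypothetical - this is not Publons' actual tooling, just an illustration of the idea of comparing a submitted name/email pair against verified records:

```python
# Hypothetical verified-reviewer records: name -> verified email addresses.
VERIFIED_REVIEWERS = {
    "Dr. A. Example": {"a.example@university-example.edu"},
}

def check_reviewer(name: str, email: str) -> str:
    """Compare a submitted (name, email) pair against verified records."""
    verified = VERIFIED_REVIEWERS.get(name)
    if verified is None:
        return "unknown"    # no verified history to compare against
    if email.lower() in verified:
        return "verified"   # address matches this reviewer's verified history
    return "mismatch"       # known reviewer, unrecognised address: alert the editor

print(check_reviewer("Dr. A. Example", "a.example@university-example.edu"))  # verified
print(check_reviewer("Dr. A. Example", "dr.a.example@gmail.com"))            # mismatch
```

The "mismatch" case is the one that matters for fraud: the playbook of pairing a real researcher's name with an attacker-controlled address is exactly what a verified-history lookup surfaces before the invitation goes out.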
Better in-house fraud detection by publishers will certainly help, but it is unlikely to be enough to stop peer review fraud on its own. Time-pressured editors are still going to find themselves in situations where they must resort to gambling on unknown reviewers they have little information about. If we want to stop peer review fraud, we need to provide editors with quick and effective tools that help them find motivated, trustworthy reviewers.
We know giving credit for peer review helps too: early data from publisher partnerships show that reviewers accept more review requests, respond to review requests faster, and return their review assignments faster. A world where reviewers find peer reviewing more worth their while is a world where editors far less often need to resort to unknown reviewers.
For ongoing reporting of the retractions due to this form of peer review fraud, do check out Retraction Watch.