On Wednesday 27 September, we surpassed 200,000 researchers getting recognition for their peer review and editorial work on Publons. A few weeks before that, the one millionth peer review was added to the platform.
Thanks to all of you who have been a part of our journey so far!
These milestones are really exciting for a few reasons:
- It signals a clear shift in the academic landscape: the wider community is prioritizing the need to recognize and strengthen peer review to bring trust and efficiency to research communication.
- We are thrilled to be helping so many of you showcase your previously hidden peer review work and be rewarded for it.
- We can now dig into a wealth of information about the peer review ecosystem to better understand and improve it.
With that in mind, we want to celebrate these milestones by sharing some of our findings with the community, in the hope of shedding some light on who is shouldering the load of peer review globally.
Introducing the Review Distribution Index.
What is it?
The Review Distribution Index (RDI) measures the distribution of reviews performed across a population. In other words, it shows us how the peer reviewing workload is spread. The idea comes from the Gini coefficient, commonly used in economics to measure income or wealth inequality across a population.
Why did we develop it?
In line with Publons' general mission of strengthening peer review to improve research, and the theme of 'transparency' for Peer Review Week 2017, we wanted to better understand how the global reviewing workload is spread.
Answers to questions such as:
- Who is doing the lion's share of peer review?
- Where are they?
- How much do they do and when do they do it?
...will help the research community make more informed decisions about how to improve the peer review process.
How does it work?
The index uses a Lorenz curve, plotting the cumulative share of peer reviews performed against the cumulative share of reviewers in a population.
In the example below you can see that a 45° line represents perfect equality. This means that everyone in the population writes an equal share of all the peer reviews.
For example, in a population of 100 people who perform 100 peer reviews, a perfectly equal distribution means each person does 1 review. Perfect equality is represented by an RDI coefficient of 0.
If the line is curved, the distribution of reviews performed is, to some degree, skewed across the population. A perfectly unequal distribution (imagine 1% of the population completing 100% of the reviews) is represented by an RDI coefficient of 1.
The lower the coefficient, the more evenly distributed the reviewing workload is; the higher the coefficient, the more skewed the workload.
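To make the calculation concrete, here is a minimal sketch in Python (our own illustration of the general Lorenz-curve approach, not Publons' actual implementation) that computes an RDI-style coefficient from a list of per-reviewer review counts, approximating the area under the Lorenz curve with trapezoids:

```python
def rdi(review_counts):
    """Gini-style Review Distribution Index: 0 = perfectly even workload;
    values approaching 1 = a tiny fraction of reviewers do nearly all reviews."""
    counts = sorted(review_counts)  # ascending order traces the Lorenz curve
    n, total = len(counts), sum(counts)
    if n == 0 or total == 0:
        return 0.0
    area, cumulative, prev_share = 0.0, 0, 0.0
    for c in counts:
        cumulative += c
        share = cumulative / total              # cumulative share of reviews
        area += (prev_share + share) / (2 * n)  # trapezoid under the Lorenz curve
        prev_share = share
    # RDI = twice the area between the 45-degree equality line and the Lorenz curve
    return 1 - 2 * area

print(rdi([1, 1, 1, 1]))    # perfectly equal workload: 0.0
print(rdi([0, 0, 0, 100]))  # one reviewer does everything: 0.75 (tends to 1 as n grows)
```

Note that with a finite population, even perfect inequality yields a coefficient slightly below 1 (specifically 1 - 1/n), which is why the second example prints 0.75 rather than 1.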
What did we find?
Using Publons peer review data, we explored the distribution of the reviewing workload across different populations. Check them out below (but make sure to read the caveats to our findings listed at the bottom of this post).
Distribution of reviews performed in 2015 and recorded on Publons, by reviewers within countries
Note: The chart above shows the distribution of reviews performed within each country, not between countries.
In each country we measured, a relatively small proportion of reviewers (10%-20%) were responsible for half of the reviews done in their country.
Interestingly, a very small proportion of reviewers in Italy and The Netherlands were responsible for about half of the reviews done in those countries.
This could be for a number of reasons, including the possibility that these countries had a small number of prolific early Publons adopters :)
Distribution of reviews performed and recorded on Publons within Bio-med and Non-biomed fields in 2015
The chart above shows that the top 20% of reviewers in Biomedicine reviewed 50% of the papers. By comparison, the top 20% of reviewers for Non-Biomedicine reviewed 60% of all non-biomedical papers in the data captured on Publons.
Could it be that the larger sample size for non-biomed papers skewed the distribution more? Are a smaller number of reviewers being asked to do more of the work in non-biomed? Or is it something else again?
By benchmarking the distribution of review work, different stakeholders in the research community can better understand and inform their approach to peer review.
For example, journals may notice they are relying on a small cohort of reviewers to perform the lion's share of the work and decide to spread the load more evenly.
Conversely, a journal may prefer a small, highly specialized group of reviewers undertaking the majority of the peer review for their publications.
As neutral players, we think it is up to each research community to decide whether it prefers to rely on a small cohort of specialized reviewers or to spread the load more evenly amongst a greater number of experts. Whatever each community prefers, the Review Distribution Index will be a useful benchmarking tool to inform such policies, particularly when combined with other data such as review quality metrics or turnaround times.
The RDI could also be used as an indicator of the overall health of scholarly publishing. If the global RDI coefficient is high, it may suggest that not enough scholars are contributing, placing increasing stress on research's quality control system.
If you're interested in the Review Distribution Index or the data that sits behind it, get in touch at: firstname.lastname@example.org. We'd love to collaborate on a future version of this work and on developing the index further.
Caveats:
- Our dataset carries some bias, as we are only studying the efforts of researchers on Publons.
- We used a simplified method of calculating the integral, using values grouped by percentile. Calculating on the raw data before grouping it may prove more accurate.
- The country information is based on user-added data, and many users have likely not fully completed their Publons profiles. We expect this to improve in the future.
- The research field data available is organized using the Scopus All Science Journal Classification (ASJC) system; this information is not included for every review, but we hope to expand it in the future.
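To illustrate the grouped-integral caveat, the sketch below (a hypothetical Python example of ours, not Publons' code) compares the coefficient computed on raw per-reviewer counts with the same computation run on decile totals. Grouping discards inequality inside each group, so the grouped approximation slightly understates the true value:

```python
def lorenz_coefficient(values):
    """Gini-style coefficient via trapezoidal area under the Lorenz curve."""
    vals = sorted(values)
    n, total = len(vals), sum(vals)
    area, cum, prev = 0.0, 0, 0.0
    for v in vals:
        cum += v
        share = cum / total
        area += (prev + share) / (2 * n)
        prev = share
    return 1 - 2 * area

def group_totals(values, k=10):
    """Sum sorted values into k equal-sized groups (len must divide evenly by k)."""
    vals = sorted(values)
    size = len(vals) // k
    return [sum(vals[i * size:(i + 1) * size]) for i in range(k)]

# Hypothetical reviewer population: many occasional reviewers, a few prolific ones.
counts = [1] * 50 + [2] * 30 + [10] * 15 + [50] * 5
exact = lorenz_coefficient(counts)                  # computed on raw counts
approx = lorenz_coefficient(group_totals(counts))   # computed on decile totals
# approx <= exact: grouping smooths away the inequality within each decile.
```

The effect is usually small when groups are narrow, but it is one reason the raw, ungrouped calculation would be the more accurate choice.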