This is our third post in a series discussing the way we evaluate scientific research. In our first post we examined the impact factor, which seems to be the best immediate measure we have of a publication's value, and showed that it is most likely an indirect measure of a publication's timeliness (or relevance) rather than its quality. In our second post, we strengthened our findings by extending the analysis to all scientific disciplines. In this post we discuss alternative forms of evaluating research - namely, generating more direct metrics during the peer-review process.
In our earlier posts, we discussed how the impact factor is an indirect measure of two different attributes of a publication - quality and context. Publishers who promote impact factors tend to imply that publications in high impact factor journals are high in both quality and context.
However, a paper published in a high impact factor journal could score high on either dimension (or both) without the reader knowing which was the paper's strength. The spread could be anywhere from low quality and high context, to high quality and low context, to high quality and high context.
Figure: The likely spread over quality and context for papers in high impact factor journals
Impact factor versus direct peer evaluations
Using impact factor alone makes sense in a historical context; it was the only measure we had that generalizes across all the different journals in use around the world.
This may no longer be true.
We are now in a position where it is technologically possible to organize and collate myriad evaluations of a particular paper. This opens up the ability to build superior indicators of value that aren't reliant on where the paper was published. Rather than indirect (and imperfect) measures like impact factor, we can use direct peer evaluations - like the current peer review process, but amplified.
Such an approach makes it possible to deconvolute quality and context, and to provide a much better indicator of a publication's worth. Does the paper represent a timely study (i.e. done at the outset of an emergent field)? Or does the paper present a rigorous study within a fairly steady field?
Figure: Deconvoluting quality and context onto two axes
This is what we considered as we set out to design a better peer-review system: a system that allows us to clearly quantify a paper's quality and impact (and isn't onerous on the reviewer).
We thus see a modern peer review system as having two key aspects:
- Ratings of specific dimensions of the paper (e.g. quality and context) to allow for immediate and accurate analysis
- A written component to unlock (and summarize) the nuances of a complex work
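To make the two aspects concrete, here is a minimal sketch of what such a review record and its aggregation could look like. This is purely illustrative - the `Review` class, the 1-10 scale, and the `aggregate` helper are assumptions, not a description of Publons' actual implementation:

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class Review:
    """One peer review: two dimension ratings (1-10) plus free text."""
    quality: int   # rigor of the work itself
    context: int   # timeliness/relevance within the field
    comments: str  # written component capturing the nuances

def aggregate(reviews):
    """Average each dimension separately, so a paper's profile
    (e.g. high quality / low context) stays visible rather than
    being collapsed into a single number."""
    return {
        "quality": mean(r.quality for r in reviews),
        "context": mean(r.context for r in reviews),
    }

reviews = [
    Review(quality=9, context=4, comments="Rigorous methods; mature field."),
    Review(quality=8, context=5, comments="Solid, careful replication."),
]
print(aggregate(reviews))  # {'quality': 8.5, 'context': 4.5}
```

Keeping the dimensions separate is the point: a single averaged score would recreate the same ambiguity the impact factor suffers from.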
Alternative forms of peer review: a survey
Experimenting with more modern forms of peer review is at the heart of what we're doing with Publons, and we have a couple of different implementations on the go.
We started with a simple text box with a suggested template, and a stand-alone rating system. It keeps the basic format of the traditional review, and allows reviewers to construct their review however they see fit. (All reviews on Publons to this point have used this method.)
Some early feedback we've received is that writing a review is hard work. Reviewers know what they want to say, but it takes time to turn those coherent thoughts into coherent words on a page.
In response to this feedback, we're experimenting with a different approach: a novel, interactive review experience, where the acts of rating and reviewing a paper are unified. The reviewer is asked to rate the paper on two dimensions - quality and significance - and to add comments to justify their evaluations.
The theory is that this format will make for a more efficient form of review, where all the useful information of a review is communicated more quickly and easily than with the traditional format.
We're starting this experiment with a survey. Which of these more modern forms of peer review would you prefer?
See more info about the two variations and share your thoughts here: