The manner in which academic research is published has remained remarkably static relative to the upheaval currently underway in most other publishing industries. In this paper we examine the incentive structure that caused this outcome, outline a strategic approach to creating a system that encourages collaboration and faster scientific development, and introduce publons.com, our implementation of these ideas.


Andrew Preston;  Daniel Johnston

  • In the paper, you say that

    However, there is no inherent reason that only one or two experts in a field should determine the validity of a paper, ...

    The journals for which I've reviewed explicitly do go out and find experts for the task. They expect the reviews to be informed. Will your contributed reviews be weighted? Are you thinking you'll do some sort of StackExchange thing where contributed reviews percolate to the top, based on others' ratings?

    btw: when writing this comment, I wished I could quote directly from the paper by inserting a link so the quote could be read in context. Might be a useful feature.

    Reviewed by
    Ongoing discussion (10 comments)
    • Andrew R. H. Preston | 7 years, 3 months ago

      Good point, Robert.

      Some reviews are better than others (e.g., because the reviewer is an expert or put a lot of effort into their review) and they should be weighted accordingly.

      We are grappling with this problem right now. For example, what should the net score of publon:2636 be, given its reviews? Right now we calculate the mean weighted by the number of endorsements each review has received:

      $$ S_q = \frac{\sum_r{e_r q_r}}{\sum_r{e_r}}, $$

      where $S_q$ is the net quality score and $e_r$ and $q_r$ are the number of endorsements and the quality score for each review, $r$, of the Publon.

      That's not perfect. As with anything, it can be gamed. How would you rank reviews?
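      The endorsement-weighted mean above is simple to compute. A minimal sketch in Python (the function name and the example endorsement/score data are hypothetical, not Publons' actual implementation):

```python
# Sketch of the endorsement-weighted mean S_q = sum(e_r * q_r) / sum(e_r).
# The review data below is made-up example input.

def net_quality_score(reviews):
    """Compute the net quality score of a publon from its reviews.

    `reviews` is a list of (endorsements, quality_score) pairs,
    one pair per review r.
    """
    total_endorsements = sum(e for e, _ in reviews)
    if total_endorsements == 0:
        return None  # no endorsements yet: the weighted mean is undefined
    return sum(e * q for e, q in reviews) / total_endorsements

# Three reviews: (endorsements e_r, quality score q_r)
reviews = [(4, 8.0), (1, 3.0), (5, 9.0)]
print(net_quality_score(reviews))  # (4*8 + 1*3 + 5*9) / 10 = 8.0
```

      One design consequence worth noting: an unendorsed review contributes nothing to the score, which is exactly the gaming surface the comment alludes to.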

    • Rajiv Agarwal | 5 years, 4 months ago

      Quality to who? Quality of a review depends on the context. A critical review may be rated highly by the editors but low by the authors. On the other hand, a gentle review of a low quality paper with a lot of suggestions to the authors for improvement may be rated low by the editors and high by the authors. The timeliness, thoroughness, and thoughtfulness are all important, and editors are in a unique place to rate reviewers. However, quality is a latent variable and multidimensional. It should not, in my view, be reduced to a few simple calculations.

    • Andrew R. H. Preston | 5 years, 4 months ago

      Quality to who? Quality of a review depends on the context. A critical review may be rated highly by the editors but low by the authors. On the other hand, a gentle review of a low quality paper with a lot of suggestions to the authors for improvement may be rated low by the editors and high by the authors.

      This is a really good point.

      I would extend your example to more "objective" measures of a review -- such as its length -- that may or may not give us insight into its quality. For example, a long review might indicate that the reviewer did a comprehensive analysis of the paper. On the other hand it might just indicate that the reviewer is verbose.

      It's important to consider the context for any measure. The editor and authors should have a special standing when it comes to measuring the quality of a review. I think it's our job to provide the context -- and to facilitate the discussion -- that makes it possible to interpret their ratings.

    • Rajiv Agarwal | 5 years, 4 months ago

      The measurement of quality is a "latent variable" or one that cannot be observed. It is probably measurable but, like IQ, will require validated instruments. When I see a good review I know it. I can also recognize the ones at the extremes--highly critical or highly complimentary. The reviewer has the difficult task of defending the authors and defending science. It is difficult to find reviewers who understand the concept of dual commitment. Both defending the authors and defending science are important. In trying to defend science the reviewer should still be constructive in comments to the authors. Anonymity should not be a license for derogatory or destructive comments.

    • Andrew R. H. Preston | 5 years, 4 months ago

      I agree on all counts. This is why it's probably right for the editor and authors to make an estimate for the quality of a review and for their ratings to have a relatively high standing; they are best placed to know the good reviews when they see them.

      Looking beyond the review, what about article quality? Is that also a latent variable, or can we come up with some objective measure beyond the Impact Factor?

    • Rajiv Agarwal | 5 years, 4 months ago

      Again the quality depends on the question: quality to who? For the journals and editors, it is the number of citations that matter. High citations, high impact factor, high ratings. For the authors, quality may mean publishing in a high impact journal and more grants. But we have to ask a more fundamental question: does the article really move the field forward? Does it change our way of thinking, diagnosing or prognosticating? Few articles can do that. Since I do patient oriented research, I ask the question: does it make any difference to the way I treat patients, think about a disease, or tell my patients what might happen to them in the long term? If the answer is no, the citations, the impact, the grants all fall by the wayside. Unfortunately, the answer to the question of how an article impacts the field can only be given many years after the article is published. And that is well beyond the impact factor years...but in my view it is the real impact of a research publication. Unfortunately, this is even harder to measure than the quality of a review.

    • Andrew R. H. Preston | 5 years, 4 months ago

      Unfortunately, the answer to the question of how an article impacts the field can only be given many years after the article is published. And that is well beyond the impact factor years...but in my view it is the real impact of a research publication. Unfortunately, this is even harder to measure than the quality of a review.

      We should probably start with some variant of the question you ask: "does the article really move the field forward?"

      PeerJ asks reviewers this question after the review is complete. I think it's a really good idea.

    • Rajiv Agarwal | 5 years, 4 months ago

      I agree that the reviewer should assess the novelty of the article and communicate this to the editor. Many experienced reviewers do this, but not all.

    • Andrew R. H. Preston | 5 years, 4 months ago

      What do you think of the small scoring experiment we're running? Ratings for quality and significance of the manuscript.

      You can set a score for any manuscript you've reviewed by clicking on the progress bar on your dashboard: https://publons.com/dashboard/reviewer/#review-history

    • Rajiv Agarwal | 5 years, 4 months ago

      This potentially unblinds the reviewer to the authors. The quality and significance of a manuscript (at least as many view it) are correlated with the IF of the journal. Not perfect, but close. Sometimes you can have real hits in lower impact factor journals. These are easy to figure out by looking at their citations. I think Google Scholar here would do better than an individual rater.

  • Significance Comment


    The first thing to say is that publons is to be welcomed. It's part of an evolving community of ideas around revolutionising academic publishing that has been in the making for a long time. It's a pretty exciting time right now - less so for the incumbent Big Publishers, more so for researchers in the area and a number of interesting startups and projects around reshaping the journal.

    The second thing to say is that where we are in the academic publishing industry is due to a whole set of pressures that serve to empower the incumbents and disable the forces of change. While this paper correctly says that Open Access is not the solution, it is, however, one of the levers being pulled that will open the door to a whole wave of innovation - mostly centered on startups, and their investors, that see opportunities that Big Publishing can't see. Open Access is good - not for editors, reviewers, authors, institutions or funders, since there is effectively no real change to the status quo - but because it opens up what was previously hidden away in a process that has been the same for several centuries. Now the discussion is alive about who pays, and for what, and what the value delivered is; where the labour comes from, and who pays for that; and, crucially, where the next innovations are going to come from. Open Access will prove bad for Big Publishing, however, because it signals the start of the end for the traditional journal.

    Publons, as discussed in this paper, is about fixing the lack of transparency in academic publishing by changing peer review from a closed, hidden process to an open one. Just as crowdsourcing taps into the opinions of the crowd, and crowdfunding leverages the economic contributions of many, Publons aims to open up peer review. "Peer review is flawed" says this article, and it may be right - but it might also be that this is not the argument that is the most powerful to motivate change - or the one that will create change in a direction that will significantly improve the way academic publishing works.

    There are lots of issues to examine here, but I'm going to start with what Andrew says is wrong with peer review:

    1. Only one or two experts determine the validity of a paper

    2. The review process takes too long

    3. Peer review is not transparent

    Only one or two experts determine the validity of a paper. This might be true or it might not, depending on what journal you look at. For example, in the journal I run there may only be two or three peer reviews for a given paper, but each paper passes through the hands of an editor-in-chief, an editor, possibly an associate editor, and all of these people input into the process of taking a submission and turning it into a published article. But to cast this as a criticism devalues 'expertise': in reality most papers that our editors see will be already known to them - they may have been presented at conferences, discussed at seminars, presented at workshops - and in any case, an experienced editor will be aware of the major research themes and concentrations of expertise in research groups or projects, or will have read other papers around the topic. Editors, and the reviewers they use, have lots of expertise and it's easy to discount it. The real issue here, I think, is leveraging the network to enhance the process. So, for example, we have the idea of reviewing reviewers, rating reviewers, and commenting on their output, which - as I understand it - is one of the central contributions of Publons. Easy to throw the reviewer out with the bathwater, though, and replace it with a version of peer review that weakens what can be an extremely effective process.

    The review process takes too long. I agree. But while this might be about the seemingly pre-internet workflows that most of Big Publishing uses, it's probably more about the fact that folks are really busy. Reviewers who review in a timely fashion tend to get more reviews to do and they reach capacity. The process of peer review, to maintain a level of quality acceptable to authors, editors and their communities, requires committed, meticulous reviewing and that takes time. There are ways to speed that process up, and it will be interesting to see whether Publons is effective in doing that and creating workflows that work with the constraints.

    Peer review is not transparent. It isn't. There certainly is a need to open up the process of discussion, commentary and opinion about articles - whether published or not - to the community. Publons does that by allowing academics to "comment on, or ask questions about, the papers they already rely on to do their research". I like this idea - that Publons can start to disrupt the academic publishing process not by publishing, but by starting to decouple and enhance a peer review process which has until now been tied closely to the journal itself.

    This is perhaps the most interesting aspect of Publons - and perhaps the most difficult to enact since, as Andrew says, it requires a change to the incentive structure that currently exists for academics. Publons does this by providing a way for academics to build reputation through the enhancement of the personal/professional profile that is generated for a paper they have authored. The profile contains standard per-paper or per-researcher metrics plus the reputation created by engaging in discussions about articles, projects and contributions to the field. Other sites - ResearchGate or academia.edu, for example - also offer the opportunity to build reputation - and perhaps the outcome will be that there is a super-reputation aggregator (a Klout for academics) that scans the various sites for key indicators of reputation and influence. However it pans out, the experience design of sites like Publons will be critical: ask too much, in the wrong way, and it's a turn-off.

    Quality Comment


    The authors say in conclusion:

    "In this paper we have raised a hypothesis for how to change the incentive structures faced by academics but have largely ignored the three other major players in the ecosystem we hope to disrupt: journals, libraries, and funding agencies. While it may be possible to marginalize journals and libraries, our ideas for the future of research will go nowhere if funding agencies and universities continue to rely entirely on conventional publication records to allocate jobs and funding."

    There are a lot of issues here that I might discuss, but I suspect that the major one is around what 'disruption' means. One trajectory for the academic publishing industry is that, like many other industries, small, agile publishing startups will enter it, fulfilling the publisher role, having correctly understood the disruptive power of the internet and built their offer around it: they will offer enhanced utility at a lower price. Another trajectory is that the industry is disrupted by what has been started by the OA movement, which forces Big Publishing to change, and change quickly. Their market muscle will mean that everyone else is playing catchup. A third is that, as Publons might demonstrate, disruption will come from an innovation in one part of the process - it could be peer review or it could be around how journals are managed and led - but change is in the air.



    Peter Thomas is founder of the Manifesto Group, Executive Director of the Leasing Foundation, Visiting Senior Fellow at The University of Melbourne, and Visiting Professor at Brunel University, London. He is editor-in-chief of the international research journals Personal and Ubiquitous Computing (PUC), and Communications in Mobile Computing (ComC), both from Springer, and the new Internet of Things research journal, RIOT, an independent, open access altmetrics-based journal. Peter’s current interests are in the use of mobile, social and open source technologies in higher education to deliver engaging student experiences, and in the use of reputation media to reshape academic publishing.


    Probably not for me to judge. Maybe Publons contributors will be able to tell me.

    Reviewed by
    Ongoing discussion
  • Quality Comment

    This is a scholarly article which clearly articulates a novel idea about the future of academic publication.

    Reviewed by
    Ongoing discussion
  • One question regarding the first phase of publons.com, which is to encourage viewers to ask questions about already published journal articles. Wouldn't it be much easier in such case to email the corresponding author, instead of posting the question on publons.com?

    In para. 5, line 2, the correct word should be "credible" instead of "credulous".

    Reviewed by
    Ongoing discussion (3 comments)
    • Daniel Johnston | 7 years, 9 months ago

      Hey Chun,

      Thanks for your input. I've migrated it to the discussion section as you're right -- it is probably more suited here.

      The exact problem we're trying to solve is that email discussions with authors are not available to the community as a whole. That's why we're trying to route the discussion through Publons. Our hope is that it can be at least as convenient as email -- if not more.

      I think credulous is the word we were going for. We were trying to say that because peer review has taken place we can afford to be a little bit more accepting of what's written (relative to e.g. a blog post).

    • Chun Y Cheah | 7 years, 8 months ago

      Hi Daniel,

      Thanks for your reply.

      Unlike 'credible', the word 'credulous' in fact has a negative connotation. While the former means "capable of being believed; believable" [1], the latter actually means "willing to believe or trust too readily, especially without proper or adequate evidence; gullible" [2]. Based on the context you've described, certainly the correct word should be the first and not the other!

      The reason why I raised the issue of contacting the author directly is supported by my experience in trialing this website. I would not have been aware that you have replied to my comment, if not for curiosity bringing me back to Publons and locating Publon:1. Due to this disconnect, our replies are almost a month apart. Similarly, a situation may arise wherein the author might be unaware of comments or reviews posted. Can I suggest that the author (or reviewer) be notified by email whenever a review (or reply) gets posted?

      Ben and Shriv have just presented the concept of this site in our weekly group meeting, and one interesting issue raised for discussion by our team was that by their very nature, a public discussion/review, e.g. via publons, would differ greatly from a private discussion e.g. by email. In a private correspondence, one would have the opportunity to perhaps discuss (1) unpublished materials and ideas; and (2) in a more collaborative (i.e. less formal) manner. Both would not be practical in a public arena as this. In such a case, the discussions not being "available to the community as a whole" is by design and is not a problem per se.

      As such, it would be interesting to clarify by slightly rewording your goal for the first phase of this trial concept, on how exactly publons plans to serve as an effective complement to -- and not so much to replace -- existing methods of discussion.

      Lastly, thanks for the opportunity to share my comments. Cheers.

      References [1] credible. Dictionary.com. Dictionary.com Unabridged. Random House, Inc. http://dictionary.reference.com/browse/credible (accessed: April 11, 2013). [2] credulous. Dictionary.com. Dictionary.com Unabridged. Random House, Inc. http://dictionary.reference.com/browse/credulous (accessed: April 11, 2013).

    • Benjamin Wylie-van Eerd | 7 years, 8 months ago

      I'll weigh in on this thorny issue... As used in the text, the word credulous describes the manner in which readers approach published literature. As written, it implies that the behaviour of the readers is a believing one (and that the literature is credible). If credible were used in place of credulous (with no other alterations to the text), it would imply that the readers are what is credible - rather than the literature. So I think credulous is correct here.

      In regards to emailing authors, we're ready to go to make this happen, but we have been holding off until we have decided on a critical part of the service - what reviews should look like! You can weigh in on that here: https://publons.com/revating/

      Thanks for all the input from your club! There were a lot of great points raised : )

All peer review content displayed here is covered by a Creative Commons CC BY 4.0 license.