Last week I attended the Sydney Conference on "scholarly communication beyond paywalls". One of the five core threads was all about peer review - what it's for, whether it should be done pre- or post-publication, what's wrong with it currently, and how we can improve it. This is a topic close to our hearts here at Publons, so it was a fun three days of workshop discussions.
It was nice to see unanimous support for greater recognition for peer review, but the main takeaway was that journals giving credit for peer review is not enough on its own: we also need funding agencies and universities to value peer review contributions in their assessments. By considering peer review activity in promotion and funding applications, universities and funding agencies can make the act of reviewing more worthwhile to researchers - time spent reviewing would then count for something in a researcher's career.
The expected effect is that it would amplify what we've already seen with giving credit for peer review - higher review invitation acceptance rates, faster reviewer response times, and greater care put into the reviews. Faster, better peer review means faster, better science.
A great aspect of the conference was that we were able to make good progress on both fronts with those in attendance. The Australian Research Council expressed interest in using peer review activity in their assessments, and described exactly what they would need to make that work. On the university side, a few attendees are now looking at pulling data from the Publons API into their universities' research output / researcher profile systems, which paves the way for peer review activity to become standard in promotion applications. Both of these developments are likely to have a real impact on the incentives facing reviewers - an impressive outcome for a three-day conference.
The byline for the peer review thread was "Peer review, pre- or post-publication", but I was surprised to find there was little support for a post-publication peer review future. A major concern was that most papers would simply go unread and unevaluated! Similarly, many were worried that a fully open peer review model (e.g. PeerJ or GigaScience) would create a positive review bias, especially among early-career researchers wary of criticising eminent researchers in their field. Most were in favour of at least publishing the reviews (while keeping the reviewers' identities anonymous) alongside the published article.
A few radical ideas for peer review were discussed, but none gathered much in the way of support - highlighting that many see peer review as a core part of research in need of a few tweaks, not a full overhaul. Just how representative the group was (~100 attendees) is up for debate, however.
It was also cool to compare notes with fellow academic startup founders Lachlan Coin (Academic Karma) and Charlie Rapple (Kudos). I would have liked to have seen more entrepreneurs in attendance, as we are among those best placed to enact change in this industry. I do admit that I am biased on that front!
Thank you to the facilitators of our thread, James Mercer (Springer) and Carol Feltes (Rockefeller University), as well as conference organiser JoAnne Sparks. For more on what went on at the conference, see Charlie Rapple's great write-up of a different thread.