Talking Peer Review: Q&A with Michèle Nuijten of statcheck

What an exciting Peer Review Week so far!

On Monday we hit one million verified reviews on Publons and announced a new trial with the Royal Society of Chemistry. And on Tuesday, we had the privilege of honoring the top peer reviewers and editors of 2017 in our Publons Peer Review Awards! See the winners' table here. Also on Tuesday (or Wednesday, depending on where you live!), Publons' cofounder Andrew Preston discussed transparency in peer review with a panel of thought leaders from across academic publishing at the Peer Review Congress. You can watch that here.

And the exciting thing is... there's so much more to come!

On the 19th of September we will announce the winner of our Sentinel Award, which recognizes outstanding advocacy, innovation, or contribution to scholarly peer review.

In this series of Q&A posts leading up to the Sentinel Award announcement, we meet our eight finalists and get to know a bit more about them.

Up today, we have Michèle Nuijten, PhD student at Tilburg University, for the open-source project statcheck.

Here's what our judges had to say:

"By making this software freely available, Michèle increases the transparency in the system, while highlighting cases of problematic interpretations and low scientific rigor."

We asked Michèle a few questions about her work and what's coming up next:

Publons: Can you tell us a bit about your research and how it led to statcheck?

Michèle: My research focuses on improving psychological science, which includes topics such as replication, publication bias, and statistical errors. One of my larger projects was a study in which we wanted to estimate how often statistical results in psychology papers are reported inconsistently. Specifically, we wanted to see how often a reported p-value did not match the reported test statistic and degrees of freedom. You could answer this question by looking through the literature and recalculating statistics manually, but that would take a lot of time and be quite error-prone. That’s why Sacha Epskamp and I developed software to do this for us: the free R package “statcheck” (and web app at http://statcheck.io) that automatically extracts statistics from papers and recalculates p-values.
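
To give a feel for what statcheck automates, here is a minimal sketch in R of that core consistency check, assuming an APA-style two-tailed t-test result (the check_t helper is our hypothetical illustration, not statcheck's actual code; the real package handles more test types, including F, r, χ², and z):

```r
# Illustrative sketch of the core check statcheck automates (not the
# package's actual code): recompute a two-tailed p-value from a reported
# t statistic and its degrees of freedom, then compare it with the
# reported p-value, allowing for rounding.
check_t <- function(t, df, reported_p, digits = 2) {
  computed_p <- 2 * pt(abs(t), df, lower.tail = FALSE)   # two-tailed p from t
  consistent <- round(computed_p, digits) == reported_p  # rounding tolerance
  data.frame(t, df, reported_p,
             computed_p = round(computed_p, 4),
             consistent)
}

check_t(t = 2.20, df = 28, reported_p = 0.04)  # recomputed p ~ .036: consistent
check_t(t = 2.20, df = 28, reported_p = 0.01)  # mismatch: flagged for inspection
```

Roughly speaking, the statcheck package wraps a pattern-matching extractor for APA-formatted results around this kind of recomputation, so whole manuscripts can be scanned at once.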

How is statcheck helping to improve scholarly peer review?

In our research we found that roughly half of the papers in psychology contained at least one inconsistent statistical result. Mostly, these were rounding errors in the third decimal and therefore not very influential. However, we also found that about one in eight papers contained an inconsistency that could actually have changed the statistical conclusion.
These inconsistencies were all found in published papers that had already gone through peer review, which means peer review does not catch these problems. With statcheck we offer a tool that can quickly scan a manuscript and flag possible problems in the statistics.
At the moment, the journals Psychological Science and the Journal of Experimental Social Psychology use statcheck in their review process, and several other journals recommend the use of statcheck in their submission guidelines.
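
To make that distinction concrete, here is a small base-R illustration (with invented numbers) of a rounding inconsistency versus one that changes the statistical conclusion:

```r
# Two invented results, both reported as "p = .04", recomputed in base R:
2 * pt(2.02, df = 48, lower.tail = FALSE)  # ~0.049: a rounding slip; the
                                           # conclusion (p < .05) still holds
2 * pt(1.98, df = 48, lower.tail = FALSE)  # ~0.053: crosses the .05 threshold,
                                           # so the reported conclusion would flip
```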

Who or what inspired you to work towards this aim?

The feedback of the research community has played a major role in the development of statcheck and has been a great source of inspiration. There were many times when I thought: “checking p-values, what a typical nit-picking thing for a methodologist to do, who on earth could want this”. And then it turned out: many people want this. With statcheck they have an easy and free tool to quickly check their own work for mistakes in their stats before sending it to a journal.

What does transparency in peer review (the theme of this year's Peer Review Week) mean to you?

Transparency in science is crucial. Many people agree that transparency in research data and analyses is important, but people are still apprehensive about transparency in peer review. It is argued that young scholars are in a risky position if they reject a paper from a senior researcher in their field. Although this is an important point, I think it’s better to implement a new system with plenty of room for exceptions than not to implement anything at all.
In my own case, I try to be as open as possible in everything I do. I always sign my reviews, and I’m also very enthusiastic about new initiatives such as the system at the journal Collabra, where I’ll be starting as an editor in January 2018. Collabra not only pays its reviewers and editors, but also offers an option for open peer review, in which the entire peer review process is published.

What are your plans for the future?

In December I’ll finish my dissertation, and in January I’ll start as an assistant professor in the meta-research center at Tilburg University. I intend to continue my research on improving psychological science together with our growing group of “meta-scientists”. Besides several studies on publication bias and replication that I intend to do, I will definitely keep working on improving and extending statcheck. It would be great if using statcheck could become standard practice in peer review in psychology, and hopefully in other disciplines too!
