Here at Publons, our goal is to speed up science by creating a community around peer review and scientific discussion that gives credit where credit is due. Peer review, however, is just one piece of the giant, daunting puzzle that is the Academic Publishing Machine. In this post, we're going to start digging a bit deeper into academia's "publish or perish" culture and how the emphasis on publication in prestigious, high impact factor journals is actually slowing down the dissemination and advancement of scientific knowledge.
A quick disclaimer before I begin: as of now, I can only speak to my experience and observations within the biological and biomedical research community. As I delve more into the world of academic publishing, I'm interested to see how cultures and practices may differ across different disciplines, even within the sciences.
Peer review: the un-standard standard
Let's begin with a quick overview of the review process at most journals. Submitted articles are first screened by the journal's editorial staff, who then pass selected manuscripts to 2-3 experts in the field for formal peer review. Reviewers evaluate the paper and send the editor their comments along with a recommendation: accept, reject, or revise and resubmit. Authors then receive these comments and either move on to the next journal if rejected, or start working on experiments for the suggested revisions.
Pre-publication peer review is the main mechanism by which articles are accepted for publication - but as we know, different journals have very different standards and expectations. In other words, acceptance into a journal may be less dependent on who is reviewing your article, and more dependent on where you are trying to get it published. A researcher could submit an article to two separate journals, with the same peer reviewers, but get opposite outcomes depending on the journal.
Highly selective vs. basic criteria
On one end, we have highly selective journals with rejection rates of over 90%. These exclusive journals look not only at the scientific validity of a study, but also evaluate its (subjectively determined) impact and novelty. They also hold high standards for how "complete" a research project needs to be to qualify for publication: generally, all of the data needs to fit together and tell a "nice story", as researchers like to say. On the other end of the spectrum are journals, like the open-access publications PeerJ and PLoS ONE, that accept papers based on a few essential criteria, namely: are the findings new, and are they scientifically and technically sound? We'll call these 'basic criteria' journals. As stated on PLoS ONE's website: "Judgments about the importance of any particular paper are then made after publication by the readership (who are the most qualified to determine what is of interest to them)."
So far, this sounds okay, right? It makes sense that the most highly regarded journals should have appropriately high standards and only publish well-fleshed-out projects that advance the field.
The long, hard road to publication
However, we already see a potential for differences in the speed of publication (and thus the spread of knowledge) between the two types of journals. For a basic criteria journal, a lab could put out new papers far more often - essentially every time they have an interesting and significant finding that they would like to share. For a selective journal, however, it can take years (in my experience, often 4-5 years) to conduct the kind of research that will make the cut, and sometimes an additional 2-3 years just for revisions and resubmissions. All of that adds up to a significant delay in presenting the scientific community and general public with important research findings.
The problem lies in the inordinate emphasis placed on publication in these select few journals. I have witnessed this firsthand in my own lab and heard of numerous other situations illustrating this point. For the anecdote I present next, let me be clear: I am in no way generalizing instances like this to all universities or all researchers. I simply want to present this case as a scenario that does happen, and one that is representative of a mindset that is quite pervasive, perhaps especially at prestigious research institutions.
The impact factor obsession
To say that the researchers at the top, or who want to get to the top, care a lot about publishing in prestigious journals with high impact factors is…an understatement. I know many professors at my university who absolutely refuse to submit to journals below a certain 'tier' in their mind. For example, a fellow graduate student originally submitted a paper to a high impact journal last year. The paper is solid, and actually contains a data set that will be an incredibly useful resource for thousands of researchers. That resource will become publicly available on a website - but only once the paper is published. Unfortunately, the paper has yet to be accepted at the couple of top-notch journals that this student's professor deems worthy of a publication from their lab. I've talked about it with the student, and the paper would easily be accepted at a journal with a slightly lower impact factor - but their professor won't budge, and wants to do everything possible to get the paper accepted at the journal of choice.
The end result is that discoveries coming from the best labs may actually be taking longer to reach the public. Inevitably, this single-minded focus on selective journals slows down the rate of research dissemination and the subsequent research that would build off those findings. Let me be clear that I'm not against publishing in highly selective journals - I actually quite enjoy reading their articles and think they do a great job of selecting interesting, high impact studies. Certainly, stringent acceptance criteria and a round or two of revisions likely improve the quality of research articles. The key problem is publishing in these journals for the sake of publishing in these journals, even when that means sitting on valuable research for unnecessarily long periods of time.
Just one cog in the machine
Granted, this is an incredibly complex problem, and I am barely scratching the surface here. I haven't touched on issues like reviewer bias, competition and 'scooping', or additional problems like confirmation bias and irreproducibility (which I plan to address in future posts!).
This is a systemic problem that cannot be isolated to one player alone: academic researchers, universities, funding agencies, and publishers all have a role to play. Universities and funding agencies, on which academic researchers rely for jobs and research grants, respectively, have perpetuated a "publish or perish" culture by using publication record as the primary metric when measuring a scientist's caliber. Academic scientists also perpetuate this system by following the status quo (although, excitingly, scientists are increasingly willing to try new approaches, such as open-access journals) and likewise judging each other by the same criteria. Publishers currently don't have much incentive to change the system, which works in their favor.
Moving science forward
Sounds like a pretty extensive, almost insurmountable challenge, doesn't it? Here at Publons, we believe there is plenty of low-hanging fruit: incremental, but meaningful, improvements that can be made to make science more efficient. Along with our partners and other like-minded organizations, we think the Academic Publishing Machine is long overdue for evolution.
Publons' mission is to speed up science by working with the players above - peer reviewers/researchers, publishers, and research institutions - to make peer review a measurable research output that deserves recognition. Rather than the almost singular focus on publication record, we think a scientist should also be evaluated based on other contributions to scientific discourse. These contributions include participation in pre-publication peer review as well as scientific discussion and review of already published papers. Not only would this speed up the rate of dissemination, but it would also help editors in their search for motivated, fit-for-purpose, and available peer reviewers.
A researcher's Publons profile serves as an additional means of showcasing their contribution to the scientific community and building up their reputation. With measures like the Publons profile and other growing alternative metrics for scientific contribution, we can start to move away from the single-minded focus on high impact publications that is currently impeding scientific research.
Create your Publons profile and start getting credit for your peer review today!
This is Alicia's first post for the Publons blog. Alicia is a neuroscience graduate student in California. She's into health, tech, and science communication, and is super excited to write about peer review and academic publishing for the Publons blog. You can find her on Twitter @AliciaShiu.