Content of review 1, reviewed on June 27, 2016

This commentary, in its one-sided attempt to defend society journals (in particular, the one in which it is published), contains a number of serious errors in its description of open access publishing. The first occurs in the opening paragraph, where the author defines open access journals as those "in which authors whose articles are accepted for publication pay a fee to have them made freely available on the Internet." This is an oft-repeated misrepresentation that is easily corrected by checking the Directory of Open Access Journals (http://doaj.org), which has a filter allowing searches for journals with or without article processing charges (APCs). In short, a minority of OA journals use this business model, though a majority of OA articles are published through APCs. It is ironic that the author later warns against bad information spreading through the literature: this assertion has been corrected repeatedly, yet continues to be made.
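
For anyone who wants to check this against DOAJ's own data, here is a rough sketch in Python. It assumes a journal metadata CSV downloaded from http://doaj.org; the filename and the exact APC column header are assumptions, since DOAJ's export format has changed over time.

    # Tally how many DOAJ-listed journals report charging APCs and how many do not.
    # Assumes a journal metadata CSV downloaded from http://doaj.org; the filename
    # and the APC column header are assumptions, so the column is located by name.
    import csv
    from collections import Counter

    with open("doaj_journals.csv", newline="", encoding="utf-8") as f:
        reader = csv.DictReader(f)
        # Find the first column whose header mentions APCs (assumed naming).
        apc_column = next(h for h in reader.fieldnames if "APC" in h.upper())
        counts = Counter((row[apc_column] or "").strip() or "Unknown" for row in reader)

    total = sum(counts.values())
    for answer, n in counts.most_common():
        print(f"Charges APCs: {answer:<8} {n:6d} journals ({100 * n / total:.1f}%)")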

The author says that OA journals "threaten to pollute science with false findings, making knowledge unreliable," yet offers no examples. The website Retraction Watch (http://retractionwatch.com) might be one way to measure this. While data about retractions in open access journals do not seem to be available, many of the articles featured on Retraction Watch come from toll-access (paywalled) journals. Indeed, Science, which the author identifies as a "top" society journal, has one of the highest retraction rates (http://doi.org/10.1128/IAI.05661-11). Paywalled journals were responsible for publishing the now-infamous articles on arsenic life and on autism and vaccination. In addition, the author's comparison of Science to PLOS ONE is a strange one, since the journals share few characteristics. A better choice of "top" open access journal might have been PLOS Biology.

The author makes a second serious error by conflating the peer review model at PLOS ONE with that of all open access journals. Though some OA journals use a similar model of peer review, most use traditional forms of peer review. The author does not seem to understand or acknowledge the rationale behind the PLOS ONE model, or the problems in scholarly publishing it attempts to address. The author's lengthy explanation of why review within a society is preferable is unconvincing in the current environment. Paywalls limit the number of people who can view an article, and therefore limit the debate on its merits. And sites like PubPeer (http://pubpeer.com) have identified serious problems with articles that have "passed" peer review.

The author's third serious error is the idea that society journals and OA journals are mutually exclusive. In 2011, 530 societies were publishing 616 fully open access journals (http://legacy.earlham.edu/~peters/fos/newsletter/12-02-11.htm#societies); that number is likely higher today. The author's concern about declining society memberships (p. 4) seems to have led him to single out OA journals as the problem.

The author touts "inexpensive access" without mentioning that the journal's operations have been outsourced to the international publishing conglomerate Wiley, whose focus on profits results in ever-higher serials costs for libraries. Though he "chose not to" provide open access to this commentary, one factor may have been Wiley's absurdly high hybrid APC of $3,000. In mentioning the OA citation advantage, the author cites a single 15-year-old study, yet a bibliography of 70 studies is available (http://sparceurope.org/oaca/), the majority of which show a citation advantage. Finally, the author claims that grantors look favorably on society journal publication, without mentioning that many grantors are increasingly focused on open access to the research they fund.

The Journal of Wildlife Management experienced a failure in its review processes for this publication. Though it is a commentary, the saying still applies: everyone is entitled to their own opinions, but not their own facts. Even commentaries should be fact-based. By propagating false information, this commentary ironically undermines the journal it intends to support.

    © 2016 the Reviewer (CC BY 4.0).

Comments

Andrew R. H. Preston

4:46 p.m., 27 Jun 16 (UTC)

Disclaimer: I have only read the first page of the original article. No access; couldn't read (na;cr).

Thanks for your review! I want to add a few quick observations of my own:

First, I do think there is one very plausible route through which OA could be bad for society-driven science:

  • Societies have traditionally generated profits from their publishing activities
  • Those profits are reinvested in (i.e., subsidise) other society activities, which presumably have positive externalities for that field of research
  • If OA were to threaten society publishing profits, then societies would be unable to fund those other activities, potentially harming that field of research

Does this mean OA is bad for science? Not necessarily, not directly, and not obviously, but it's worth debating.

...the author defines open access journals "in which authors whose articles are accepted for publication pay a fee to have them made freely available on the Internet." This is an oft-repeated misrepresentation that can be easily corrected...

Is it fair to say that throughout the author uses "open access" when he means "gold open access"? I propose we proceed with this discussion under that definition.

Paywall journals were responsible for publishing now-infamous articles on arsenic life as well as on autism and vaccination.

I don't think this is a good argument one way or the other for the merits of different business models in publishing. The scientific method is self-correcting in the long run. A high-profile retraction from a journal implies that researchers were trying to publish groundbreaking research in that journal (a measure of its draw power or prestige) and that there was enough robust discussion around the original publication to find its flaws (a measure of discussion quality).

Therefore we should be looking for low-profile flawed research. The best example I can think of is the Bohannon sting, which (grain of salt) has many of its own methodological flaws. In that study it was clear that a number of low-impact gold open access journals did in fact publish flawed research. What we don't know is whether the rate of publication of flawed research is higher in those journals than in subscription or society journals, though it's not implausible to think so. In any case, the counter-examples you give here don't sway the argument in either direction, because we have not controlled for journal prestige.

In my opinion, the interesting thing about the gold OA model is that there is a strong marginal benefit to publishing another manuscript (fixed costs remain the same, and unit economics are generally pretty favourable). For subscription journals, there is no marginal revenue from publishing another manuscript, so the incentive to publish is somewhat less direct. Assuming that there is no difference in the publication rates of low-impact flawed research between gold OA and subscription journals, it almost seems like there must be external pressures that outweigh the marginal benefit of publishing. In other words, perhaps the threat of losing the Impact Factor or other forms of journal prestige (typically indexing) actually keeps OA research honest!
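
To make that incentive difference concrete, here is a back-of-the-envelope sketch; the dollar figures are purely illustrative assumptions, not data from any publisher.

    # Marginal revenue of accepting one more manuscript under two business models.
    # All numbers are illustrative assumptions, not real publisher figures.

    apc = 1500.0           # assumed gold-OA article processing charge (USD)
    marginal_cost = 300.0  # assumed per-article handling cost (USD)

    # Gold OA: each accepted article brings in an APC, so accepting one more
    # article adds revenue beyond its handling cost.
    gold_oa_margin = apc - marginal_cost

    # Subscription: subscription income does not change when one more article
    # is accepted, so marginal revenue is ~0 and the extra acceptance is a net cost.
    subscription_margin = 0.0 - marginal_cost

    print(f"Gold OA, profit from one extra article:      {gold_oa_margin:+.0f} USD")
    print(f"Subscription, profit from one extra article: {subscription_margin:+.0f} USD")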

Martin Gilje Jaatun

8:32 p.m., 6 Dec 16 (UTC)

The problem, of course, is that the Gold OA model makes all kinds of predatory publishers come out of the woodwork; there is money to be made by publishing bad science and utter nonsense if the authors are willing to pay APCs. This model would not work for traditional subscription-based journals, as very few libraries are willing to pay for nonsense publications. Unfortunately, it is sometimes very difficult for budding researchers to determine whether or not a journal is predatory; I'm not sure whether relying on Jeffrey Beall to do this for us will work indefinitely. One answer may be for national funding bodies to maintain a list of approved journals/publishers?

Marc Couture

9:12 p.m., 6 Dec 16 (UTC)

@Martin. But the author explicitly excludes "predatory" publishers: "The OA journals it is concerned with are the nonpredatory ones". So the problem you mention is not the one the author sees in OA journals. As for Beall's blacklist of publishers and journals, I would instead recommend the whitelist of journals indexed by DOAJ (http://doaj.org), which are now checked and evaluated according to much more stringent criteria.

Martin Gilje Jaatun

3:34 p.m., 7 Dec 16 (UTC)

@Marc I failed to find specific mention of (non)predatory publishers on the first page :-)

Thanks for the tip about DOAJ; it could be useful in the future.


References

    Romesburg, H. C. 2016. How publishing in open access journals threatens science and what we can do about it. The Journal of Wildlife Management.