On the “Sting”

The latest buzz in the OA community seems to be the story of the so-called sting of OA journals, large numbers of which accepted a bogus paper with little to no peer review. The Chronicle article captures the story well. The journal Science, which published the “sting,” claims it exposes the “dark side of open access publishing.” I guess the dark side of subscription publishing has been well known for so long that it’s good other dark sides are being exposed. Critics have complained about the quality of the study/sting itself and about the fact that it targeted only open access journals, even though (shockingly!) subscription science journals can be just as susceptible to flawed peer review, including Science itself.

I’m still trying to figure out what all the hubbub’s about. Okay, so only open access journals were targeted (including several owned by Elsevier and other subscription science publishers). Okay, a whole bunch of the publishers on Beall’s List of Predatory Publishers turn out to be predatory publishers. All you have to do is start exploring some of those publishers to figure out they’re hardly reputable.

Putting aside the potential bias of the subscription journal Science trying to spin this as a sting showing that subscription journals are more trustworthy than open access journals, isn’t it beneficial to know which dubious OA journals are in fact little more than scams? Beall himself may have an anti-OA bias and may believe that the subscription Big Deals have been a big success for libraries (although I still don’t believe the numbers back him up on that), but that doesn’t mean he’s not doing the world a service by identifying suspicious publishers. Identifying suspicious OA publishers is good for the OA movement.

The only way this could be harmful to the OA movement in general is if someone claimed that this “sting” somehow proved that the OA process is inherently flawed. That would be a stupid and unsupportable claim based on the evidence at hand. In fact, despite the fact that every other Indian citizen seems to be creating a dubious OA journal, numerous OA journals didn’t fall victim to the bogus article. Is anyone making that claim?

What we can learn from this episode is that there are a lot of shady publishers trying to make money. We live in a world where Elsevier published fake medical journals for profit. Does it really come as a surprise that lots of enterprising people want to find a way to profit from a flawed system of scholarly communication? But just as the mission of science isn’t to support Elsevier’s bottom line, neither is it to support questionable OA publishers around the world. They should be outed and avoided. Maybe the bigger lesson is that wherever money is involved in scholarly communication, someone’s going to try to make a profit, whether it’s Elsevier or some desperate guy in India with access to the Internet.

2 thoughts on “On the “Sting””

  1. WHERE THE FAULT LIES

    To show that the bogus-standards effect is specific to Open Access (OA) journals would of course require submitting also to subscription journals (perhaps equated for age and impact factor) to see what happens.

    But it is likely that the outcome would still be a higher proportion of acceptances by the OA journals. The reason is simple: Fee-based OA publishing (fee-based “Gold OA”) is premature, as are plans by universities and research funders to pay its costs:

    Funds are short, and 80% of journals (including virtually all the top, “must-have” journals) are still subscription-based, thereby tying up the potential funds to pay for fee-based Gold OA. The asking price for Gold OA is still arbitrary and high. And there is very legitimate concern that paying to publish may inflate acceptance rates and lower quality standards (as the Science sting shows).

    What is needed now is for universities and funders to mandate OA self-archiving (of authors’ final peer-reviewed drafts, immediately upon acceptance for publication) in their institutional OA repositories, free for all online (“Green OA”).

    That will provide immediate OA. And if and when universal Green OA should go on to make subscriptions unsustainable (because users are satisfied with just the Green OA versions), that will in turn induce journals to cut costs (print edition, online edition), offload access-provision and archiving onto the global network of Green OA repositories, downsize to just providing the service of peer review alone, and convert to the Gold OA cost-recovery model. Meanwhile, the subscription cancellations will have released the funds to pay these residual service costs.

    The natural way to charge for the service of peer review then will be on a “no-fault basis,” with the author’s institution or funder paying for each round of refereeing, regardless of outcome (acceptance, revision/re-refereeing, or rejection). This will minimize cost while protecting against inflated acceptance rates and decline in quality standards.

    That post-Green, no-fault Gold will be Fair Gold. Today’s pre-Green (fee-based) Gold is Fool’s Gold.

    None of this applies to no-fee Gold.

    Obviously, as Peter Suber and others have correctly pointed out, none of this applies to the many Gold OA journals that are not fee-based (i.e., do not charge the author for publication, but continue to rely instead on subscriptions, subsidies, or voluntarism). Hence it is not fair to tar all Gold OA with that brush. Nor is it fair to assume — without testing it — that non-OA journals would have come out unscathed, if they had been included in the sting.

    But the basic outcome is probably still solid: Fee-based Gold OA has provided an irresistible opportunity to create junk journals and dupe authors into feeding their publish-or-perish needs via pay-to-publish under the guise of fulfilling the growing clamour for OA:

    Publishing in a reputable, established journal and self-archiving the refereed draft would have accomplished the very same purpose, while continuing to meet the peer-review quality standards for which the journal has a track record — and without paying an extra penny.

    But the most important message is that OA is not identical with Gold OA (fee-based or not), and hence conclusions about peer-review standards of fee-based Gold OA journals are not conclusions about the peer-review standards of OA — which, with Green OA, are identical to those of non-OA.

    For some peer-review stings of non-OA journals, see below:

    Peters, D. P., & Ceci, S. J. (1982). Peer-review practices of psychological journals: The fate of published articles, submitted again. Behavioral and Brain Sciences, 5(2), 187-195.

    Harnad, S. R. (Ed.). (1982). Peer commentary on peer review: A case study in scientific quality control (Vol. 5, No. 2). Cambridge University Press.

    Harnad, S. (1998/2000/2004). The invisible hand of peer review. Nature [online] (5 Nov. 1998); Exploit Interactive, 5 (2000); and in Shatz, B. (Ed.). (2004). Peer Review: A Critical Inquiry. Rowman & Littlefield, pp. 235-242.

    Harnad, S. (2010). No-fault peer review charges: The price of selectivity need not be access denied or delayed. D-Lib Magazine, 16(7/8).

  2. Pingback: Anti-OA and the Rhetoric of Reaction | Academic Librarian

Comments are closed.