
The Coming of Age of Open Access (I) or… Where are the alternative journals 18 years after the BOAI?

For most of us, February 14th is Valentine’s Day; for open access activists and lovers, it is also the celebration of the BOAI anniversary. It was 18 years ago: they were sixteen, meeting in Budapest in December 2001. Far from agreeing on everything, they nevertheless co-signed a landmark declaration published on February 14th, 2002. Eighteen years later, it is the coming of age for Open Access, a time to look at what has changed, been redefined, gained and missed. To start with, we have to remember that the BOAI actually defined open access as virtually unlimited re-use of academic documents:

By “open access” to this literature, we mean its free availability on the public internet, permitting any users to read, download, copy, distribute, print, search, or link to the full texts of these articles, crawl them for indexing, pass them as data to software, or use them for any other lawful purpose, without financial, legal, or technical barriers other than those inseparable from gaining access to the internet itself. The only constraint on reproduction and distribution, and the only role for copyright in this domain, should be to give authors control over the integrity of their work and the right to be properly acknowledged and cited” (BOAI, 14th February 2002).

They also put together what was largely separated before, the soon-to-be-named “green road” and “gold road”. Nevertheless, contrary to popular belief and to the supposed “original version” of the BOAI reproduced in many copies on the web, they were not calling them “self-archiving” and “open access journals”. Indeed, the version quoted above is not the true original, but one slightly changed in the summer of 2002. The former one, which can be seen on the Web Archive, stated: “Open access to peer-reviewed journal literature is the goal. Self-archiving (I.) and a new generation of open-access alternative journals (II.) are the ways to attain this goal.” So, following the BOAI, we will deal with this coming of age in two successive posts: this first one will focus on alternative journals, the second one on the triumph of organized archives over self-archiving.

A revolution with no defined business model

How are these alternative journals described? Why and how are they alternative, and to what? The main answer is given in a long paragraph of the BOAI: it is our actual starting point, from which the history of these journals shall be analyzed. To ease the reading, we have divided it into three parts.

Second, scholars need the means to launch a new generation of alternative journals committed to open access, and to help existing journals that elect to make the transition to open access.

The two ways for journals to commit were already being practiced at the beginning of the century. On the one hand, very early electronic journals, often without a publisher but with what we now call a platform, didn’t go the subscription way and were established as free for readers. On the other, the 2001 PLOS letter/petition pushed publishers to change their ways and open their content, with lots of signatories but few positive answers apart from BMC. So the BOAI reminded them that they could still “flip” to open access. But what does that mean exactly?

Because journal articles should be disseminated as widely as possible, these new journals will no longer invoke copyright to restrict access to and use of the material they publish. Instead they will use copyright and other tools to ensure permanent open access to all the articles they publish. Because price is a barrier to access, these new journals will not charge subscription or access fees, and will turn to other methods for covering their expenses.

The alternativeness doesn’t come from the way journals should be run (editorial boards, scope, peer review) but from their economic model. Journals are qualified as “alternative” because they should no longer rely on the ownership of content and on subscriptions as the main route to paying for journal expenses (and profit). More than that, they would have extra costs, as they have to maintain open access through time. With current and future revenues vanishing, on what should the new business model rest?

There are many alternative sources of funds for this purpose, including the foundations and governments that fund research, the universities and laboratories that employ researchers, endowments set up by discipline or institution, friends of the cause of open access, profits from the sale of add-ons to the basic texts, funds freed up by the demise or cancellation of journals charging traditional subscription or access fees, or even contributions from the researchers themselves. There is no need to favor one of these solutions over the others for all disciplines or nations, and no need to stop looking for other, creative alternatives.

This is probably the most important part of the whole BOAI declaration, besides the open access definition. Three main points should be retained. Firstly, the idea of a diversity of sources of income, in an optimistic vision of the means of financing a journal undoubtedly fuelled by the success of the open source software movement. Secondly, this diversity is reinforced by the final sentence, which rejects any “one best way” even once the exploration of possibilities has fully taken place. Thirdly, the idea of payment by authors is conceived as a last resort (“or even”). In other words, not only did the alternative economic model remain unclear and uncertain, but the author-pays proposal was not considered a priority at all.

The stories of 11 pioneers

But with such vagueness, who then ventured down the alternative path, and how did they run their journals once launched? On the website of George Soros, whose Open Society Institute funded the BOAI meeting, a very short list of 11 entities was then available, even if it was supposed to contain only examples1.

11 journals

So, a first way to fulfill the promise made by the title of this post is to investigate the trajectories of these 11 journals, or rather publication websites, so great is their diversity. We will treat them in groups, according to their destinies:

  1. Still free-of-charge mathematics & computer science journals: Algebraic & Geometric Topology and Geometry & Topology, respectively founded in 1997 and 2001, have always published open access articles and are still community-based journals, published by MSP, which puts a very strong anti-APC statement on its website. Documenta Mathematica is the first journal of the eLibM platform, founded in 1996, which acts as a repository for maths proceedings and journals, free of charge for readers and authors. JMLR was created in 2001 in an independence movement by 40 members of the editorial board of Machine Learning, then owned by Kluwer, and is a hard-to-believe-from-the-outside $10-per-article-cost kind of journal, thanks to huge volunteer work, LaTeX, open source software, no fancy website and outsourced micropublishing for paper versions with no financial exchange.
  2. Still owned by societies, but switched to APC: The New Journal of Physics, founded in 1998, is now published by IOP with an APC of 1630 €. It was part of some “offset deals” (Austria, UK) and is still one of the journals of the SCOAP3 agreement. The Journal of Insect Science, launched in 2001, was supported by the University of Arizona; things changed with the death of its editor-in-chief in 2014, and it is now owned by a society but published by Oxford with a 1176 € APC.
  3. Bought by Springer platforms: Living Reviews in Relativity was founded in 1998 by a Max Planck institute; it published only reviews, which were “living” in that authors could update them as new literature appeared. It was sold to Springer in 2015, which kept the same formula with, remarkably, no APC. The trajectory of BioMedCentral is probably well-known to readers; let us just recall that it was founded in 2000, cosigned the BOAI through Jan Velterop, its then director, was the first “big” publisher to bet on APC, and was finally sold by its owner, Vitek Tracz, to Springer in 2008.
  4. Popularizers of APC and inventors of the megajournal: PLOS didn’t really exist as a publishing venue at the time of the BOAI. Its call/letter for Open Access the year before had seen almost only BMC respond positively. But they were already able to secure funds, cosigned the BOAI through Michael Eisen, and soon launched PLOS Biology and then, in 2006, PLOS ONE, the first megajournal, which climbed to more than 30,000 articles a year, invented new forms of peer review and supported article-level metrics against journal-based metrics. It was also the launch of APC as a standard way to provide Open Access for large communities.
  5. The platform that used to promote open access among publishers: Highwire has never been a journal nor a platform-journal, but rather a hosting service which develops tools and software for publishers. Founded in 1995 and based at Stanford University, it used to be the largest archive of free full-text science on Earth, with more than 2.4 million articles. Bought by an equity fund in 2014 (a minority share is still owned by Stanford), its “free texts” webpage stopped its count on 25th March 2015 and was not maintained after 2018.
  6. Terminated by its learned society: Psycoloquy had been launched and supported by the American Psychological Association, with Stevan Harnad at its helm, who translated some of the features he had developed in his previous journal, BBS, notably open peer commentary, into electronic form. It stopped publishing new articles by 2002.

Other journals or platforms could have been given as examples in early 2002. One notably thinks of Scielo, which was already working very well in South America, while Erudit was growing up in Quebec, as was Revues.org in France. But the BOAI was rather focused on STM and English-language journals, and its alternative journals were located within a world already dominated by an oligopoly of big publishers, which was to be changed or at least challenged. Despite these limitations, the 11 stories nevertheless show the diversity of actual trajectories, the adoption of economic models that had yet to be defined and implemented, and the embrace of the alternative by some big publishers.

From Gold to Diamond:
when the alternative remains alternative

Above and beyond these examples, what trends can be drawn from these last 18 years? We have to consider a wide range of moves from public policies, learned societies, universities & libraries, research funders and finally, of course, publishers in order to give a second answer to the title of this post. The first evolution is the invention of a locution, soon after the BOAI: “open access journal”, which replaced the “alternative” one.

Then, as with 4 of the 11 listed, we observed a massive rise of the APC model, starting from the pioneers BMC and PLOS. The idea that authors would accept to pay to publish was not to be taken for granted, whether in principle or in practice, with questions about the accounting circuit, the actual source of funding (authors, labs, departments, universities…), the level of price, etc. And in some disciplines, being forced to pay still puts a low-quality stamp on the output. The Wellcome Trust in the UK and the ERC programs in the European Union played a huge role in experimenting with the possibility of paying APCs through grants, which made them a “normal cost”, especially in well-funded disciplines (biomedicine, physics…). Official UK public policy, after the Finch Report in 2012, also injected money to pay for APCs.

This not only fueled the growth of relatively new publishers – BMC and PLOS, but also MDPI, Frontiers and Hindawi – but pushed “traditional” big publishers to adopt APC and make their journals “hybrid”, with “basic funding” by subscription and “extra funding” through APC. Springer began its “Open Choice Program” in 2007, whose name deeply reflects the liberal-market vision of open access. These two evolutions led to very harsh critiques of the whole Gold OA project: on the one hand, it raised the question of the birth of predatory publishing through APC; on the other hand, hybrids meant double dipping and the deepening of the serials crisis.

Hybrid journals were conceived as transition tools to open access, as the then director of SPARC Europe theorized2. So these private and public policies of APC funding were conceived as a way to reach a tipping point, after which journals would become what was now renamed “full open access”. How naive, wrote Richard Poynder in a recent essay3! Some powerful actors came to the same conclusion, so they have recently tried to impose radical changes in the funding of journals: most notably the Max Planck Gesellschaft in 2015, then the now famous Coalition S, whose aim is to accelerate the transition to open access by in fact killing the subscription model, leaving authors with a CC-BY license on content. Does this sound familiar?

So it seems to have come full circle: almost twenty years later, we are trying to get rid of the traditional economic model for journals and, to do that, talking with big publishers in order to sign “transformative agreements”. Open access has gone mainstream; Elsevier now even presents open access as standard. If changes happened, they were more about the way journals are run, most notably open peer review4. But wait a minute: if the alternative has gone mainstream, where is the new alternative? In fact, the support and success of the APC model gave many commentators and actors the impression that the Gold road was now equivalent to an author-pays model. That led some activists to coin new names for “no-APC journals”. Whether Diamond or Platinum, the name meant that the journal was free for authors too, and not only for readers.

Scielo, Erudit and OpenEdition were already mentioned, as were 5 of our 11 pioneers. But we could add Open Library of Humanities or Redalyc as “big platforms” for journals5. They are the majority, as no-APC journals still represent more than 70% of the entries in the DOAJ, and their business models are diverse, from bricolage to strong institutional support, just as the BOAI predicted. So the alternative is still alternative, though it has vastly grown in the last 18 years. Coming into adulthood, we will see whether OA journals keep coexisting in two genres, non-APC and APC, or whether one of them is not sustainable in the long run. Unless, of course, the other open access road gets us into a post-journal world through preprint servers and open archives. To be continued…

  1. This list didn’t evolve much over the next two years: Highwire and PLOS were removed, while two MDPI journals were added.
  2. Prosser, David C. “From here to there: a proposed mechanism for transforming journals from closed to open access.” Learned Publishing 16.3 (2003): 163-166.
  3. Poynder, Richard. “Open access: Could defeat be snatched from the jaws of victory?” (2019).
  4. Which means lots of different things; see Ross-Hellauer, Tony. “What is open peer review? A systematic review.” F1000Research 6 (2017).
  5. Not to mention the ones for books, which are catalogued in the DOAB.

The perfect hacking of journal peer review or… The fastest way to become a Highly Cited Researcher

Since the beginning of the 21st century, the names of great fraudsters have spread beyond academic arenas, each one bringing his biography, his practices and the astonished tale of the discovery of his misdeeds. This turning of fraudsters into stars should neither hide the existence of famous cases in the past1, nor the multitude of ordinary misdemeanors and misconduct taking place daily in the academic world, which is hardly different from other professional circles in this respect. Nevertheless, they deserve a place in the Hall of Fame of academic fraudsters; so, before addressing the case of our new champion, Kuo-Chen Chou, let’s review a few exemplary figures of this Hall of Fame, in alphabetical order.

Yoshitaka Fujii (2012): enduring Japanese anesthesiologist who holds the world record for the number of retracted articles (183). He spent his career inventing data and, despite a statistical analysis published in 2000 showing how “too nice” his numbers were, he was not seriously troubled until 10 years later.2

Woo-Suk Hwang (2006): amazing Korean veterinarian and biologist, specialized in stem cells and producer of the first human clone, announced in a publication in Science. After he was accused of forcing his technicians to donate their eggs for his research, investigations revealed the total absence of human cloning. A national glory in South Korea and an international star, he suffered a public downfall so brutal that he made the cover of Time Magazine.

Jan-Hendrik Schön (2002): industrious German physicist, working at Bell Labs on the limits of matter and life, able to co-author in less than two years seven papers in Nature, eight in Science and six in Physical Review. All of course have since been retracted, and it seems that all his research, including his thesis, was based more on his desire to stick to the expectations of theory, or those of his colleagues, than on the empirical results he claimed to have achieved.3

Diederik Stapel (2011): extraordinary Dutch psychologist, whose social experiments always proved the hypotheses made… since they were never carried out, but fabricated on paper and computer. Denounced by a whistleblower from his team, and 58 retracted articles later, he was the object of a sensitive New York Times portrait. His own production became an object of the psychology of deception, as his colleagues found small differences in style between his genuine articles and the fake ones.

From transparent peer review
to citation manipulation

These eminent members of the Hall of Fame, all men – women are an extremely small minority among the elected members – have produced “false science” but have neither massively plagiarized nor attacked the peer review system. Rather, like good forgers, they provided journals with the expected raw material. However, over the last 10 years, there has been concern about how reviewers or publishers can indirectly influence the science produced, in a more subtle way. “Coerced citations”, “fake peer review” and “citation cartels” are all designations of practices that do not directly fudge the content of articles, but act on the margins by hacking into the journal peer review process.

So, the old criticisms about the misdeeds of anonymity in journal peer review4 were revived at great expense, and many debates about “transparent peer review” took place. Where in the past editors and authors were at the helm of these discussions, now the publishers are in charge, above all Elsevier. The company provided two in-house researchers with access to its back office, and they were able to compare the bibliographies of manuscripts with those of the published articles and check whether added references were coauthored by reviewers of the manuscript. Unsurprisingly, the authors of When Peer Reviewers Go Rogue concluded that citation manipulation exists, even if its level is quite low (0.79%).
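To make the comparison just described concrete, here is a minimal sketch of its logic: diff the submitted and published reference lists, then flag added references that share an author with one of the manuscript’s reviewers. All names and data structures here are hypothetical illustrations, not the actual pipeline used in the study.

```python
# Minimal sketch: flag references added during review that are
# coauthored by a reviewer of the manuscript (hypothetical data model).
from dataclasses import dataclass


@dataclass(frozen=True)
class Reference:
    title: str
    authors: tuple[str, ...]  # normalized author names


def flag_suspect_additions(submitted: set[Reference],
                           published: set[Reference],
                           reviewers: set[str]) -> list[Reference]:
    """Return references absent from the submitted version but present in
    the published one, whose author list overlaps with the reviewers."""
    added = published - submitted
    return [ref for ref in added if reviewers & set(ref.authors)]


# Toy usage: one reference added during review, coauthored by reviewer "r. smith".
submitted = {Reference("On widgets", ("a. jones",))}
published = submitted | {Reference("Widgets revisited", ("r. smith", "b. doe"))}
print(flag_suspect_additions(submitted, published, {"r. smith"}))
```

Run over a whole back office of manuscript/article pairs, the share of flagged additions would yield an aggregate manipulation rate of the kind reported above.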

At the same time, a research team led by the famous John Ioannidis sought to build a “clean” citation database for the most cited researchers. For them, this meant being able to separate self-citations from the rest, and it was on this occasion that they made a surprising discovery: the staggering level of self-citation of some colleagues. Indeed, as your citation count rises, you would expect more and more of it to come from distant colleagues. Not so for everybody:

Vaidyanathan, a computer scientist at the Vel Tech R&D Institute of Technology, a privately run institute, is an extreme example: he has received 94% of his citations from himself or his co-authors up to 2017 (…) He is not alone. The data set, which lists around 100,000 researchers, shows that at least 250 scientists have amassed more than 50% of their citations from themselves or their co-authors, while the median self-citation rate is 12.7%” (Nature, “Hundreds of extreme self-citing scientists revealed in new database”, 19th August 2019).
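The measure behind these figures is simple arithmetic: the share of a researcher’s incoming citations that come from the researcher or their co-authors. A minimal sketch follows, purely illustrative and with hypothetical names, not the Ioannidis team’s actual pipeline.

```python
# Minimal sketch of a self-citation rate: fraction of citing papers
# authored by the focal researcher or any of their co-authors.
def self_citation_rate(focal: str,
                       coauthors: set[str],
                       citing_author_lists: list[set[str]]) -> float:
    """Share of citing papers coming from the researcher's own circle."""
    if not citing_author_lists:
        return 0.0
    own_circle = {focal} | coauthors
    self_cites = sum(1 for authors in citing_author_lists
                     if authors & own_circle)
    return self_cites / len(citing_author_lists)


# Toy usage: 2 of 4 citing papers come from the (fictional) researcher's
# own circle -> 50%, far above the 12.7% median quoted above.
rate = self_citation_rate(
    "v. researcher",
    {"c. colleague"},
    [{"v. researcher"}, {"c. colleague", "x. other"},
     {"y. distant"}, {"z. distant"}],
)
print(f"{rate:.0%}")
```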

But then, what are the Highly Cited Researchers, whose numbers are one of the components of the Shanghai Ranking, actually doing? Are they renowned for their influential results, or are they rather adept manufacturers on the citation chain? Bibliometricians would say that a “high” self-citation rate is not necessarily a sign of fraud, but that a detailed inquiry would be needed. This is where we return to our newest member of the Hall of Fame, whose work is worthy of close consideration.

A perfect hacker,
always greedy for citations

It all starts with a mundane story: a reviewer asks for additional references in a manuscript. But the request itself is not so trivial: it consists of 35 references, the vast majority of which are co-signed by him, and he indicates that his recommendation to the editors, on whether or not to accept the manuscript, will depend heavily on their inclusion. It should also be specified that this request was made not for a single review, but for every manuscript that passed through his hands. In describing their decision to ban this unnamed reviewer, the editors did not indicate how long the practice had existed. Indeed, it is unusual, to say the least, to request the addition of so many references, and one might question their own responsibility in this matter if it lasted as long as their reference to the “most recent reviews” seems to imply. To which they reply:

One might ask how this reviewer got away with submitting multiple reviews containing coercive requests for citation before being banned. The shortest explanation is that excessive self-citation demands are generally not seen as an ethical problem until a pattern is established, and a decentralized peer-review system is not amenable to detecting patterns” (Wren, Jonathan D., Alfonso Valencia, and Janet Kelso. “Reviewer-coerced citation: case report, update on journal policy and suggestions for future prevention.” Bioinformatics (2019): 3217-3218).

And in fact they inquired with other journals, which suggested the same pattern of behaviour by this reviewer. A year later, in early 2020, the investigation led to an editorial in another journal, the Journal of Theoretical Biology (JTB), which reveals perhaps the most complete case of manipulated citation to date. Indeed, the hacker is no longer a simple reviewer there, but a “handling editor” for JTB, which enabled him to act at several stages of the manuscript, with a single objective: to accumulate citations.

  1. He took charge of many manuscripts from his research centre to ensure that they were well treated (conflict of interest).
  2. He chose reviewers requested by the authors, designated colleagues from his own centre (conflict of interest), or even reviewed manuscripts himself under a false name (ghost peer review).
  3. In many cases, when the reviews came back, he would ask for the title of the article to be changed so that it explicitly referred to his own algorithm, as well as for a discussion of his own work in the introduction and conclusion (coerced citations).
  4. As a result, he requested the addition of a very large number of references (up to more than 50) to the bibliography of the manuscript (coerced citations).
  5. Just before the acceptance of the manuscript, he had himself added as a co-author of the article (gift authorship).

We therefore observe two complementary types of behaviour. The first, hidden from the outside, consists in hacking the journal peer review flow, capturing the evaluation process to ensure that the articles most “favourable” to his citation count were actually published – sometimes with his coauthorship. The second, visible to the authors and perhaps to the editor-in-chief and publisher, aims at hacking the byline, content and references of the manuscript through imperative requests for inclusion. Thus, ordinary manuscripts became articles loaded with citations of the hacker’s work.

It can be noted that at this stage the name of the reviewer was not given by JTB, which led some to make educated guesses on Twitter. News articles, in Nature among others, soon followed and revealed his identity: Kuo-Chen Chou, a retired Chinese-American biophysicist. We then learned that he had been for years a member of the Highly Cited Researcher “club”5. So, as usual, this extraordinary case will be treated as “rare”, and counter-measures have been taken, such as an algorithm written by one of the Bioinformatics editors. But ordinary gaming will still happen, whether in so-called predatory journals or at “prestigious” publishers, with smarter colleagues less greedy for citations and not obsessed with the HCR club. Will you be one of them?6

  1. For example John Darsee; see Broad, William; Wade, Nicholas (1983), Betrayers of the Truth: Fraud and Deceit in the Halls of Science, London: Century Publishing, ISBN 0-7126-0243-7.
  2. For a quick view of this case, see Pontille, David, and Didier Torny. “Behind the scenes of scientific articles: defining categories of fraud and regulating cases.” (2012).
  3. He was the subject of a wonderful book, Plastic Fantastic, ISBN 978-0-230-22467-4.
  4. See David Pontille and Didier Torny, “The blind shall see! The question of anonymity in journal peer review.” Ada: A Journal of Gender, New Media, and Technology, No. 4. doi:10.7264/N3542KVW (2014).
  5. The Web of Science Group didn’t list him in 2019 as he had, like others, a high rate of self-citations but, as stated, “Although this list is updated and refreshed each year, a Highly Cited Researcher is always a Highly Cited Researcher—whether their name was included in 2013 or 2019.”
  6. I am aware that this post contains two self-references, but they won’t be counted in any database.

From sharing to versioning to citing to retracting or… How preprints became quasi-articles

The forms of communication in academic communities are very diverse: articles, seminars, books, colloquia, mailing lists, posters, letters, workshops, proceedings… The reasons why each one is chosen are multiple, and the formats live their own lives, acquiring new uses far beyond the initial intentions of their creators. As we will see, preprints, though they have a relatively short history, followed complex patterns to become something more than shared documents.

It is first necessary to agree on the designation of these entities: working papers, discussion papers, e-prints and preprints will be considered in this post as equivalents. All are written texts, produced by authors without any form of certification, and made available without any paywall at a perennial web address. Unlike Wikipedia, we don’t distinguish them on the basis of their future publication in a journal. We will also ignore the issue of their licensing, for two reasons: historically, preprints existed long before the release of CC licenses – and many of them continue to be unlicensed; pragmatically, because our focus here is on the use of preprints, not their re-use.

Prior to the electronicization of scholarly communication, some disciplines had already experimented with preprints, notably psychology and the biomedical sciences. This meant paper manuscripts circulated by mail, with quite high associated material costs (reproduction, stamps). Yet cost was not the primary reason why some of these practices ceased: in biomedicine, publishers were vigilant, and their editor-in-chief allies declared a ban on the publication of manuscripts that had already been circulated. On the other hand, physics, especially high-energy physics, pioneered these practices and continued to generate these mail flows before transferring them to the electronic world in the 1970s. Using the compactness of the TeX format, these preprints started to be distributed by email, and then Paul Ginsparg had the idea of building an automatic BBS, basically inventing ArXiv.

E-print servers as competitors of journals?

Until then, in all disciplines, the usage had essentially been the same: to facilitate the consideration and discussion of recent research and results by circumventing the obstacle of journal publication delays. Admittedly, a large number of conferences had adopted the practice of proceedings, thus allowing a reduction in this delay, but these remained very largely attached to the world of paper printing. Following the success of ArXiv, several e-print services were launched in the mid-1990s, and Stevan Harnad predicted their pre-eminence over journals as the central venue for distribution:

“the best people start putting stuff and readers start saying: ‘Why wait for the journal to come out? I have to teach this stuff, I have to know this stuff, I can get it to the archive’ and then the libraries come around and say ‘should we order this journal?’ and the scientist says ‘I don’t care, I no longer read in paper’.”

It seems obvious that this prediction did not come true, far from it, and the 2000s saw a world divided between a few disciplines massively practicing preprinting (physics, mathematics, computer science, economics) and the rest of the academic world superbly ignoring them. Nevertheless, their uses – both on the authors’ and the readers’ side – started to compete with those of journals. In a peer-review-like fashion, the “raw” circulation of a manuscript for discussion regularly produced new versions of a preprint. On ArXiv, more than a third of the preprints exist in 2 versions, and more than 10% in 3 versions; or even more, as Hirsch’s famous manuscript inventing the h-index had 5 versions, 4 of them before submission to PNAS. On the readers’ side, researchers soon started to cite not only published papers but also preprints – then often called e-prints – on a massive scale.

These new reading and referencing practices have led to a vast literature on the citation advantage of open access articles over those available only through subscriptions and paywalls1. Beyond this possible advantage – monetized by big publishers for their hybrid journals in a commercial version of open access – these practices shed light on the change in status from simple “manuscripts” to texts integrated into the published literature. To completely get them out of their grey-literature status, Paul Ginsparg proposed as early as 1996 to add overlaid information to preprint servers, which led on the one hand to the creation of overlay journals proper, and on the other to various recommendation devices for preprints, among other texts.

The accelerated life cycle of preprints

The “standardization” of preprints through citation or certification is not the only notable development. The recent disciplinary extension of preprint servers, in what is often described as a second wave2, is a significant development with consequences for their uses. Let us take the example of the life sciences, with the development of bioRxiv, a platform launched in 2013 which published 30,000 preprints in 2019 alone.

From the video put online at the time of the platform’s inauguration, we will retain two elements: speed and discussion. If high-energy physicists, because of the weight of their infrastructures, work organization and authorship practices, are used to living in a world with little publishing competition3, this is not the case for many computer scientists who already published on ArXiv, especially in the artificial intelligence branch. There, flag-planting to establish priority and (theoretically) gain scientific credit has been a common operation, the server’s timestamp acting as a certification of the order of arrival. If this speed is also important in the life sciences to avoid getting scooped, it should equally be considered against the slowness of journals: speed of publication has often been an argument for different outputs, and the tension between rapid dissemination and quality of certification is at the heart of the history of journal peer review4.

For life scientists, and especially early-career researchers on short-term contracts, speed is less a question of priority than of simply getting their results circulated in order to build some credit for their next position. Until preprints, no publications meant no credit. Now they have at least something, especially since some organizations have recognized preprints as legitimate outputs in CVs for grant applications. Of course, they still need publication in journals, which leads us to the role of discussions. As we have seen in the case of ArXiv, discussions often feed a release cycle in the form of new preprint versions. In the life sciences, this is apparently much less the case: a recent study by Kent Anderson5 shows that the majority of preprints were posted after they were submitted to a journal, so the “discussion”, rather than feedback from readers of the preprint, takes the form of peer review within a given journal.

From speed to emergency:
Preprints can be retracted too

At this point, we need to address the question of the audiences targeted by preprint servers: if they initially hosted purely intra-academic exchanges, things have changed with the popularity of social networks. Indeed, the Knowledge Exchange report cited above highlighted the crucial role played by Twitter in the dissemination of preprints by their authors or by the platforms themselves. This dissemination to fringe and non-academic audiences has several consequences, such as the reuse of preprints by marginalised communities or communities with minority knowledge and beliefs. This is also the case for links to blogs included in ArXiv trackbacks, for which it is very difficult to reach a consensus on the “serious” or “eccentric” character of a website6. If Anderson concluded that the promise of discussion was not kept within the platform in the case of bioRxiv, this doesn’t necessarily mean that discussion is limited to journal peer review, as an unexpected event has just shown us.

In fact, the 2019-nCoV coronavirus has been a test for bioRxiv, as the platform found itself at the forefront of scientific information. Since the 2003 SARS virus, the international health community, strongly pushed by the WHO, has seemed to favor data and information sharing over scientific credit or patents. In recent epidemics, even the paywalls of big publishers have been opened in order to maximise the sharing of existing knowledge. Now that bioRxiv is strongly established, it is the easiest legal way to combine sharing, speed and some credit coming from priority7. And indeed, the preprint server has been flooded with coronavirus papers.

This new disclaimer – which, for the current outbreak, spells out a general policy already stated at the top of each preprint – emphasizes a potential audience of preprints: the media. For a long time, the majority of senior life scientists have feared that uncertified preprints would be taken at face value and that a flow of “bad science” would reach lay audiences. Their strongest fears apparently came true, as an article suggesting the artificial nature of the current virus quickly fed conspiracy sites and feeds, “proving” that the epidemic could only be, at the very least, the result of a failed experiment. But the publicity around a preprint is more ambiguous: as links to it spread, it was severely criticized, in a very well-argued way, by colleagues. Moreover, bioRxiv is one of the few preprint servers with a comment feature attached to the preprints it hosts. And this paper received a lot of comments! So much so that the preprint was retracted less than 2 days after its publication – or, more exactly, the authors withdrew it following all these comments, whereas previously the retraction of a preprint was envisioned only in cases where its published heir had endured this exact fate.

Interpretations of this ultra-fast life cycle of course diverge: the creators of Retraction Watch see it as a victory for science in preprint mode, while K. Anderson and others consider that such an article would never have appeared in a top-level journal. But the outcome of this debate on journal versus preprint-server quality should not obscure the profound transformations of preprints. Harnad’s vision began to come true more than 20 years later, but in a twisted way. While preprint servers didn’t replace journals, preprints have become quasi-articles: they are used to claim priority, carry DOIs, generate some scientific credit, are read and cited, change through at least informal discussion processes, appear on CVs, are archived, and attract media interest. And even if by name they are pre-publications, they are now subject to the most stringent post-publication peer review decision.


  1. This literature is so vast and contradictory that Ben Wagner has made an annotated bibliography of it.
  2. See the very good synthesis funded by Knowledge Exchange: Chiarelli, Andrea, et al. “Accelerating scholarly communication: the transformative role of preprints.” (2019).
  3. In her groundbreaking 1988 book, Sharon Traweek stated that publications were not important for them, as they were only archives, record-keeping of the things that really matter.
  4. See Pontille, David, and Didier Torny. “From manuscript evaluation to article valuation: the changing technologies of journal peer review.” Human Studies 38.1 (2015): 57-79.
  5. “bioRxiv: Trends and analysis of five years of preprints.” Learned Publishing (2019).
  6. See Ritson, Sophie. “‘Crackpots’ and ‘active researchers’: The controversy over links between arXiv and the scientific blogosphere.” Social Studies of Science 46.4 (2016): 607-628.
  7. On the illegal side, activists have built a specialized archive based on Sci-Hub.