From sharing to versioning to citing to retracting or… How preprints became quasi-articles

The forms of communication in academic communities are very diverse: articles, seminars, books, colloquia, mailing lists, posters, letters, workshops, proceedings… The reasons for choosing one over another are multiple, and each format lives its own life, acquiring new uses far beyond the initial intentions of its creators. As we will see, preprints, despite their relatively short history, have followed complex patterns and become something more than shared documents.

It is first necessary to agree on the designation of these entities: working papers, discussion papers, e-prints and preprints will be considered equivalent in this post. All are written texts, produced by authors without any form of certification, and made available without any paywall at a perennial web address. Unlike Wikipedia, we do not distinguish them on the basis of their future publication in a journal. We will also ignore the issue of their licensing, for two reasons: historically, because preprints existed long before the release of CC licenses – and many of them remain unlicensed; pragmatically, because our focus here is on the use of preprints, not their re-use.

Prior to the digitization of scholarly communication, some disciplines had already experimented with preprints, notably psychology and the biomedical sciences. Paper manuscripts then circulated by mail, with rather high material costs (reproduction, postage). Yet cost was not the primary reason some of these practices ceased: in biomedicine, publishers were vigilant, and their allied editors-in-chief declared a ban on publishing manuscripts that had already been circulated. Physics, especially high-energy physics, on the other hand, pioneered these practices and kept these mail flows going before transferring them to the electronic world in the 1970s. Taking advantage of the compactness of the TeX format, these preprints began to be distributed by email, and then Paul Ginsparg had the idea of building an automated BBS, essentially inventing ArXiv.

E-print servers as competitors of journals?

Until then, in all disciplines, usage had essentially been the same: to facilitate the consideration and discussion of recent research and results by circumventing the delays of journal publication. Admittedly, a large number of conferences had adopted the practice of proceedings, which reduced this delay, but they remained at that time very largely attached to the world of paper printing. Following the success of ArXiv, several e-print services were launched in the mid-1990s, and Stevan Harnad predicted their pre-eminence over journals as the central venue for distribution:

“the best people start putting stuff and readers start saying: ‘Why wait for the journal to come out? I have to teach this stuff, I have to know this stuff, I can get it to the archive’, and then the libraries come around and say ‘should we order this journal?’ and the scientist says ‘I don’t care, I no longer read in paper’.”

It is obvious that this prediction did not come true, far from it: the 2000s saw a world divided between a few disciplines massively practicing preprinting (physics, mathematics, computer science, economics) and the rest of the academic world loftily ignoring it. Nevertheless, the uses of preprints – on both the authors’ and the readers’ side – started to compete with those of journals. In a peer-review-like fashion, the “raw” circulation of a manuscript for discussion regularly produced new versions of a preprint. On ArXiv, more than a third of preprints exist in 2 versions and more than 10% in 3 versions, or even more: Hirsch’s famous manuscript inventing the h-index had 5 versions, 4 of them before submission to PNAS. On the readers’ side, researchers soon started to cite not only published papers but also preprints – then often called e-prints – on a massive scale.

These new reading and referencing practices have fueled a vast literature on the citation advantage of open access articles over those available only through subscriptions and their paywalls1. Beyond this possible advantage – monetised by big publishers for their hybrid journals in a commercial version of open access – these practices shed light on a change in status, from simple “manuscripts” to texts integrated into the published literature. To pull them completely out of their grey-literature status, Paul Ginsparg proposed as early as 1996 to add overlaid information to preprint servers, which led on the one hand to the creation of overlay journals proper, and on the other hand to various recommendation devices for preprints, among other texts.

The accelerated life cycle of preprints

The “standardization” of preprints through citation or certification is not the only notable development. The recent disciplinary extension of preprint servers, in what is often described as a second wave2, is a significant development and has consequences for their uses. Let us take the example of the life sciences, with the development of bioRxiv, a platform launched in 2013 that published 30,000 preprints in the year 2019.

From this video, put online at the time of the platform’s inauguration, we will retain two elements: speed and discussion. If high-energy physicists – because of the weight of their infrastructures, work organization and authorship practices – are used to living in a world with little publishing competition3, this is not the case for many computer scientists who already publish on ArXiv, especially in artificial intelligence. Flag-planting to establish priority and (theoretically) gain scientific credit has thus been a common operation on ArXiv, the server’s timestamps certifying the order of arrival. If this speed also matters in the life sciences to avoid getting scooped, it should equally be considered against the slowness of journals: speed of publication has often been an argument for different outputs, and the tension between rapid dissemination and quality of certification is at the heart of the history of journal peer review4.

For life scientists, and especially early career researchers on short-term contracts, speed is less a question of priority than of simply getting their results disseminated in order to build some credit for their next position. Until preprints, no publications meant no credit. Now they have at least something, especially since some organizations have recognized preprints as legitimate outputs on CVs in grant applications. Of course, they still need publication in journals, which leads us to the role of discussions. As we have seen, in the case of ArXiv, discussions often feed a release cycle in the form of new preprint versions. In the life sciences, this is apparently much less the case: a recent study by Kent Anderson5 shows that the majority of preprints were posted after they were submitted to a journal, so the “discussion”, rather than feedback from readers of the preprint, takes the form of peer review within a given journal.

From speed to emergency: preprints can be retracted too

At this point, we need to address the question of the audiences targeted by preprint servers: if they initially served purely intra-academic exchanges, things have changed with the popularity of social networks. Indeed, the Knowledge Exchange report cited above highlighted the crucial role played by Twitter in the dissemination of preprints by their authors or by the platforms themselves. This dissemination to fringe and non-academic audiences has several consequences, such as the reuse of preprints by marginalised communities or communities with minority knowledge and beliefs. This is also the case for the blog links included in ArXiv trackbacks, for which it is very difficult to reach a consensus on the “serious” or “eccentric” character of a website6. If Anderson concluded that the promise of a discussion was not kept within the platform in the case of bioRxiv, it doesn’t necessarily mean that discussion is limited to journal peer review, as an unexpected event has just shown us.

In fact, the 2019-nCoV coronavirus has been a test for bioRxiv, as it moved to the forefront of scientific information. Since the 2003 SARS virus, the international health community, strongly pushed by the WHO, has seemed to favor data and information sharing over scientific credit or patents. In recent epidemics, even the paywalls of big publishers have been opened in order to maximise the sharing of existing knowledge. Now that bioRxiv is firmly established, it is the easiest legal way to combine sharing, speed and some credit from priority7. And indeed, the preprint server has been flooded with coronavirus papers.

This new disclaimer – which, for the current outbreak, makes specific a general policy stated at the top of each preprint – points to a potential audience of preprints: the media. For a long time, the majority of senior life scientists have feared that uncertified preprints would be taken at face value and that a flow of “bad science” would reach lay audiences. Their strongest fears apparently came true, as an article suggesting the artificial nature of the current virus quickly fed conspiracy sites and feeds, “proving” that the epidemic could only be, at the very least, the result of a failed experiment. But the publicity given to the preprint is more ambiguous: as links to it spread, it was severely criticized, in a very well-argued way, by colleagues. Moreover, bioRxiv is one of the few preprint servers with a comment feature attached to the preprints it hosts. And this paper received a lot of comments! So much so that the preprint was retracted less than 2 days after its publication – or, more exactly, the authors withdrew it following all these comments, whereas previously the retraction of a preprint had been envisioned only if its published heir had already suffered that exact fate.

Interpretations of this ultra-fast life cycle of course diverge: the creators of Retraction Watch see it as a victory for science in preprint mode, while K. Anderson and others consider that such an article would never have appeared in a top-level journal. But the outcome of this debate on the quality of journals vs. preprint servers should not obscure the profound transformations of preprints. Harnad’s vision began to become reality more than 20 years later, but in a twisted way. While preprint servers did not replace journals, preprints have become quasi-articles: they are used to claim priority, carry a DOI, generate some scientific credit, are read and cited, change through at least informal discussion processes, appear on CVs, are archived, and attract media interest. And now, even if by name they are pre-publications, they are subjected to the most stringent post-publication peer review decision.




Cite this blog post
Didier Torny (2020, February 5). From sharing to versioning to citing to retracting or… How preprints became quasi-articles. The political economy of academic publications. https://doi.org/10.58079/sy35

  1. This literature is so vast and contradictory that Ben Wagner has made an annotated bibliography of it.
  2. See the very good synthesis funded by Knowledge Exchange: Chiarelli, Andrea, et al. “Accelerating scholarly communication: The transformative role of preprints” (2019).
  3. In her groundbreaking 1988 book, Sharon Traweek stated that publications were not important for them, as they were only archives, record-keeping of the things that really matter.
  4. See Pontille, David, and Didier Torny. “From manuscript evaluation to article valuation: The changing technologies of journal peer review.” Human Studies 38.1 (2015): 57-79.
  5. Anderson, Kent. “bioRxiv: Trends and analysis of five years of preprints.” Learned Publishing (2019).
  6. See Ritson, Sophie. “‘Crackpots’ and ‘active researchers’: The controversy over links between arXiv and the scientific blogosphere.” Social Studies of Science 46.4 (2016): 607-628.
  7. On the illegal side, activists have built a specialized archive based on Sci-Hub.
