From sharing to versioning to citing to retracting or… How preprints became quasi-articles

The forms of communication in academic communities are very diverse: articles, seminars, books, colloquia, mailing lists, posters, letters, workshops, proceedings… The reasons why one or another is chosen are multiple, and the formats take on lives of their own through new uses, far beyond their creators' initial intentions. As we will see, preprints, though they have a relatively short history, have followed complex patterns to become something more than shared documents.

It is first necessary to agree on the designation of these entities: working papers, discussion papers, e-prints and preprints will be treated in this post as equivalents. All are written texts, produced by authors without any form of certification, and made available, without any paywall, at a perennial web address. Unlike Wikipedia, we do not distinguish them on the basis of their future publication in a journal. We will also ignore the issue of their licensing, for two reasons: historically, because preprints existed long before the release of CC licenses – and many of them remain unlicensed; pragmatically, because our focus here is on the use of preprints, not their re-use.

Prior to the move to electronic scholarly communication, some disciplines had already experimented with preprints, notably psychology and the biomedical sciences. This meant paper manuscripts circulating by mail, with quite high associated material costs (reproduction, stamps). Yet cost was not the primary reason some of these practices ceased: in biomedicine, publishers were vigilant and their allied editors-in-chief declared a ban on the publication of manuscripts that had already circulated. Physics, on the other hand, especially high-energy physics, pioneered these practices and continued to generate these mail flows before transferring them to the electronic world in the 1970s. Taking advantage of the compactness of the TeX format, these preprints began to be distributed by email, and then Paul Ginsparg had the idea of building an automatic BBS, basically inventing arXiv.

E-print servers as competitors of journals?

Until then, in all disciplines, the usage had essentially been the same: to facilitate the consideration and discussion of recent research and results by circumventing the obstacle of journal publication delays. Admittedly, a large number of conferences had adopted the practice of proceedings, thereby reducing this delay, but they remained very largely attached to the world of paper printing. Following the success of arXiv, several e-print services were launched in the mid-1990s, and Stevan Harnad predicted their pre-eminence over journals as the central venue for distribution:

“the best people start putting stuff and readers start saying: ‘Why wait for the journal to come out? I have to teach this stuff, I have to know this stuff, I can get it to the archive’ and then the libraries come around and say ‘should we order this journal?’ and the scientist says ‘I don’t care, I no longer read in paper’.”

It seems obvious that this prediction did not come true, far from it: the 2000s saw a world divided between a few disciplines massively practicing preprinting (physics, mathematics, computer science, economics) and the rest of the academic world loftily ignoring it. Nevertheless, the uses of preprints – on both the authors’ and the readers’ side – started to compete with those of journals. In a peer-review-like fashion, the “raw” circulation of a manuscript for discussion regularly produced new versions of a preprint. On arXiv, more than a third of preprints exist in 2 versions and more than 10% in 3 versions – or even more: Hirsch’s famous manuscript inventing the h-index went through 5 versions, 4 of them before submission to PNAS. On the readers’ side, researchers soon started to cite not only published papers but also preprints – then often called e-prints – on a massive scale.

These new reading and referencing practices have led to a vast literature on the citation advantage of open access articles over those available only through subscription and its paywalls1. Beyond this possible advantage – monetised by big publishers for their hybrid journals in a commercial version of open access – these practices shed light on the change in status from simple “manuscripts” to texts integrated into the published literature. To move them completely out of their grey literature status, Paul Ginsparg proposed as early as 1996 to add overlaid information to preprint servers, which led on the one hand to the creation of overlay journals proper, and on the other hand to various recommendation devices for preprints, among other texts.

The accelerated life cycle of preprints

The “standardization” of preprints through citation or certification is not the only notable development. The recent disciplinary extension of preprint servers, in what is often described as a second wave2, is significant and has consequences for their uses. Let us take the example of the life sciences, with the development of bioRxiv, a platform launched in 2013 that published 30,000 preprints in 2019 alone.

From this video, put online at the time of the platform’s inauguration, we will retain two elements: speed and discussion. If high-energy physicists – because of the weight of their infrastructures, work organization and authorship practices – are used to living in a world with little publication competition3, this is not the case for many computer scientists who already publish on arXiv, especially in the artificial intelligence branch. Also, flag-planting to establish priority and (theoretically) gain scientific credit has been a common operation on arXiv, the server’s timestamp providing a certification of the order of arrival. If this speed is also important in the life sciences to avoid getting scooped, it should equally be considered in contrast with the slowness of journals: speed of publication has often been an argument for different outputs, and the tension between rapid dissemination and quality of certification is at the heart of the history of journal peer review4.

For life scientists, and especially early career researchers on short-term contracts, speed is less a question of priority than of simply getting their results circulated in order to build some credit for their next position. Before preprints, no publication meant no credit. Now they have at least something, especially since some organizations have recognized preprints as legitimate outputs in CVs for grant applications. Of course, they still need publication in journals, which leads us to the role of discussions. As we have seen, in the case of arXiv, discussions often feed a release cycle in the form of new preprint versions. In the life sciences, this is apparently much less the case: a recent study by Kent Anderson5 shows that the majority of preprints were posted after they had been submitted to a journal, so the “discussion”, rather than feedback from the preprint’s readers, takes the form of peer review within a given journal.

From speed to emergency:
Preprints can be retracted too

At this point, we need to address the question of the targeted audiences of preprint servers: if they initially served purely intra-academic exchanges, things have changed with the popularity of social networks. Indeed, the Knowledge Exchange report cited above highlighted the crucial role played by Twitter in the dissemination of preprints by their authors or by the platforms themselves. This dissemination to fringe and non-academic audiences has several consequences, such as the reuse of preprints by marginalised communities or communities with minority knowledge and beliefs. This is also the case for the links to blogs included in arXiv trackbacks, for which it is very difficult to reach a consensus on the “serious” or “eccentric” character of a website6. If Anderson concluded that the promise of a discussion was not kept within the platform in the case of bioRxiv, it doesn’t necessarily mean that discussion is limited to journal peer review, as an unexpected event has just shown us.

In fact, the 2019-nCoV coronavirus has been a test for bioRxiv as it moved to the forefront of scientific information. Since the 2003 SARS virus, the international health community, strongly pushed by the WHO, has seemed to favor data and information sharing over scientific credit or patents. In recent epidemics, even the paywalls of big publishers have been opened in order to maximise the sharing of existing knowledge. Now that bioRxiv is firmly established, it is the easiest legal way to combine sharing, speed and some credit derived from priority7. And indeed, the preprint server has been flooded with coronavirus papers.

This new disclaimer – which, in the current case, specifies a general policy stated at the top of each preprint – emphasizes a potential audience of preprints: the media. For a long time, the majority of senior life scientists have feared that uncertified preprints would be taken at face value and that a flow of “bad science” would reach lay audiences. And their strongest fears apparently came true, as an article suggesting the artificial nature of the current virus quickly fed conspiracy sites and feeds, “proving” that the epidemic could only be, at the very least, the result of a failed experiment. But the publicity around a preprint is more ambiguous: as links to it spread, it was severely criticized, in a very well-argued way, by colleagues. Moreover, bioRxiv is one of the few preprint servers that has included a comment feature attached to the preprints it hosts. And this paper received a lot of comments! So much so that the preprint was retracted less than 2 days after its publication – or, more exactly, the authors withdrew it following all these comments, whereas previously the retraction of a preprint had been envisioned only in cases where its published heir had already suffered that exact fate.

The interpretation of this ultra-fast life cycle is of course contested: the creators of Retraction Watch see it as a victory for science in preprint mode, while K. Anderson and others consider that such an article would never have appeared in a top-level journal. But the outcome of this debate on the quality of journals vs. preprint servers should not obscure the profound transformation of preprints. Harnad’s vision began to come into reality more than 20 years later, but in a twisted way. While preprint servers did not replace journals, preprints have become quasi-articles: they are used to establish priority, carry a DOI, generate some scientific credit, are read and cited, evolve through at least informal discussion processes, appear on CVs, are archived, and attract media interest. And now, even if they are pre-publications by name, they are subjected to the most stringent post-publication peer review decision.


  1. This literature is so vast and contradictory that Ben Wagner has made an annotated bibliography of it
  2. See the very good synthesis funded by Knowledge Exchange: Chiarelli, Andrea, et al. “Accelerating scholarly communication: The transformative role of preprints” (2019)
  3. In her groundbreaking 1988 book, Sharon Traweek stated that publications were not important for them, as they were only archives, record-keeping of the things that really matter
  4. See Pontille, David, and Didier Torny. “From manuscript evaluation to article valuation: the changing technologies of journal peer review.” Human Studies 38.1 (2015): 57-79
  5. “bioRxiv: Trends and analysis of five years of preprints.” Learned Publishing (2019)
  6. See Ritson, Sophie. “‘Crackpots’ and ‘active researchers’: The controversy over links between arXiv and the scientific blogosphere.” Social Studies of Science 46.4 (2016): 607-628
  7. On the illegal side, activists have built a specialized archive based on Sci-Hub

The short history of the h-index or… being one click away from determining whether you are a successful scientist

Sometimes, genuine newness happens in the academic world. Take bibliometric indicators: for at least a century they have usually been proposed and discussed by specialized scientists from a scientometrics background in their field journals (currently JASIST, Scientometrics, …), and nobody else cared, at least for some time. But in 2005, something very peculiar happened: the h-index was coined by a total stranger to that field and was an instant success, one that endures to this day. How did that happen, and why such a success?1

Physicist J.E. Hirsch proposed the h-index in a working paper posted on August 3rd, 2005 on arXiv, the famous open archive developed by physicists at Los Alamos. In his manuscript, he discussed the issue of comprehensively evaluating a researcher and wished, in a radical way, to subsume a whole career into a simple and practical measurement: a single integer. To do so, he considered that both production and its uses had to be taken into account through a citation measurement. The number h is thus the largest number such that h of an author’s articles have each received at least h citations. For example, an author with 5 articles cited at least 5 times each has h = 5; likewise, 50 articles cited at least 50 times each give h = 50.
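To make the definition concrete, here is a minimal sketch in Python (our illustration, not Hirsch’s own code; the function name and sample citation counts are invented) of how h can be computed from a list of per-paper citation counts:

```python
# Minimal sketch: h is the largest number such that h papers have at least h citations each.

def h_index(citations: list[int]) -> int:
    """Return the h-index for a list of per-paper citation counts."""
    ranked = sorted(citations, reverse=True)   # most-cited papers first
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:                      # the paper at this rank still has >= rank citations
            h = rank
        else:
            break
    return h

# The examples from the text:
print(h_index([5, 5, 5, 5, 5]))    # -> 5  (5 papers cited at least 5 times each)
print(h_index([50] * 50))          # -> 50 (50 papers cited at least 50 times each)
print(h_index([10, 8, 5, 4, 3]))   # -> 4  (four papers with at least 4 citations each)
```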

Yet the use of algorithms concerning authors was not new. Eugene Garfield, the founder of the Institute for Scientific Information (ISI), claimed to regularly predict Nobel prizes using the Science Citation Index and had developed “ISI Highly Cited”, presenting results for a tiny fraction of “top” researchers; similarly, a few disciplines like economics and management had a long tradition of ranking “top authors”. But, to our knowledge, no algorithm had to that date been specifically designed to evaluate authors. This novelty also stems from the lack of consideration Hirsch displayed for the existing literature: there were only four references in his manuscript, and only one from the field.

This departure from the scientometric tradition enabled Hirsch to make several shifts. Firstly, he did not take into account the journals in which articles are published, probably because for high-energy physicists journals serve to archive knowledge more than to make discoveries public. Secondly, he sidestepped the pitfalls of the number and order of coauthors, which are of little relevance in physics but crucial in biomedical research for individual evaluation. Thirdly, his index combined two elements considered heterogeneous in the scientometric tradition: production on the one hand, and use on the other. Fourthly and finally, whereas scientometricians are always very cautious about individual analysis and reserve it for “outliers”, Hirsch proposed a measurement that applies to all researchers and, as a cherry on the cake, argued that it would be of some use for the allocation of research funds.

How did he make such a bold move? Based on numbers crunched in the case of high-energy physicists, Hirsch forged a model of the “successful scientist”. As the result of his algorithm depends very much on the duration of a researcher’s career, Hirsch considered h divided by the number of years of that career to be a good indicator.


“An h index of 20 after 20 years of scientific activity, characterizes a successful scientist. […] an h index of 40 after 20 years of scientific activity, characterizes outstanding scientists, likely to be found only at the top universities or major research laboratories. […] an h index of 60 after 20 years, or 90 after 30 years, characterizes truly unique individuals”.
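Combining the h-divided-by-career-length indicator mentioned above with these thresholds, the implied yardstick can be written as a simple ratio (our reformulation of the quote; the symbol m is ours here):

```latex
% Career-normalized indicator implied by the quote: h divided by years of scientific activity.
\[
  m = \frac{h}{\text{years of scientific activity}}
\]
% Hirsch's examples then line up as:
%   20/20 = 1           -> "successful scientist"
%   40/20 = 2           -> "outstanding scientist"
%   60/20 = 90/30 = 3   -> "truly unique individual"
```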


At that point, Hirsch would probably have been considered a bibliometrics crackpot: a talented physicist who happened to crunch citation numbers in his spare time and post his “personal views” on a website.

An instant success,
an impressive series of implementations

To the surprise (and probably horror) of the scientometrics community, the h-index was taken up at a staggering rate. As soon as his manuscript became available on arXiv, Hirsch received extensive feedback from his physicist colleagues and, in view of the shared enthusiasm, an open archive specialized in high-energy physics, SPIRES (Stanford Physics Information REtrieval System), implemented the algorithm on its dataset only two weeks later. The same day, August 17th, 2005, Nature presented Hirsch’s proposition and highlighted his colleagues’ enthusiasm, while an editorial entitled “Rating Games” discussed the respective roles of metrics and peer review. Shortly afterwards, in November 2005, two of Hirsch’s colleagues had the manuscript published as an article2 in the Proceedings of the National Academy of Sciences, thus confirming physicists’ keen interest in this new measurement.

The popularization of the h-index took a new turn a year later with a bibliometric tool developed by Anne-Wil Harzing: Publish or Perish (PoP). In October 2006, this management professor at the University of Melbourne put online a small piece of software operationalizing the h-index calculation. For any author name entered by the user, irrespective of discipline, PoP calculates their h-index in a single click, based on the nascent Google Scholar dataset. The tool could be downloaded for free, which allowed the magic algorithm to reach users far beyond audiences specialized in scientometrics. Researchers and institutions adopted it so fast that the British Medical Journal published a spoof article describing the different pathologies it generates. Meanwhile, Elsevier included the h-index in Scopus in May 2007, and Thomson Reuters likewise revamped its “ISI Highly Cited” and integrated the index into the Web of Science in 2008. Thus, in just three years, individual measurement became a practical operation drawing on bibliometric tools easily accessible to every researcher.

This cycle of implementation was provisionally completed by the opening of Google Scholar Citations (GSC) in the summer of 2011. With this new service, every academic could create and control her/his profile page on Google Scholar and decide whether or not to make it public. Whatever the choice, GSC would then automatically compute three metrics: the widely used h-index; the i10-index, which is the number of articles with at least ten citations; and the total number of citations to the author’s articles.
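As an illustration of how little computation these profile metrics require, here is a hedged sketch in Python (ours, not Google’s code; the sample citation counts are invented) deriving all three numbers from the same list of per-paper citation counts:

```python
# Sketch of the three Google Scholar Citations profile metrics described above,
# computed from per-paper citation counts (illustrative only).

def profile_metrics(citations: list[int]) -> dict[str, int]:
    ranked = sorted(citations, reverse=True)                           # most-cited first
    h = sum(1 for rank, c in enumerate(ranked, start=1) if c >= rank)  # h-index
    i10 = sum(1 for c in citations if c >= 10)                         # papers with >= 10 citations
    return {"h_index": h, "i10_index": i10, "total_citations": sum(citations)}

print(profile_metrics([120, 45, 12, 9, 3, 0]))
# -> {'h_index': 4, 'i10_index': 3, 'total_citations': 189}
```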
At this point, the definition of the h-index no longer needed to be spelled out, and thousands of academics quickly made theirs available online. As Paul Wouters and Rodrigo Costas soon noted in their 2012 manuscript, this was a typical example of what they called “technologies of narcissism”: a mirror through which you and others look in order to measure your influence, evaluate your importance, and worry about your deficiencies.

From journals to articles to authors:
new policies for evaluation

Thus, despite professional scientometricians harshly criticizing the h-index for its crudeness, pointing out its variations and limits, or taming it into a g-index or a v-index, Hirsch’s utopian/dystopian vision became reality. Indeed, it is its very crudeness that made it easier to implement in databases and more readable to lay researchers. But its availability is not sufficient to explain the duration and intensity of its uses – the original paper will probably pass the 10,000-citation mark in 2020. There are two main, and very different, reasons for its popularity, which we must distinguish analytically. The first is the application of the algorithm to new objects, such as research groups or even journals. Confronted with its unexpected success and eager to use the newly available data, scientometricians got into an h-index frenzy3. While the dominant popular bibliometric index, both within the community and outside of it, had for 30 years been the Journal Impact Factor (JIF), then produced and owned by Thomson ISI, the growing success of the h-index made it a challenger in the bibliometric index “market”. Many papers compared the pros and cons of each algorithm, based on different datasets.

Nevertheless, this “good index” competition should not hide a second reason why the h-index became so popular and discussed. Its use was sustained by a political agenda in assessment and evaluation, best represented by the San Francisco Declaration on Research Assessment (DORA), published in 2013. This complex text is often summed up as an anti-bibliometrics statement in which signing institutions promise not to use the JIF as a way to evaluate research, whether for hiring, promotion, grants, etc. Beyond this simple vision, there are more nuanced recommendations that oppose journal-based metrics but do not reject bibliometrics as a whole. The JIF is seen as a bad way to perform quantified assessment, whereas article-level metrics, whatever they are (citations, downloads, views, social media mentions…), are seen as more representative of the “impact” of a given piece of research.

The problem with these new metrics is that nobody really knows what they are used for and what they really measure4. Consequently, the most popular article-level metric remains the number of citations in a given database (Web of Science, Scopus, Crossref, Google Scholar). Rather than summing these numbers for a given journal, the aggregation is made for a given author. It is so simple to perform on the same databases that what was absurd a few years ago has become natural. The “h revolution” therefore went beyond Hirsch’s own vision on two points. Firstly, whereas his h-index was meant for senior scientists, it is now also applied to/by early- and mid-career researchers as a more “ethical” way to judge their impact. Secondly, its extension goes hand in hand with a potential transformation of the model of scientific communication towards a post-journal world in which any kind of text could be cited and counted. Rather than highlighting the fact that you actually passed the test of supposedly prestigious journals, you now just give your h number and academic age, so that everybody can check whether you really are the successful scientist you claim to be (please cite selected papers of the author of this post so he may finally become one).

  1. This post is partially adapted from Pontille, David, and Didier Torny, « La manufacture de l’évaluation scientifique. Algorithmes, jeux de données et outils bibliométriques », Réseaux, 2013/1 (n° 177), p. 23-61. DOI: 10.3917/res.177.0023
  2. There are almost no differences between the arXiv manuscript from mid-August (V3) and the published PNAS paper
  3. See, for an early literature review, Bornmann, Lutz, and Hans-Dieter Daniel. “The state of h index research.” EMBO Reports 10.1 (2009): 2-6
  4. See Haustein, Stefanie, Timothy D. Bowman, and Rodrigo Costas. “Interpreting ‘altmetrics’: Viewing acts on social media through the lens of citation and social theories.” On arXiv

The Political Economy of Academic Publications

This blog is part of a vast research program on the political economy of scientific publication, which has been strongly transformed over the last twenty years by the electronic dissemination of journals. It considers publishers, editorial committees and journals as socio-political actors to be studied in three complementary aspects detailed below.

Firstly, they are analysed as economic actors defining publishing markets. The conditions under which these markets were created have been the subject of much criticism, and strong transnational mobilisations around open access have developed, influencing the construction of public policies that contrast sharply from one country to another. New economic models have emerged, of which direct payment by authors (APCs) is only the most visible, though not the most frequent. The multiplication of coloured labels (Green, Gold, Platinum, Bronze, Diamond) to designate these models does not fully account for their subtle differences, nor for the sustainability of the associated business models, compared with the classic subscription model, which has led to a “serials crisis” over the last 20 years with the massive increase in the cost of access to publications for libraries.


Secondly, journals and publishers are studied as places of production, including innovations in evaluation technologies (open peer review, technical-soundness-based review…). In particular, the growing debate on post-publication peer review policies, including the withdrawal of articles, will be examined, as well as the emergence of platforms for the public discussion of article validity, such as PubPeer. The question of the centrality of journals for peer review, or of their marginalization (overlay journals, recommendations…), will also be addressed.


Thirdly, journals are treated as places of valorisation, seeking to attract authors and promote their position through the use of different measures (citation, referencing, uses…), which they highlight or criticise. In addition to the recurring debates on the Journal Impact Factor, a measure that is currently much decried, there will be discussions on alternative metrics, or even on responsible metrics, which are supposed to better represent academic production and its uses.


These three aspects aim in particular at shedding light on new forms of self-regulation by academic actors (systematic publicising of article withdrawals, generalisation of post-publication peer review, stigmatisation of predatory publishers, uses of Creative Commons licenses…), on the innovative and argumentative work of publishers and platforms, whether public, para-public or private, and on the redefinition of public policies in the field of academic publication.