
Readers in the Making of Scholarly Knowledge or… how article (e)valuation has become more democratic

The wonderful book entitled Reassembling Scholarly Communications: Histories, Infrastructures, and Global Politics of Open Access, edited by Martin Paul Eve and Jonathan Gray and published by MIT Press, is finally out!

So, to push you to read the work of its diverse and enlightening contributors, I have remixed and shortened our chapter about readers and their empowerment in contemporary peer review. In the full version, we underlined one of the most decisive effects of open access: the accelerating rise to power of ordinary readers1.

Pre-Publication Peer Review as Reading

Throughout the history of peer review, the three judging instances that gradually emerged (editors-in-chief, editorial committees, outside reviewers) were the first readers of submitted manuscripts. This may seem trivial, but the essential activity of evaluating an article – unlike other types of academic evaluation – is indeed the handling of a text. Admittedly, peer review of an article can be considered to include many other things, such as checking that ethical rules are being followed or that data is actually being made available, but the question of taking into account the content of the article – whether in the form of a paper file or a computer file – has always been essential. Acts of reading are far from simple, whether you consider “geographies of reading”2 (with whom, where, in what setting), what attracts the attention of readers, how texts are annotated, how journals inform those practices, and what the purposes of such acts are.

Their respective importance and the way in which their readings are coordinated may be subject to local conventions at a journal, disciplinary, or historical level. They are also marked by profound divergences due to distinct issues in manuscript evaluation. The space of possibilities within which these readings are conducted is a subject for public debate that leads to the invention of labels and the stabilization of categories, and to the elaboration of procedural and moral norms. For example, on the respective anonymity of authors and referees, four labels have been coined since the 1980s:

                      Reviewers
Authors               Anonymized        Identified
Anonymized            Double blind      Blind review
Identified            Single blind      Open review

Source: David Pontille and Didier Torny, “The Blind Shall See! The Question of Anonymity in Journal Peer Review,” Ada 4 (2014), https://doi.org/10.7264/N3542KVW.

These spaces of possibility currently coexist in each discipline, being attached to different scientific and moral values, pertaining to the responsibility of reviewers, objectivity of judgements, transparency of process, and equity toward authors. The different possibilities here show that Merton’s “organized skepticism” and the agonistic nature of the production of scientific facts described by Latour and Woolgar long ago are, indeed, not self-evident. The contemporary moment is characterized by reflexive readings of peer-review technologies: manuscript evaluation has itself become an object of systematic scientific investigation. Authors, manuscripts, reviewers, journals, and readers have been scrupulously examined for their qualities and competencies, as well as for their “biases,” faults, or even unacceptable behavior. The diverse arrangements of manuscript evaluation are thus themselves systematically subjected to evaluation procedures.

Post-Publication Peer Review
as Ordinary Readers’ Empowerment?

Peer review in the twenty-first century can also be distinguished by a growing trend: the empowerment of “ordinary” readers as new key judging instances. If editors and reviewers produce judgments, it is through a reading within a very specific framework, as it is confined to restricted interaction, essentially via written correspondence, which aims at authorizing the dissemination of manuscripts-become-articles. Other forms of reading accompany publications and participate in their evaluation, independently of their initial validation.

Citing articles: with the popularization of bibliometric tools, citation counting has become a central element of journal and article evaluation. But it also requires a transformation of formats, an identification of references, and a fundamental shift: the act of referencing relates to a given author, whereas a citation is a new and perhaps calculable property of the source text, creating what Wouters called a “citation culture”. Highly disparate forms of intertextuality are then rendered commensurable: the measured or radical criticism of a thought or result, integration within a scientific tradition, reliance on a standardized method described elsewhere, existence of data for a literary journal or meta-study, simple recopying of sources, or self-promotion. Citation thus points towards two complementary horizons of reading: science as a system for accumulating knowledge via a referencing operation, and research as a necessary discussion of this same knowledge through criticism and commentary.

Commenting texts: in a view of publication as explicitly dialogical or polyphonic, readers can become commenters. Traditionally, before an article was published, comments were mainly directed toward the editor-in-chief or the editorial committee. Through open review, commenters enter into a dialogue with the authors and thus open up a space for direct confrontation. Prior to the emergence of electronic spaces for discussion, this role was played by objects like “special issues” or “reports”, in which a series of articles are brought together around a given theme to feed off one another after a short presentation. Post-publication commenting was also common through two elementary forms: referring to the original article or sending a letter to the editor. The electronic space led to many experiments in post-publication commenting: most of them met with little success (PLOS, Nature, PubMed Central…), until the unexpected success of anonymized comments on PubPeer.

Sharing papers: until recently, readers other than citers and commenters remained very much in the shadows. Yet library users, students in classes, and colleagues in seminars, to take just a few examples, also ascribe value to articles, for example through annotation. The existence of articles in electronic form has made their readers more visible. Persons who access an “HTML” page or who download a “PDF” file are now taken into account, whereas in the past it was only the distribution of journals and texts, mostly through libraries, which allowed one to assess potential readership. By inventorying and aggregating the audience in this way, it is possible to assign readers the capacity to evaluate articles. The creation of online academic social networks (e.g., ResearchGate, Academia.edu) has trivialized this figure of the public, not only by counting “academic users,” but also by naming them and offering contact. At the same time, online bibliographic tools (e.g., CiteULike, Mendeley, Zotero) objectify the readers and taggers who introduce references and attached documents into their bibliographic databases. Without being citers themselves, these readers select publications by sharing lists of references, the pertinence of which is signaled by the use of “tags.” These reader-taggers are also embedded in the use of hyperlinks within “generalist” social networks (e.g., Facebook, Twitter), alerting others to interesting articles or briefly commenting on their content, feeding the whole “article-level metrics” movement. Here the readers, tracked by number and diversity, revalidate articles in the place of the judging instances historically qualified to do so.

Examining documents: this movement is even more significant in that these tools are applied not only to published articles but also to documents which have not been validated, thanks to the growth of preprint servers. This flow of electronic manuscripts feeds the enthusiasm of the most visionary who, since the 1990s, have been announcing the end of journals. On the contrary, we observed that new technologies have been built on these archives, such as “overlay journals,” in which available manuscripts are later validated by reading peers in various ways. With a view to dissemination, advocates of readers as a judging instance tend to downplay the importance of prior validation. While the validation process sorts manuscripts in a binary fashion (accepted or rejected), such advocates contend that varied forms of dissemination instead encourage permanent discussion and argument along a text’s entire trajectory. In this perspective, articles remain “alive” after publication and are therefore always subject not only to various reader appropriations, but also to public evaluations, which can reverse their initial validation through the flagging of articles in official journal policies.

The Academic Closet
vs. The Readers’ Bazaar

Driven by a constant process of specialization, the extension of judging instances to readers may appear as a reallocation of expertise, empowering a growing number of people in the name of distributed knowledge. In an ongoing context of revelations of massive scientific fraud, which often implicates editorial processes and journals themselves, the dereliction inherent to judging instances prior to publication has transformed the mass of readers into a vital resource for unearthing error and fraud. As in other domains where public expertise used to be exclusively held by a few professionals, crowdsourcing has become a collective gatekeeper for science publishing. Thus peerdom shall be reshaped, as lay readers now have full access to a large part of the scientific literature and have become valued audiences as quantified end-users of published articles.

If open science has become a motto, it encompasses two different visions for journal peer review. The first one, which includes open identities, takes place within the academic closet, where the dissemination of manuscripts is made possible by small discourse collectives which shape consensual facts. This vision is supported by the validation processes designed by Robert Boyle during the emergence of modern scientific practices. By contrast, in a Hobbesian fashion, the second one urges openness in multiple ways, building an academic democracy where each reading may literally be counted. The disentanglement of peer evaluation goes through the ability given to readers to comment on published articles, to produce social media metrics through the sharing of documents, and to observe the whole evaluation process of each manuscript. In this vision, scholarly communication relies not only on crowdsourced peer review but on a plurality of instances that generates a continuous process of judgment. The first vision has been at the heart of the scientific article as a genre, and a key component of the scientific journal as the most important channel for scholarly communication. Whether journals remain central in the second world has yet to be determined.

  1. “We” means David Pontille and myself. You can read the full chapter here. Of course, as readers, you are welcome to cite, comment, share & examine this chapter.
  2. Livingstone, David N. “Science, text and space: thoughts on the geography of reading.” Transactions of the Institute of British Geographers 30.4 (2005): 391-401.

The short history of the h-index or… being one click away from determining whether you are a successful scientist

Sometimes, newness really happens in the academic world. Take bibliometric indicators: for at least a century they have usually been proposed and discussed by specialized scientists from a scientometrics background in their field journals (currently JASIST, Scientometrics, …) and nobody else cared, at least for some time. But in 2005, something very peculiar happened: the h-index was coined by a total stranger to that field and was an instant success, one which endures to this day. How did that happen, and why such a success?1

Physicist J. E. Hirsch proposed the h-index in a working paper posted on August 3rd, 2005 on arXiv, the famous open archive developed by physicists in Los Alamos. In his manuscript, he discussed the issue of comprehensively evaluating a researcher and wished, in a radical way, to subsume their whole career into a simple and practical measurement: an integer. To do so, he considered that both production and its uses had to be taken into account through a citation measurement. So, the number h is the greatest number for which h articles by an author have at least h citations. For example, for 5 articles cited at least 5 times, h is 5; likewise, for 50 articles cited at least 50 times each, h is 50.
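For readers who prefer code to prose, here is a minimal sketch in Python of the computation just described; the function name and the sample citation counts are purely illustrative and are not taken from Hirsch’s paper.

```python
def h_index(citations):
    """Largest h such that at least h papers have at least h citations each."""
    # Rank papers from most to least cited; h is the last rank at which
    # the citation count still meets or exceeds the rank.
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(ranked, start=1):
        if count >= rank:
            h = rank
        else:
            break
    return h

# Examples echoing the text: five papers cited five times each give h = 5,
# and fifty papers cited fifty times each give h = 50.
print(h_index([5, 5, 5, 5, 5]))      # 5
print(h_index([50] * 50))            # 50
print(h_index([10, 8, 5, 4, 1]))     # 4
```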

Yet the use of algorithms concerning authors was not new. Eugene Garfield, the founder of the Institute for Scientific Information (ISI), claimed to regularly predict Nobel prizes using the Science Citation Index and had then developed the “ISI Highly Cited,” presenting results for a tiny fraction of “top” researchers; similarly, a few disciplines like economics and management had a long tradition of ranking “top authors.” But, to our knowledge, no algorithm had to that date been specifically designed to evaluate authors. This novelty also comes from the lack of consideration Hirsch displayed for the existing literature: there were only four references in his manuscript, and only one in the field.

This departure from the scientometric tradition enabled Hirsch to make several shifts. Firstly, he did not take into account the journals in which articles are published, probably because in high-energy physics, journals are used to archive knowledge more than to make discoveries public. Secondly, he excluded the pitfalls of the number and order of coauthors, which are of little relevance in physics but crucial in biomedical research for individual evaluation. Thirdly, his index combined two elements considered heterogeneous in the scientometric tradition: production on the one hand, and use on the other. Fourthly, whereas scientometricians are always very cautious about individual analysis and save it for “outliers”, Hirsch proposed a measurement that applies to all researchers and, as the cherry on the cake, argued that it would be of some use for the allocation of research funds.

How did he make such a bold move? Based on numbers crunched in the case of high-energy physicists, Hirsch forged a model of the “successful scientist”. As the result of his algorithm very much depends on the duration of a researcher’s career, Hirsch considered h divided by the number of years of the career to be a good indicator.


“An h index of 20 after 20 years of scientific activity, characterizes a successful scientist. […] an h index of 40 after 20 years of scientific activity, characterizes outstanding scientists, likely to be found only at the top universities or major research laboratories. […] an h index of 60 after 20 years, or 90 after 30 years, characterizes truly unique individuals”.
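As a loose illustration (mine, not Hirsch’s), the quoted thresholds amount to a ratio of h to career length of roughly 1, 2, and 3. The sketch below simply encodes the labels from the quote; the function name and the fallback label are my own inventions.

```python
def hirsch_profile(h, career_years):
    """Classify a researcher using the h-per-year thresholds quoted above."""
    ratio = h / career_years
    if ratio >= 3:
        return "truly unique individual"
    if ratio >= 2:
        return "outstanding scientist"
    if ratio >= 1:
        return "successful scientist"
    return "below the quoted thresholds"

print(hirsch_profile(20, 20))  # successful scientist
print(hirsch_profile(40, 20))  # outstanding scientist
print(hirsch_profile(90, 30))  # truly unique individual
```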


At that point, Hirsch would probably have been considered a bibliometrics crackpot: a talented physicist who happened to crunch citation numbers in his spare time and to post his “personal views” on a website.

An instant success,
an impressive series of implementations

To the surprise (and probably horror) of the scientometrics community, the h-index was taken up at a staggering rate. As soon as his manuscript became available on arXiv, Hirsch received extensive feedback from his physicist colleagues and, in view of the shared enthusiasm, an open archive specialized in high-energy physics, SPIRES (Stanford Physics Information REtrieval System), implemented the algorithm on its dataset only two weeks later. The same day, August 17th, 2005, Nature presented Hirsch’s proposition and highlighted his colleagues’ enthusiasm, while an editorial entitled “Rating Games” discussed the respective roles of metrics and peer review. Shortly afterwards, in November 2005, two of Hirsch’s colleagues published the manuscript as an article2 in the Proceedings of the National Academy of Sciences, thus confirming physicists’ keen interest in this new measurement.

The popularization of the h-index took a new turn a year later with a bibliometric tool developed by Anne-Wil Harzing, Publish or Perish (PoP). In October 2006, this management professor at the University of Melbourne put online a small piece of software operationalizing the h-index calculation. For any author whose name is entered by the user, irrespective of the discipline, PoP calculates their h-index in a single click, based on the nascent Google Scholar dataset. The tool could be downloaded for free and has thus allowed the magic algorithm to reach users far beyond audiences specialized in scientometrics. Researchers and institutions adopted this bibliometric tool so fast that the British Medical Journal published a spoof article describing the different pathologies it generates. Meanwhile, in May 2007, Elsevier included the h-index in Scopus; Thomson Reuters likewise changed its “ISI Highly Cited” and integrated the index into the Web of Science (WoS) in 2008. Thus, in just three years, individual measurement became a practical operation drawing on bibliometric tools easily accessible to every researcher.

This cycle of implementation was provisionally completed by the opening of Google Scholar Citations (GSC) in the summer of 2011. With this new service, every academic could create and control her/his profile page on Google Scholar, and could decide whether or not to make it public. Whatever the choice, GSC would then automatically compute three metrics: the widely used h-index; the i10-index, which is the number of articles with at least ten citations; and the total number of citations to their articles. At this point, the definition of the h-index no longer needed to be spelled out, and thousands of academics quickly made their profiles available online. As Paul Wouters and Rodrigo Costas soon noted in their 2012 manuscript, this was a typical example of what they named “technologies of narcissism”: a mirror through which you and others look in order to measure your influence, evaluate your importance, and worry about your deficiencies.
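For the curious, here is a minimal sketch of how these three profile metrics relate to a list of per-article citation counts. It is an illustration under my own assumptions, not Google Scholar’s actual implementation, and the function name and sample numbers are invented.

```python
def scholar_metrics(citations):
    """Compute total citations, h-index, and i10-index from per-article counts."""
    ranked = sorted(citations, reverse=True)
    # Counts are non-increasing, so counting ranks where count >= rank gives h.
    h = sum(1 for rank, count in enumerate(ranked, start=1) if count >= rank)
    # i10: number of articles with at least ten citations.
    i10 = sum(1 for count in citations if count >= 10)
    return {"citations": sum(citations), "h_index": h, "i10_index": i10}

# Illustrative profile with six articles:
print(scholar_metrics([120, 45, 12, 9, 3, 0]))
# {'citations': 189, 'h_index': 4, 'i10_index': 3}
```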

From journals to articles to authors:
new policies for evaluation

Then, despite professional scientometricians harshly criticizing the h-index for its crudeness, pointing out its variations and limits, or taming it into a g-index or a v-index, the utopic/dystopic vision of Hirsch had become reality. Conversely, it is its very crudeness that made it easier to implement in databases and more readable to lay researchers. But its availability is not sufficient to explain the duration and intensity of its uses – the original paper will probably pass the 10,000-citation mark in 2020. There are two main and very different reasons for its popularity, which we must analytically distinguish. The first one is the application of the algorithm to new objects, such as research groups or even journals. Confronted with its unexpected success and eager to use the new data available, scientometricians got into an h-index frenzy3. While the dominant popular bibliometric index, both within the community and outside of it, had for 30 years been the Journal Impact Factor (JIF), then produced and owned by Thomson ISI, the growing success of the h-index made it a challenger in the bibliometric index “market”. Many papers compared the pros and cons of each algorithm, based on different datasets.

Nevertheless, this “good index” competition shouldn’t hide a second reason why the h-index became so popular and discussed. Its use was sustained by a political agenda in assessment and evaluation, best represented by the San Francisco Declaration on Research Assessment (DORA) published in 2013. This complex text is often summed up as an anti-bibliometrics statement, in which signing institutions promise they won’t use the JIF as a way to evaluate research, whether for hiring, promoting, giving grants, etc. Beyond this simple vision, there are more nuanced recommendations that oppose journal-based metrics but do not reject bibliometrics as a whole. The JIF is seen as a bad way to perform quantified assessment, while article-level metrics, whatever they are (citations, downloads, views, social media mentions…), are seen as more representative of the “impact” of a given piece of research.

The problem with these new metrics is that nobody really knows what they are used for and what they really measure4. Consequently, the most popular article-level metric remains the number of citations in a given database (Web of Science, Scopus, Crossref, Google Scholar). Rather than summing these numbers for a given journal, the aggregation is made on a given author. It is so simple to perform on the same databases that what was absurd a few years ago has become natural. The “h revolution” therefore went beyond Hirsch’s own vision on two points. Firstly, while his h-index was meant for senior scientists, it is now also being applied to/by early- and mid-career researchers as a more “ethical” way to judge their impact. Secondly, its extension goes hand in hand with a potential transformation of the model of scientific communication, towards a post-journal world in which any kind of text could be cited and counted. Rather than highlighting the fact that you actually passed the test of supposedly prestigious journals, you now just give your h number and academic age, so everybody can check whether you really are the successful scientist you pretend to be (please cite selected papers of the author of this post so he may finally become one).

  1. This post is partially adapted from Pontille, David, and Torny, Didier, « La manufacture de l’évaluation scientifique. Algorithmes, jeux de données et outils bibliométriques », Réseaux, 2013/1 (n° 177), p. 23-61. DOI: 10.3917/res.177.0023
  2. There are almost no differences between the arXiv manuscript from mid-August (V3) and the published PNAS paper.
  3. For an early literature review, see Bornmann, Lutz, and Hans-Dieter Daniel. “The state of h index research.” EMBO Reports 10.1 (2009): 2-6.
  4. See Haustein, Stefanie, Timothy D. Bowman, and Rodrigo Costas. “Interpreting ‘altmetrics’: Viewing acts on social media through the lens of citation and social theories,” on arXiv.