
Matilda is finally available… or how open academic search engines are a key part of open science

Matilda homepage, 6th October 2023.

There was a time, towards the end of the 20th century, when things were simple. If you just wanted to count the publications of an author, an institution or a country, you had to refer to the databases of the Institute for Scientific Information (ISI), created and directed by Eugene Garfield. The most famous of these, the Science Citation Index, was built on the idea of selecting the most relevant journals to capture the heart of science, in the already long tradition of library science. And these core journals were sufficient to draw a relevant picture of the whole of scientific content. Taking the part for the whole raised many questions about the representativeness of the journals included, data and calculation errors, and biases in favour of certain disciplines, languages and countries, but as Margaret Thatcher said about her economic world: ‘There is no alternative’.

25 years later, commercial competition is fierce between Clarivate’s Web of Science, Elsevier’s Scopus and (almost) Springer’s Dimensions to capture the most money available from Higher Education & Research institutions. In another world, Google Scholar (GS) has woven its web, the only corporate service without advertising or direct tracking of usage. But these systems still have their drawbacks: the commercial databases remain exclusion machines, deciding what is “searchable” among the whole literature; GS services are restricted in their uses (e.g. no massive downloads) and its sources are neither described nor open.

This is the landscape in which Matilda was created, thanks to Huma-Num and an ANR grant. If you want to know more about how it was envisioned in 2019, there is an “origins” paper1. For now, let’s get straight to the tutorial. The video below is all you need to use it: no API coding, no computer skills, just an idea of what you are searching for, whether you are an academic or simply someone eager to find academic sources.

Open citations at the heart, open data everywhere.

Matilda is one of the outcomes of the “open citations” movement. Originally, in 2010, it was a reference data corpus, the Open Citations Corpus (Peroni et al. 2015), before these remarkable precursors2 were joined by various organizations demanding that publishers open the citation data they deposit in Crossref. The I4OC collective consequently obtained the availability, under a CC0 license and by default, of the whole Crossref citation database. But what to do with this pile of data? A number of tools, including VOSviewer developed by Leiden University, use it. However, the hope was that other actors would take this new shared resource and build services on it. Like the Open Citations databases3, these tools often presuppose professional users, either experts in API manipulation or people interested in very advanced bibliometric developments. Matilda took a different approach by making the simplest possible tool.

Follow an author, track citations of a core text in your field, search for texts with a given expression in their title, download full metadata to Zotero, download a copy of the text if it is legally available, create an alert through a publicly available RSS feed, share it with your team through a Zotero group: all this and more with just a few clicks. It is free, and so are the results, which are reusable because the metadata has been liberated thanks to these activists and to the collective movement that followed, including publishers.

Almost real time: always get the freshest texts

Even if there is almost no literature on how academics practically search for their sources, we assume that when they know their field, they are searching for new information, that is, texts that weren’t there yesterday but are available today. That has been the promise of many information devices, from the first academic journals to the ISI Current Contents, from abstract/review journals to contemporary Scopus/WoS alerts.

Beyond openness, one of the promises of Matilda is to offer you this freshness by going to the sources, applying YOUR search keys and delivering the results to you in no time. In practice, that means that around two days after their creation in Crossref, RePEc, arXiv or PubMed, you will get the relevant metadata in your Zotero RSS feed. On average, around 40,000 new texts appear in Matilda and some will probably interest you: you discover the title, read the abstract and include it in your bibliography, while others will be rejected.
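
For readers who would rather script this step than rely on Zotero’s built-in feed reader, here is a minimal sketch of polling such a feed with Python and the feedparser library; the feed URL is a placeholder standing in for the address Matilda gives you when you save a search.

```python
# Minimal sketch: polling an RSS feed of newly harvested records.
# The URL below is hypothetical, not a real Matilda endpoint.
import feedparser

FEED_URL = "https://example.org/matilda/feeds/my-saved-search.rss"  # placeholder

feed = feedparser.parse(FEED_URL)
for entry in feed.entries:
    # Each RSS entry typically carries a title, a link and a publication date.
    print(entry.get("published", "n.d."), "-", entry.get("title", "untitled"))
    print("   ", entry.get("link", ""))
```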

What’s next and how you can help Matilda

The current version of Matilda is V2.0.2 and we have money to build V3 with plenty of new features, the most spectacular being full-text search: we will index every PDF we find so that you can add these results to those based on metadata. We will also add Boolean operators for search – currently the default is OR. In the long run, the code will be available – everything is open source software – and APIs will be open for direct reuse, so that, for example, instead of an uncheckable “WoS citations” count, you will find a traceable “Matilda citations” count.

We are also thinking about adding new sources, such as aggregated online archives, as we wish to be as inclusive as possible, so that YOU choose what is relevant for your research, not US.

The ultimate aim is clear: offer an alternative to current WoS/Scopus users, so that their institutions stop paying millions for tools that were not made for lay researchers – the bibliometric uses of such platforms are debatable, though the growing Open Research Information movement could also push them into history. Show that we need to decolonize scholarly metadata, which has long been limited to 1/ journals 2/ with articles written in English 3/ from Global North scholars 4/ and especially those owned or disseminated by big publishers. Matilda also aims at providing an open alternative to Google Scholar, with open, traceable sources and enrichments and no limitations on downloads and uses. As everybody knows, Google can decide to shut down services in a day, so there is no guarantee that GS will exist in the long run.

What can you do to help develop and sustain this open science platform? First, talk about it, create and share links, go to your institution’s head and show them that they could invest in open science rather than funding capitalistic villains. Second, use it, test it, send us some feedback, good or bad, ask for features, explain what you need and expect from such a tool. Third, your IP addresses are not traced, but we do have an aggregated picture of RSS feed usage, so even by just using it, you will help us.

  1. Didier Torny, Laurent Capelli, Lydie Danjean, Stéphane Pouyllau. Matilda: Building a bibliographic/metric tool for open citations and open science. ELPUB 2019 23rd edition of the International Conference on Electronic Publishing, Jun 2019, Marseille, France. ⟨10.4000/proceedings.elpub.2019.22⟩. ⟨hal-02141839⟩ []
  2. Disclaimer: I have been a member of the Advisory Board of Open Citations on behalf of the French Open Science Committee since 2021 []
  3. see Heibi, I., Peroni, S. & Shotton, D. Software review: COCI, the OpenCitations Index of Crossref open DOI-to-DOI citations. Scientometrics 121, 1213–1228 (2019). https://doi.org/10.1007/s11192-019-03217-6 []

Readers in the Making of Scholarly Knowledge or… how article (e)valuation has become more democratic

The wonderful book entitled Reassembling Scholarly Communications: Histories, Infrastructures, and Global Politics of Open Access, edited by Martin Paul Eve and Jonathan Gray at MIT Press, is finally out!

So, to push you to read the work of these diverse and enlightening contributors, I have remixed and shortened our chapter about readers and their empowerment in contemporary peer review. In the full version, we underlined one of the most decisive effects of open access: the accelerating rise to power of ordinary readers1.

Pre-Publication Peer Review as Reading

Throughout the history of peer review, the three judging instances that gradually emerged (editors-in-chief, editorial committees, outside reviewers) were the first readers of submitted manuscripts. This may seem trivial, but the essential activity of evaluating an article – unlike other types of academic evaluation – is indeed the handling of a text. Admittedly, the peer review of an article can be considered to include many other things, such as checking that ethical rules are being followed or that data is actually being made available, but the question of taking into account the content of the article – whether in the form of a paper file or a computer file – has always been essential. The acts of reading are far from simple, whether you consider “geographies of reading”2 (with whom, where, in what setting), what attracts the attention of readers, how texts are annotated, how journals inform those practices, and what the purposes of such acts are.

Their respective importance and the way in which their readings are coordinated may be subject to local conventions at a journal, disciplinary, or historical level. They are also marked by profound divergences due to distinct issues in manuscript evaluation. The space of possibilities within which these readings are conducted is a subject for public debate that leads to the invention of labels and the stabilization of categories, and to the elaboration of procedural and moral norms. For example, on the respective anonymity of authors and referees, four labels have been coined since the 1980s:

                 Reviewers
Authors          Anonymized        Identified
Anonymized       Double Blind      Blind review
Identified       Single Blind      Open review

Source: David Pontille and Didier Torny, “The Blind Shall See! The Question of Anonymity in Journal Peer Review,” Ada 4 (2014), https://doi.org/10.7264/N3542KVW.

These spaces of possibility currently coexist in each discipline, being attached to different scientific and moral values, pertaining to the responsibility of reviewers, objectivity of judgements, transparency of process, and equity toward authors. The different possibilities here show that Merton’s “organized skepticism” and the agonistic nature of the production of scientific facts described by Latour and Woolgar long ago are, indeed, not self-evident. The contemporary moment is characterized by reflexive readings of peer-review technologies: manuscript evaluation has itself become an object of systematic scientific investigation. Authors, manuscripts, reviewers, journals, and readers have been scrupulously examined for their qualities and competencies, as well as for their “biases,” faults, or even unacceptable behavior. The diverse arrangements of manuscript evaluation are thus themselves systematically subjected to evaluation procedures.

Post-Publication Peer Review
as Ordinary Readers’ Empowerment?

Peer review in the twenty-first century can also be distinguished by a growing trend: the empowerment of “ordinary” readers as new key judging instances. If editors and reviewers produce judgments, it is through a reading within a very specific framework, as it is confined to restricted interaction, essentially via written correspondence, which aims at authorizing the dissemination of manuscripts-become-articles. Other forms of reading accompany publications and participate in their evaluation, independently of their initial validation.

Citing articles: with the popularization of bibliometric tools, citation counting has become a central element of journal and article evaluation. But it also needed a transformation of formats, an identification of references and a fundamental shift: the act of referencing relates to a given author, whereas a citation is a new and perhaps calculable property of the source text, creating what Wouters called the “citation culture”. Then, highly disparate forms of intertextuality are rendered commensurable: the measured or radical criticism of a thought or result, integration within a scientific tradition, reliance on a standardized method described elsewhere, existence of data for a literature review or meta-study, simple recopying of sources or self-promotion. Citation thus points towards two complementary horizons of reading: science as a system for accumulating knowledge via a referencing operation, and research as a necessary discussion of this same knowledge through criticism and commentary.

Commenting texts: in a view of publication as explicitly dialogical or polyphonic, readers can become commenters. Traditionally, before an article was published, comments were mainly directed toward the editor-in-chief or the editorial committee. Through open review, commenters enter into a dialogue with the authors and thus open up a space for direct confrontation. Prior to the emergence of electronic spaces for discussion, this took the form of objects like “special issues” or “reports”, in which a series of articles are brought together around a given theme to feed off one another after a short presentation. Post-publication commenting was also common through two elementary forms: referring to the original article or sending a letter to the editor. The electronic space led to many experiments in post-publication commenting: most of them met no success (PLOS, Nature, PubMed Central…), until the unexpected success of anonymized comments on PubPeer.

Sharing papers: until recently, readers other than citers and commenters remained very much in the shadows. Yet library users, students in classes, and colleagues in seminars, to give just a few examples, also ascribe value to articles, for example through annotation. The existence of articles in electronic form has made their readers more visible. Persons who access an “HTML” page or who download a “PDF” file are now taken into account, whereas in the past it was only the distribution of journals and texts, mostly through libraries, which allowed one to assess potential readership. By inventorying and aggregating the audience in this way, it is possible to assign readers the capacity to evaluate articles. The creation of online academic social networks (e.g., ResearchGate, Academia.edu) has trivialized this figure of the public, not only by counting “academic users,” but also by naming them and offering contact. At the same time, online bibliographic tools (e.g., CiteULike, Mendeley, Zotero) objectify the readers and taggers who introduce references and attached documents into their bibliographic databases. Without being citers themselves, these readers select publications by sharing lists of references, the pertinence of which is signalled by the use of “tags.” These reader-taggers are also embedded in the use of hyperlinks within “generalist” social networks (e.g., Facebook, Twitter), alerting others to interesting articles or briefly commenting on their content, feeding the whole “article-level metrics” movement. Here the readers, tracked by number and diversity, revalidate articles in the place of the judging instances historically qualified to do so.

Examining documents: this movement is even more significant in that these tools are applied not only to published articles but also to documents which have not been validated, through the growth of preprint servers. This flow of electronic manuscripts feeds the enthusiasm of the most visionary who, since the 1990s, have been announcing the end of journals. On the contrary, we observed that new technologies have been built on these archives, such as “overlay journals,” in which available manuscripts are later validated by reading peers in various ways. With a view to dissemination, advocates of readers as a judging instance tend to downplay the importance of prior validation. While the validation process sorts manuscripts in a binary fashion (accepted or rejected), such advocates contend that varied forms of dissemination instead encourage permanent discussion and argument along a text’s entire trajectory. In this perspective, articles remain “alive” after publication and are therefore always subject not only to various reader appropriations, but also to public evaluations, which can reverse their initial validation through the flagging of articles in official journal policies.

The Academic Closet
vs. The Readers Bazaar

Driven by a constant process of specialization, the extension of judging instances to readers may appear as a reallocation of expertise, empowering a growing number of people in the name of distributed knowledge. In an ongoing context of revelations of massive scientific fraud, which often implicates editorial processes and journals themselves, the dereliction inherent in judging instances prior to publication has transformed the mass of readers into a vital resource for unearthing error and fraud. As in other domains where public expertise used to be exclusively held by a few professionals, crowdsourcing has become a collective gatekeeper for science publishing. Thus peerdom shall be reshaped, as lay readers now have full access to a large part of the scientific literature and have become valued audiences as quantified end-users of published articles.

If open science has become a motto, it encompasses two different visions for journal peer review. The first one, which includes open identities, takes place within the academic closet, where the dissemination of manuscripts is made possible by small discourse collectives which shape consensual facts. This vision is supported by the validation processes designed by Robert Boyle during the emergence of modern scientific practices. By contrast, in a Hobbesian fashion, the second one urges an openness in multiple ways, building an academic democracy where each reading may literally be accounted for. The disentanglement of peer evaluation goes through the ability given to readers to comment on published articles, to produce social media metrics through the sharing of documents, and to observe the whole evaluation process of each manuscript. In this vision, scholarly communication not only relies on crowdsourced peer review but on a plurality of instances that generate a continuous process of judgment. The first vision has been at the heart of the scientific article as a genre, and a key component of the scientific journal as the most important channel for scholarly communication. Whether journals remain central in the second world has yet to be determined.

  1. “We” means David Pontille and myself. You can read the full chapter here. Of course, as readers, you are welcome to cite, comment, share & examine this chapter []
  2. Livingstone, David N. “Science, text and space: thoughts on the geography of reading.” Transactions of the institute of British geographers 30.4 (2005): 391-401. []

The absurd race for university rankings… or how publications are transformed into bad data

Lockdown or not, COVID-19 first wave in progress or over, universities open, teleworking or closed, it will still be out when August 15 comes. The ARWU Shanghai Ranking, like its cousins the THE World University Rankings, the QS World University Rankings and others of the same genre, has its season, its inflexible communication, its teasers, its sports-style announcements of the year’s winners and emerging stars.

The recurrent criticisms levelled against them and the evisceration of their methods by scientometricians and ranking specialists1 will not change anything in their imperturbable march. So why write about it? Not to sum up their four-decade history2, but because publications play a minor, though not negligible, role in them, and their successive transformations into bad data are an interesting case study. Let’s turn this midsummer’s nightmare into an ironic and fun bedtime story for academics, mostly thanks to some incredible French stories!

The tale of a French “emerging university”.
How to become ranked

The French HER system is incredibly ill-suited to these rankings. Indeed, not only do we have universities on one side and powerful research organisations on the other, but both share “joint research units” bringing together academics working for 2, 3, 4… up to 8 different employers. As a result, the signatures of scholarly publications are very long, contain multiple institutions for each author, and are subject to wide variation for a given lab or department.
Moreover, from 2007 onwards, laws have defined the framework for “new universities”, regrouping institutions on a rather geographical basis. We will follow here the example of PSL3 (“Paris Sciences & Lettres”), whose name gives a foretaste of its diversity, with 11 establishments and 3 research organisations. Although there is only one university – Paris Dauphine – among all these institutions, some were already taken into account by rankers, such as the Ecole Normale Supérieure, very famous for its mathematics department. So how can they ensure that “PSL” becomes the brand name and that all academic outputs are counted under its name?

The only solution is to change the signature rules, homogenize them, and then measure their practical implementation by reluctant researchers. This is all the more difficult as the grouping of institutions did not make them disappear, and each one therefore retains its staff, budget, premises and laboratories. Thus, it took years of negotiations for an agreement to be signed by all the institutions and, three years after the model was defined in 2015, about 70% of publications were signed in accordance with it, once a “simplification” of the byline had been carried out, resulting in the following model of affiliation:

Institution Name, PSL University, [institute or department], Research Organization, [joint research unit number], University co-chair, Laboratory, [Team], [Address], Postal Code, Town, France
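
As a toy illustration of what “measuring their practical implementation” can involve, here is a minimal, hypothetical sketch that simply checks whether “PSL University” appears among the comma-separated elements of a byline; the actual monitoring carried out by PSL is certainly more sophisticated and is not described here.

```python
# Hypothetical sketch: count how many affiliation strings include
# "PSL University" as one of their comma-separated elements.
def mentions_psl(affiliation: str) -> bool:
    parts = [part.strip() for part in affiliation.split(",")]
    return "PSL University" in parts

bylines = [
    "Institution Name, PSL University, Research Organization, Laboratory, 75005 Paris, France",
    "Institution Name, Research Organization, Laboratory, 75005 Paris, France",
]
compliant = sum(mentions_psl(b) for b in bylines)
print(f"{compliant}/{len(bylines)} bylines follow the PSL signature model")
```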

But this is not over, as they then have to convince rankers and their underlying data sources (mostly Scopus from Elsevier and WoS from Clarivate) to pick “PSL University”, in second position, from this long list of possible affiliations in the byline. For that, they used the knowledge and work of a “ranking optimization policy officer”, Daniel Egret, a former astrophysicist. This is no surprise: we have known for more than a decade that HER institutions react to rankings in different ways, trying to optimize their place if they think the goal is worth it4. Finally, like any other university, they cherry-pick among all relevant rankings and boast about their amazing results on their website:

It is somewhat ironic that PSL should congratulate itself on its first place among “new universities” when most of its institutions are centuries old, but PSL is just playing the game by pushing to the limit the “optimization” linked to changes in the signatures of its affiliated researchers. However, as in fiscal matters, the line between “optimization” and “fraud” is tenuous, and the management of the Web of Science “Highly Cited Researchers” shows us two examples of this.

It’s the authorship, stupid!
From private gaming to alternative facts

On this blog, we have already encountered this Web of Science tool, in the case of KC Chou, a serial peer review hacker who finally got caught after he became one of these HCRs. At that time, like many others, he had acquired a secondary affiliation: as Yves Gingras already noted six years ago, these “secondary institutions” tend to be concentrated in a few countries, notably Saudi Arabia. Moreover, through its combination of indices, 20% of the Shanghai ranking derives directly from the number of HCR researchers at the university in question. In other words, it is no longer a question of directing publications one by one to the computing centre of a ranker, as PSL did, but of reassigning the most prolific research producers to a given university. In exchange for a given lump sum, the researchers in question will therefore “sell” their authorship, possibly on the articles themselves, but especially in their response to the Web of Science’s query as to their affiliations. When HCR data became available, scientometricians analyzed it and came to the conclusion that secondary affiliations should be left out to avoid massive ranking manipulation5.

The gaming had become too visible, forcing the ranker to change its methodology, as it stated that “only the primary affiliations of new Highly Cited Researchers are considered in the calculation of an institution’s HiCi score for the new list.” While this new method limits the direct interest of contractualisation between authors and institutions, it raises specific problems for French universities. Indeed, because of the multiplicity of affiliations described above, many authors employed by research organizations indicate to the Thomson Reuters/Clarivate/Web of Science group that their main affiliation is CNRS, INSERM or INRA, which are not ranked. As a result, the research-intensive universities, united under the umbrella of CURIF, have lobbied extensively to get these full-time researchers to indicate that University X is indeed their first affiliation and not their secondary one. After a long struggle, they obtained satisfaction in 2019 when the Higher Education and Research Minister herself, Frédérique Vidal, signed a letter asking HCR researchers to pick their associated university rather than their own organization. This move has only one purpose, as detailed by Daniel Egret in the French economic press:

“The astrophysicist predicted a gain of 84 places at the University of Lorraine, 57 places for Toulouse-III and 26 places for Montpellier. The most “spectacular” effect of the new affiliation rules will mainly concern universities that are between the 200th and 300th place in Shanghai, according to Daniel Egret, because, at this level, “the scores are very close between universities, which can make them gain several dozen places”. The effect will be less visible for universities in the top 100, where the scores are “quite scattered”. Paris-Saclay would gain 3 places, and Sorbonne University, 5 places”. ((Les Echos, 28th February 2019))

There are no hidden practices, secret arrangements or small manipulations here, as in the data reported on former students’ salaries or employment levels by US management schools, even at UC Berkeley. It is the open admission of fudging the data in order to skew rankings deemed so unfavourable to French universities, and thus to produce alternative facts. Yes, these publications exist, they have been authored by Mr. X and Ms. Y, they just happen to work for a different organization, who cares?

Who are the users of these rankings?
Sold and actual markets

This example shows us how ambiguous university rankings are as a tool, not only because the measures they aggregate may be biased, false, or even falsified, but in their very objectives, whether on the side of their creators or of their multiple users. Indeed, it is common knowledge that the purpose of the mergers of French institutions was to move them up in the rankings, particularly that of Shanghai, even if the calculable effects were far from certain6. The reassignment of authorship is only the latest adjustment, after signature changes, to obtain good rankings through publications. Other strategies, such as lobbying voters in reputation rankings, follow the same logic. But why, how and to whom is it useful to be “top ranked”?

In theory and in the public discourse of the rankers, rankings are supposed to inform students’ choice of higher education institution. Supported by an audit society vision and the idea that something like a global market for universities actually exists, rankers present themselves as third-party certified information providers. This leads to two questions: on the one hand, is it useful information for students; on the other hand, to which other stakeholders could it be useful? The first question is not easy to answer, as Ellen Hazelkorn has recalled: the existing literature shows no clear sign of massive use by students to choose their institution, except in very specific cases like some US schools7. I would add the absence of testimonies and anecdotal evidence in rankers’ ads and communication: you will never see a video of Jim, Liu or Penelope explaining how they picked their wonderful university by examining the rankings. In fact, rankers probably don’t care so much about students; their target is elsewhere. Or, when they happen to make one such video, like THE below, the sound is so unprofessional that it is embarrassing and, of course, the students barely mention rankings as a factor in their decision.

If students are not the focus of the rankers’ attention, who are the rankings for? At least three types of users and uses can be listed:

  1. content that is easy to publish and comment on by the media
  2. direct objectives for the universities themselves and policymakers
  3. sources of revenue for the ranking organizations themselves

Without mentioning in detail sites specialising in this form, such as Buzzfeed, Topito or WatchMojo, the general media have, since the beginning of the 21st century, integrated the dissemination of rankings produced by third-party “rating agencies” on objects as diverse as holiday resorts, hospitals, public personalities or television series. So why not universities? Unlike complex information, the description and commentary of a ranking are easy to produce and can be appropriated by large audiences. Conversely, the lack of popularisation of the modular European U-Multirank shows that a ranking must remain simple and “objective” in order to be widely disseminated.

We have already mentioned the study of the effects of rankings on the ranked organizations themselves. In addition to the question of optimisation and fraud, changes have been observed in the recruitment and evaluation of academics, in the way universities present themselves, in the construction of coordinated regional, national and supra-national policies on the basis of indicators, etc.8. Whether these transformations are just window dressing or whether they have a profound impact on universities is a subject of debate in many articles. As far as publications are concerned, China has proved to be a particularly fertile ground for observing the effects of rankings9.

Finally, we should conclude on something that is both obvious and not often discussed. The first stakeholders interested in rankings are the rankers themselves. They not only organize conversations around their (free) productions and sell themselves as certified audit firms, but also pave the way for service markets. Indeed, rankers offer universities services such as detailed rankings, training and help to get better ranked, communication or recruitment services. Global ranking is less a market for students or universities than it is for service providers. You can mine for gold or you can sell pickaxes.

  1. as an example, see Billaut, Jean-Charles, Denis Bouyssou, and Philippe Vincke. “Should you believe in the Shanghai ranking? An MCDM view.” Scientometrics 84.1 (2010): 237-263. []
  2. A short version can be read here, Kehm, Barbara M. “Global University rankings–impacts and applications.” Gaming the Metrics (2020): 93. []
  3. Disclaimer: PSL is my “official affiliation” for publications []
  4. see this seminal article, Espeland, Wendy Nelson, and Michael Sauder. “Rankings and reactivity: How public measures recreate social worlds.” American journal of sociology 113.1 (2007): 1-40. []
  5. Bornmann, Lutz, and Johann Bauer. “Which of the world’s institutions employ the most highly cited researchers? An analysis of the data from highlycited. com.” Journal of the Association for Information Science and Technology 66.10 (2015): 2146-2148. []
  6. see Docampo, Domingo, Daniel Egret, and Lawrence Cram. “The effect of university mergers on the Shanghai ranking.” Scientometrics 104.1 (2015): 175-191. []
  7. see Hazelkorn, Ellen. “The impact of league tables and ranking systems on higher education decision making.” Higher education management and policy 19.2 (2007): 1-24. []
  8. See for example Stack, Michelle. Global university rankings and the mediatization of higher education. Springer, 2016. []
  9. Xu, Xin. “Performing under ‘the baton of administrative power’? Chinese academics’ responses to incentives for international publications.” Research Evaluation 29.1 (2020): 87-99. []

The perfect hacking of journal peer review or… The fastest way to become a Highly Cited Researcher

Since the beginning of the 21st century, the names of great fraudsters have spread beyond academic arenas, each one bringing their biography, their practices and the astonished tale of the discovery of their misdeeds. This celebrity treatment of fraudsters should neither hide the existence of famous cases in the past1, nor the multitude of ordinary misdemeanors and misconduct taking place daily in the academic world, which is hardly different from other professional circles in this respect. Nevertheless, they deserve a place in the Hall of Fame of academic fraudsters; so, before addressing the case of our new champion, Kuo-Chen Chou, let’s review a few exemplary figures of this Hall of Fame, in alphabetical order.

Yoshitaka Fujii (2012): enduring Japanese anesthesiologist who holds the world record for the number of retracted articles (183). He spent his career inventing data and, despite a statistical analysis published in 2000 showing how “too nice” his numbers were, he was not seriously challenged until 10 years later.2

Woo-Suk Hwang (2006): amazing Korean veterinarian and biologist, specialized in stem cells and producer of the first human clone, announced by a publication in Science. After being accused of forcing his technicians to donate their eggs for his research, investigations revealed the total absence of human cloning. A national glory in South Korea and an international star, his public downfall was so brutal that he made the cover of Time Magazine.

Jan-Hendrik Schön (2002): industrious German physicist, working at Bell Labs on the limits of matter and life, able to co-author in less than two years seven papers in Nature, eight in Science and six in Physical Review. All of course have since been retracted, and it seems that all his research, including his thesis, was based more on his desire to stick to the expectations of theory, or those of his colleagues, than on the empirical results he claimed to have achieved.3

Diederik Stapel (2011): extraordinary Dutch psychologist, whose social experiments always proved the hypotheses made… since they were never carried out, but fabricated on paper and computer. Denounced by a whistleblower from his team, and 58 retracted articles later, he was the object of a sensitive New York Times portrait. His own production became an object of the psychology of deception, as his colleagues found small differences in style between his genuine articles and the fake ones.

From transparent peer review
to citation manipulation

These eminent members of the Hall of Fame, all men – women are an extremely small minority among the elected members – have produced “false science” but have neither massively plagiarized nor attacked the peer review system. They rather provided, as good forgers, the expected raw material to journals. However, over the last 10 years, there has been concern about how reviewers or publishers can indirectly influence the science produced, in a more subtle way. “Coerced citations”, “fake peer review” and “cartel citations” are all designations of practices that do not directly fudge the content of articles, but act on the margins by hacking into the journal peer review process.

So, the old criticisms about the misdeeds of anonymity in journal peer review4 were revived with great fanfare, and many debates about “transparent peer review” took place. Where in the past editors and authors were at the helm of these discussions, now the publishers are in charge and, above all, Elsevier. The company provided two in-house researchers with access to its back office, and they were able to compare the bibliographies of manuscripts with those of the published articles and check whether added references were coauthored by reviewers of the manuscript. Unsurprisingly, the authors of When Peer Reviewers Go Rogue concluded that citation manipulation exists, even if its level is quite low (0.79%).

At the same time, a research team led by the famous John Ioannidis sought to build a “clean” base of citations for the most cited researchers. For them, this meant being able to separate self-citations from the rest, and it was on this occasion that they made a surprising discovery: the staggering level of self-citation of some colleagues. Indeed, as your citation count rises, you would expect citations to come increasingly from distant colleagues. Not for everybody:

Vaidyanathan, a computer scientist at the Vel Tech R&D Institute of Technology, a privately run institute, is an extreme example: he has received 94% of his citations from himself or his co-authors up to 2017 (…) He is not alone. The data set, which lists around 100,000 researchers, shows that at least 250 scientists have amassed more than 50% of their citations from themselves or their co-authors, while the median self-citation rate is 12.7% ((Nature, Hundreds of extreme self-citing scientists revealed in new database, 19th August 2019)).”
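
To make such figures concrete, here is a minimal sketch, on made-up data, of how a self-citation share of this kind can be computed once each citing paper is linked to its authors; analyses like the one quoted above obviously rely on much larger, author-disambiguated datasets.

```python
# Minimal sketch with made-up data: share of citations coming from the cited
# author or their co-authors ("self-citations" in the broad sense used above).
def self_citation_share(focal_author, coauthors, citing_papers):
    """citing_papers is a list of author sets, one set per citing paper."""
    inner_circle = {focal_author} | set(coauthors)
    self_cites = sum(1 for authors in citing_papers if authors & inner_circle)
    return self_cites / len(citing_papers)

# Toy example: 3 of the 4 citing papers involve the author or a co-author.
papers = [{"A", "B"}, {"A"}, {"C"}, {"B", "D"}]
print(self_citation_share("A", ["B"], papers))  # 0.75
```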

But then, what are the Highly Cited Researchers, whose numbers are one of the components of the Shanghai Ranking, actually doing? Are they renowned for their influential results or are they more adept at being manufacturers on the citation chain? Bibliometricians would say that a “high” self-citation rate is not necessarily a sign of fraud, but that a detailed inquiry would be needed. This is where we return to our newest member of the Hall of Fame, whose work is worthy of close consideration.

A perfect hacker,
always greedy for citations

It all starts with a mundane story: a reviewer asks for additional references in a manuscript. But the request itself is not so trivial: it consists of 35 references, the vast majority of which are co-signed by him, and he indicates that his recommendation to the editors, on whether or not to accept the manuscript, will depend heavily on their inclusion. It should also be specified that this request was made not for a single review, but for each of the manuscripts that passed through his hands. In describing their decision to ban this unnamed reviewer, the editors did not indicate how long this practice had existed. Indeed, it is unusual, to say the least, to request the addition of so many references, and one might question their own responsibility in this matter if it lasted as long as their reference to the “most recent reviews” seems to imply. To which they reply:

One might ask how this reviewer got away with submitting multiple reviews containing coercive requests for citation before being banned. The shortest explanation is that excessive self-citation demands are generally not seen as an ethical problem until a pattern is established, and a decentralized peer-review system is not amenable to detecting patterns” ((Wren, Jonathan D., Alfonso Valencia, and Janet Kelso. “Reviewer-coerced citation: case report, update on journal policy and suggestions for future prevention.” (2019): 3217-3218.)).

And in fact they inquired with other journals, which suggested the same pattern of behaviour from this reviewer. A year later, in early 2020, the investigation led to an editorial in another journal, the Journal of Theoretical Biology (JTB), which revealed perhaps the most complete case of citation manipulation to date. Indeed, the hacker is no longer a simple reviewer there, but a “handling editor” for JTB, which enabled him to act at several stages of the manuscript, with a single objective: to accumulate citations.

  1. He took charge of many manuscripts from his research centre to ensure that they were well treated (conflict of interest).
  2. He chose reviewers requested by the authors, or designated colleagues from his own centre (conflict of interest) or even reviewed them himself under a false name (ghost peer review).
  3. In many cases, with the return of the reviews, he would ask for the title of the article to be changed so that it explicitly refers to his own algorithm, as well as a discussion of his own work in the introduction and conclusion (coerced citations).
  4. As a result, he requested the addition of a very large number of references (up to more than 50) to the bibliography of the manuscript (coerced citations).
  5. Just before the acceptance of the manuscript, he was added as co-author of the article (gift authorship).

We therefore observe two complementary types of behaviour. On the one hand, hidden from the outside, he hacked the journal’s peer review workflow, capturing the evaluation process to ensure that the articles most “favourable” to his citation count were actually published – sometimes with his coauthorship. On the other hand, visible to the authors and perhaps to the editor-in-chief and publisher, he hacked the byline, content and references of the manuscripts by making imperative requests for inclusion. Thus, these ordinary manuscripts became articles loaded with citations of the hacker.

It can be noted that at this stage the name of the reviewer was not given by JTB, which caused some to make educated guesses on Twitter. News articles, in Nature among others, soon followed and revealed his identity: Kuo-Chen Chou, a retired Chinese-American biophysicist. We then learned that he had been for years a member of the Highly Cited Researchers “club”5. So, as usual, this extraordinary case will be treated as “rare”, and counter-measures have been taken, such as an algorithm written by one of the Bioinformatics editors. But the ordinary gaming will still happen, whether in so-called predatory journals or at “prestigious” publishers, with smarter colleagues less greedy for citations and not obsessed with the HCR club. Will you be one of them?6

  1. for example John Darsee, see Broad, William; Wade, Nicholas (1983), Betrayers of the Truth: Fraud and Deceit in the Halls of Science, London: Century Publishing, ISBN0-7126-0243-7 []
  2. For a quick view of this case, see Pontille, David, and Didier Torny. “Behind the scenes of scientific articles: defining categories of fraud and regulating cases.” (2012). []
  3. He was the subject of a wonderful book, Plastic Fantastic, ISBN 978-0-230-22467-4 []
  4. See David Pontille and Didier Torny, “The blind shall see! the question of anonymity in journal peer review.” Ada: A Journal of Gender, New Media, and Technology, No.4. doi:10.7264/N3542KVW (2014). []
  5. the Web of Science Group didn’t list him in 2019 as he had, like others, a high rate of self-citations but, as stated, “Although this list is updated and refreshed each year, a Highly Cited Researcher is always a Highly Cited Researcher—whether their name was included in 2013 or 2019.” []
  6. I am aware that this post contains two self-references but they won’t be counted in any database []

The short history of the h-index or… being one click away from determining whether you are a successful scientist

Sometimes, newness really happens in the academic world. Take bibliometric indicators: for at least a century they have usually been proposed and discussed by specialized scientists from a scientometrics background in their field journals (currently JASIST, Scientometrics, …) and nobody else cared, at least for some time. But in 2005, something very peculiar happened: the h-index was coined by a total stranger to that field and enjoyed instant success, which endures to this day. How did that happen and why such a success?1

Physicist J.E. Hirsch proposed the h-index in a working paper posted on August 3rd, 2005 on arXiv, the famous open archive developed by physicists in Los Alamos. In his manuscript, he discussed the issue of comprehensively evaluating a researcher and wished, in a radical way, to subsume their whole career into a simple and practical measurement: an integer. To do so, he considered that both production and its uses had to be taken into account through a citation measurement. So, the number h is the greatest number for which h articles by an author have at least h citations each. For example, for 5 articles cited at least 5 times, h is 5; likewise, for 50 articles cited at least 50 times each, h is 50.
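
For readers who prefer code to prose, here is a minimal sketch of that definition in Python; it is a direct translation of the rule above, not Hirsch’s own implementation.

```python
# Minimal sketch of the h-index definition: the largest h such that
# h papers have at least h citations each.
def h_index(citations):
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # 4: four papers have at least 4 citations each
print(h_index([50] * 50))         # 50, as in the second example above
```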

Yet, the use of algorithms concerning authors was not new. Eugene Garfield, the founder of the Institute for Scientific Information (ISI), claimed to regularly predict Nobel prizes using the Science Citation Index and had then developed the “ISI Highly Cited”, presenting results for a tiny fraction of “top” researchers; similarly, a few disciplines like economics and management had a long tradition of ranking “top authors”. But, to our knowledge, no algorithm had to that date been specifically designed to evaluate authors. This novelty also comes from the lack of consideration Hirsch displayed for the existing literature: there were only four references in his manuscript and only one from the field.

This departure from the scientometric tradition enabled Hirsch to make several shifts. Firstly, he did not take into account the journals in which articles are published, probably because in high energy physics, journals are used to archive knowledge more than to make discoveries public. Secondly, he excluded the pitfalls of the number and order of coauthors, which are of little relevance in physics but crucial in biomedical research for individual evaluation. Thirdly, his index combined two elements considered heterogeneous in the scientometric tradition: production on the one hand, and use on the other. Finally, whereas scientometricians are always very cautious about individual analysis and save it for “outliers”, Hirsch proposed a measurement which applies to all researchers and, as the cherry on the cake, argued that it would be of some use for the allocation of research funds.

How did he make such a bold move? Based on numbers crunched in the case of high-energy physicists, Hirsch forged a model of the “successful scientist”. As the result of his algorithm very much depends on the duration of a researcher’s career, Hirsch considered h divided by the number of years of the career to be a good indicator.


“An h index of 20 after 20 years of scientific activity, characterizes a successful scientist. […] an h index of 40 after 20 years of scientific activity, characterizes outstanding scientists, likely to be found only at the top universities or major research laboratories. […] an h index of 60 after 20 years, or 90 after 30 years, characterizes truly unique individuals”.


At that point, Hirsch would probably have been considered a bibliometrics crackpot, a talented physicist who happened to crunch citation numbers in his spare time and post his “personal views” on a website.

An instant success,
an impressive series of implementations

To the surprise (and probably horror) of the scientometrics community, the h-index was taken up at a staggering rate. As soon as his manuscript became available on arXiv, Hirsch received extensive feedback from his physicist colleagues and, in view of the shared enthusiasm, an open archive specialized in high energy physics, SPIRES (Stanford Physics Information REtrieval System), implemented the algorithm on its dataset only two weeks later. The same day, August 17th, 2005, Nature presented Hirsch’s proposition and highlighted his colleagues’ enthusiasm, while an editorial entitled “Rating Games” discussed the respective roles of metrics and peer review. Shortly afterwards, in November 2005, two of Hirsch’s colleagues published the manuscript as an article2 in the Proceedings of the National Academy of Sciences, thus confirming physicists’ keen interest in this new measurement.

The popularization of the h-index took a new turn a year later with a bibliometric tool developed by Anne-Wil Harzing, Publish or Perish (PoP). In October 2006, this management professor at the University of Melbourne put online a small piece of software operationalizing the h-index calculation. That way, for any author name entered by the user, irrespective of the discipline, PoP calculates their h-index in a single click, based on the nascent Google Scholar dataset. This tool could be downloaded for free and thus allowed the magic algorithm to reach users far beyond audiences specialized in scientometrics. Researchers and institutions adopted this bibliometric tool so fast that the British Medical Journal published a spoof article describing the different pathologies it generates. Meanwhile, in May 2007, Elsevier included the h-index in Scopus; Thomson Reuters likewise changed its “ISI Highly Cited” and integrated the index into the WoS in 2008. Thus, in just three years, individual measurement became a practical operation drawing on bibliometric tools easily accessible to every researcher.

This cycle of implementation was provisionally completed by the opening of Google Scholar Citations (GSC) in the summer of 2011. With this new service, every academic could create and control her/his profile page on Google Scholar, and could decide whether or not to make it public. Whatever the choice, GSC would then automatically compute three metrics: the widely used h-index; the i10-index, which is the number of articles with at least ten citations; and the total number of citations to your articles. At this point, the definition of the h-index no longer needed to be spelled out, and thousands of academics quickly made their profiles available online. As Paul Wouters and Rodrigo Costas soon noted in their 2012 manuscript, this was a typical example of what they named “technologies of narcissism”: a mirror through which you and others would look in order to measure your influence, evaluate your importance, and worry about your deficiencies.

From journals to articles to authors:
new policies for evaluation

Then, despite professional scientometricians harshly criticizing the h-index for its crudeness, pointing out its variations and limits, or taming it into a g-index or a v-index, the utopian/dystopian vision of Hirsch became reality. Conversely, it is its very crudeness that made it easier to implement in databases and more readable for lay researchers. But its availability is not sufficient to explain the duration and intensity of its uses – the original paper will probably pass the 10,000 citation mark in 2020. There are two main and very different reasons for its popularity, which we must analytically distinguish. The first one is the application of the algorithm to new objects, such as research groups or even journals. Confronted with its unexpected success and eager to use the newly available data, scientometricians got into an h-index frenzy3. While the dominant popular bibliometric index, both within the community and outside of it, had for 30 years been the Journal Impact Factor (JIF), then produced and owned by Thomson ISI, the growing success of the h-index made it a challenger in the bibliometric index “market”. Many papers compared the pros and cons of each algorithm, based on different datasets.

Nevertheless, this “good index” competition shouldn’t hide a second reason for which the h-index became so popular and discussed. Its use was sustained by a political agenda in assessment and evaluation, best represented by the San Francisco Declaration on Research Assessment (DORA) published in 2013. This complex text is often subsumed as an anti-bibliometrics statement, in which signing institutions promise they won’t use the JIF as a way to evaluate research, whether for hiring, promotion, grants, etc. Beyond this simple vision, there are more nuanced recommendations that oppose journal-based metrics, but don’t reject bibliometrics as a whole. The JIF is seen as a bad way to perform quantified assessment, whereas article-level metrics, whatever they are (citations, downloads, views, social media mentions…), are seen as more representative of the “impact” of a given piece of research.

The problem with these new metrics is that nobody really knows what they are used for and what they really measure4. Consequently, the most popular article-level metric remains the number of citations in a given database (Web of Science, Scopus, Crossref, Google Scholar). Rather than summing these numbers for a given journal, the aggregation is made on a given author. It is so simple to perform on the same databases that what was absurd a few years ago has become natural. The “h revolution” therefore went beyond Hirsch’s own vision on two points. Firstly, while his h-index was meant for senior scientists, it is now also being applied to/by early and mid-career researchers as a more “ethical” way to judge their impact. Secondly, its extension goes hand in hand with a potential transformation of the model of scientific communication, towards a post-journal world in which any kind of text could be cited and counted. Rather than highlighting the fact that you actually passed the test of supposedly prestigious journals, you now just give your h number and academic age, so everybody can check whether you really are the successful scientist you claim to be. ((Please cite selected papers of the author of this post so he may finally become one))

  1. This post is partially adapted from Pontille David, Torny Didier, « La manufacture de l’évaluation scientifique. Algorithmes, jeux de données et outils bibliométriques », Réseaux, 2013/1 (n° 177), p. 23-61. DOI : 10.3917/res.177.0023 []
  2. There are almost no differences between the ArXiv manuscript from mid-August (V3) and the published PNAS paper []
  3. See, for an early literature review, Bornmann, Lutz, and Hans‐Dieter Daniel. “The state of h index research.” EMBO reports 10.1 (2009): 2-6. []
  4. See Haustein, Stefanie, Timothy D. Bowman, and Rodrigo Costas. “Interpreting ‘altmetrics’: Viewing acts on social media through the lens of citation and social theories.” On arXiv []

The Political Economy of Academic Publications

This blog is part of a vast research program on the political economy of scientific publication, which has been strongly transformed over the last twenty years by the electronic dissemination of journals. It considers publishers, editorial committees and journals as socio-political actors to be studied in three complementary aspects detailed below.

Firstly, they are analysed as economic actors defining publishing markets. The conditions under which these markets were created have been the subject of much criticism, and strong transnational mobilisations around open access have been deployed, which has influenced the construction of public policies that contrast internationally. New economic models have emerged, of which direct payment by authors (APC) is only the most visible, but not the most frequent. The multiplication of coloured labels (Green, Gold, Platinum, Bronze, Diamond) to designate these models does not fully account for their subtle differences, nor for the sustainability of the associated business models, compared to the classic subscription model, which has led to a “serials crisis” over the last 20 years, with a massive increase in the cost of access to publications for libraries.


Secondly, journals and publishers are studied as places of production, including innovations in evaluation technologies (open peer review, technical-soundness-based review…). In particular, the growing debate on post-publication peer review policies, including the withdrawal of articles, will be examined, as well as the emergence of platforms for the public discussion of their validity, such as PubPeer. The question of the centrality of journals for peer review, or of their marginalization (overlay journals, recommendations…), will also be addressed.


Thirdly, journals are treated as places of valorisation, seeking to attract authors and promote their position through the use of different measures (citation, referencing, uses…), which they highlight or criticise. In addition to the recurring debates on the Journal Impact Factor, a measure that is currently much decried, there will be discussions on alternative metrics, or even on responsible metrics, which are supposed to better represent academic production and its uses.


These three aspects aim in particular at shedding light on new forms of self-regulation by academic actors (systematic publicising of article withdrawals, generalisation of post-publication peer review, stigmatisation of predatory publishers, uses of Creative Commons licenses…), on the innovative and argumentative work of publishers and platforms, whether public, para-public or private, and on the redefinition of public policies in the field of academic publication.