
Dealing with the grey zone of publishing or… how I will never be an editorial board member of MDPI Publications.

The “predatory publisher” category raises more questions than answers. Just like “academic fraud”, it tends to validate a black-and-white world in which rules and norms are clear-cut and universally shared across time, disciplines and countries. There is now an extensive literature presenting lists, criteria and even automatic detection tools for such publishers or their journals, most of it written without questioning the label “predatory”1. More interestingly, a few papers describe the point of view of authors publishing in such vilified outlets, showing both the deceptions performed by the publishers and the good faith of most authors2.

Such results could be downplayed on the grounds of the authors’ peripheral position or the low statistical power of the studies. On the contrary, everyday stories show that separating the wheat from the chaff is rather complex, because a huge and diverse “grey zone” exists, even for scholars well versed in the arcana of publishing. This post aims to describe such an example by making the story personal rather than abstract, using testimonies, personal opinions and statements. In this world, the choice to review, write or edit for a given journal or publisher remains tricky, based on existing alternatives, personal ethics and situated decisions.

It has become an almost daily ritual: an invitation to present at a conference, to submit a manuscript to a journal, or even to join an editorial committee, sent by people you don’t know, with a vaguely personalized message built from your name and the title of an article you’ve written, with zero relevance to the issuing conference or journal. On August 9th, 2023, I received one of these messages, entitled: “[Publications] (ISSN 2304-6775) Invitation to Serve as an Editorial Board Member”. It caught my attention for three reasons: firstly, it arrived at the time of the year with the lowest e-mail volume; secondly, an apparently genuine MDPI account was in copy of the mailing; and thirdly, I know this journal: I have found some of its articles relevant or even enlightening, and others cite some of my work (which is not a sign of quality but could explain this invitation). So let’s read it again:

We would like to invite you to join the Editorial Board of Publications (ISSN 2304-6775, https://www.mdpi.com/journal/publications). Publications is open access and peer-reviewed, covering all aspects of scholarly publishing and communication. You can find the proposed scope of the journal here: https://www.mdpi.com/journal/publications/about/.
Publications is abstracted and indexed by Scopus, ESCI, DOAJ.

This presentation is not typical of “predatory” emails, which begin by flattering the recipient of the message, emphasising the importance of their work and what they can contribute to the journal, while at the same time inviting them to join the editorial board and submit a manuscript.

Editorial Board Members will be responsible for final decisions on manuscripts in their field of expertise and may be invited to review manuscripts. The initial term lasts for 2 years, and entails:
• Pre-screening and making decisions on new submissions related to your research interests;
• Providing input or feedback regarding journal policies;
• Helping to promote the journal among your peers or at conferences;
• Attending Board Meetings to suggest journal development strategies;
• Reviewing manuscripts;
• Helping to attract suitable expert authors.

The job description for editorial board members is typical of a journal owned by a commercial publisher: in a nutshell, you run the journal, you promote it without being the decision-maker on its policies and, obviously, without compensation, except in what many colleagues call a prestige economy.3

If you accept our invitation, please provide your contact information and a list of keywords reflecting your expertise, in accordance with the entry examples at https://www.mdpi.com/journal/publications/editors/. If possible, please also send, for our records, a CV or an official website with your biographical data (including a list of your publications). If this is of interest, but comes at an inopportune time, you may have a recommendation for a senior expert to serve as an Editorial Board member. If you have any questions, suggestions or recommendations, please let us know.

This detailed procedure is another hint pointing toward a genuine invitation to the committee of a standard commercial publisher journal, which is rarely considered a “grey zone” output. Nevertheless, there is another surprising piece of information about the ‘benefits’ attached to the position of editorial board member. I don’t know whether it is standard practice in APC-based journals5. Yet this three-line paragraph looks extremely problematic to me, for three cumulative reasons. Once again, others could consider it completely benign, given shared ethics and practices, which would invalidate everything that follows.

Additionally, you are welcome to publish with the journal—this will be free of charge once accepted for publication. The term for the Editorial Board membership lasts for two years and can be renewed.

This paragraph alone could have made me cringe and refuse the invitation. But it also comes from a publisher which is piling up 50 shades of grey stories.

At the end of the last century, a Chinese chemist working in Switzerland invented a preservation practice: depositing a sample of the compounds associated with an article. To do so, he founded the Molecular Diversity Preservation International (MDPI). He then created a journal, Molecules, published by Springer, took the position of editor-in-chief and explained his aim:

Soon, the relationship with the publisher turned sour, and the chemist, Shu-Kun Lin, decided to add to MDPI operations the publication of a journal entitled… Molecules. Springer threatened to sue him, claiming it owned the title, but finally did not. The lone rebel quickly developed a successful business formula, sometimes labelled “low-cost full open access journals”. It was a striking example of the “new journals” that open access activists pushed for in the BOAI. Later, MDPI was rebranded the Multidisciplinary Digital Publishing Institute and became one of the top 5 academic publishers in terms of volume, with most journal titles being as generic as possible, from Acoustics to Youth, by way of Genes, Physics, Societies and Software. So the initial declaration of independence made by one academic grew into a worldwide success story, but it came along with a dubious reputation and negative narratives.

Let’s give a few examples of these stories. A neurosurgeon received a review request from the Journal of Clinical Medicine and responded with a very negative assessment within 2 days. In his view, there were major methodological problems preventing publication, in particular discrepancies between the protocol described and the reality of the clinical trial. Two days after sending his review, he received a request to review the revised article. This new manuscript was very different from the first:

Despite these problems, two other reviewers accepted the manuscript, while another colleague agreed it should be rejected, ‘forcing’ the editor-in-chief to refuse publication. But the story doesn’t end there, because the same manuscript reappeared in another MDPI journal, Geriatrics… in a version very similar to the first manuscript submitted. The reviewer contacted the editor-in-chief, shared his experience, and the manuscript was ‘withdrawn’ in agreement with the authors. But like a B-movie with undead enemies who never disappear, the manuscript was eventually published in a third MDPI journal, Medicina. This story is typical of the grey zone: authors willing to do anything to get published, complacent reviewers, a system of manuscript transfers to ‘optimise’ publication, but also honest reviewers and editors willing to listen even when they want to reject.

Let’s turn now to the employees of MDPI, those who keep the publication supply chain running. In Bucharest, a young employee died of a heart attack on MDPI’s premises, and the local media questioned the employer’s responsibility in dealing with the health emergency and, more broadly, its working conditions. As in other capital-intensive production facilities, journal editors are partly judged and rewarded on the basis of quantitative indicators. Not only does the APC-based model generate income only if the manuscript is accepted, but the workers are directly incentivised by this ‘success’:

This production culture, which combines speed and pressure, is felt by both editors and reviewers. It is always in this form of experience and testimony that colleagues react, in a ‘me too’ frame. This often leads them to break off all relations with the publisher, its journals and its editors, despite frequent reminders by e-mail:

Source: responses to the neurosurgeon reviewer story on Bluesky, https://bsky.app/profile/supersciencegrl.co.uk/post/3l2kapfizr626

These experiences happen within the frame of “normal” journals, but there is an elephant in the room: MDPI invented the concept of multiple special issues running at the same time in any given journal. As a social scientist, I normally enjoy special issues on my topics of interest and cherish a well-crafted collection of papers which, ideally, speak to one another. For example, “Women in Chemistry Science” seems like a very interesting topic for the already cited Molecules. But MDPI twisted this nice idea, making the traditional “special issue” another grey object.

I said at the beginning of this post that I would give priority to a situated view, to experience rather than objective data. But for those of you who don’t know them, here are the figures for MDPI on the importance of special issues:

Source: https://paolocrosetto.wordpress.com/wp-content/uploads/2021/04/overall_articles_si_waffle.png

This figure is extracted from a very informative blog post by Paolo Crosetto, written almost 4 years ago and entitled “Is MDPI a predatory publisher?”, in which the author answers “yes and no”.

That very fine post was heavily commented on, regarding both its data and their interpretation, showing very contrasting views on the nature of MDPI publishing. Furthermore, by the end of 2021, the number of special issues had exploded, with some journals publishing more than one a day. On the producer side, we now know that the special issue system has a similar “point system” that pushes MDPI employees to publish more. But how do we explain this success on the authors’ side? In my opinion, it is a perfect combination of two things that at first sight might not seem to go together: the close relationships of small groups of colleagues sharing particular interests, and a large-scale industrial production model. Again, let’s be concrete and talk about my own experience. After not responding to the invitation to join the editorial board, I received an invitation 14 months later to manage a special issue:

Dear Dr. Torny,

This is Mike Tang, Section Managing Editor of Publications.
We are planning to launch a new Special Issue, "Preprint and Open-Access Publishing", in the journal Publications, and we would like to invite you to act as the Guest Editor for this issue.

Wow! Preprint and OA publishing, exactly the topics I am interested in. I would easily find 10 colleagues in different countries who would provide very interesting and innovative contributions from diverse perspectives. Sure, I would also suggest expert reviewers, find 10 APCs in grants, manage the whole process and be proud of the results. All for free: I forgot to tell you, there is no financial incentive for getting the job done, no APC waiver, except fulfilling your dreams as a knowledge creator and disseminator. And all the benefits of such a special issue, as well as your duties, are described in depth by the publisher. Despite being emailed the exact same message three more times, I did not answer.

After this narration, I could state, along with other colleagues, that some elements of MDPI’s business model and practices are unpleasant, that they incentivize problematic publications, or that its special issue model is deeply fraught at scale. That is a personal opinion based on empirical material; it is not enough to stand in an academic article. Especially as the “predatory” label has led to strong responses from publishers, as has been seen again in the MDPI case. In July 2021, an article was published on the definition of predatory publishing, taking the empirical example of MDPI journals. Its conclusion could be summed up like this: “The formal criteria together with the analysis of the citation patterns of the 53 journals under analysis all singled them out as predatory journals.”

Its fate illustrates the consequences of a black-and-white representation and the difficulty of qualifying the ‘grey zone’ in academic discourse. In fact, less than a month later, MDPI responded point by point to this article on its website11. Above and beyond the refutation of some empirical data, MDPI pointed out the problems of defining the predatory category, thus invalidating the overall conclusion of the author. For example, they discuss the number of editorial board members:

MDPI also insists on several occasions that it is not unique among publishers, or at least among commercial publishers. It even acknowledges the possibility of predatory publishing… naturally at other publishing houses!

So the author published a paper claiming a lot of MDPI journals were predatory, and the publisher responded in a very civil academic way. What happened next? One month later, Research Evaluation/OUP published an expression of concern without detail, then retracted the article and simultaneously published a new version, rewritten by the publisher and the author. Beyond the fact that such a process is rare – you would expect a simple correction – did MDPI play a role in the process? Was there pressure? How did the author feel? As often, we have to turn to Retraction Watch for all the details, with interviews of the author and MDPI12. We will focus here on MDPI’s position:

So not only does MDPI claim to have been left in the dark, it also, ironically, considers that the largest university press (and one of the oldest) does not follow best practices. Their call was heard, as a PDF was published a month later as supplementary data, depicting all the changes between the original and the revised version. Let’s simply take the following two examples:

In both cases, the author has kept her general argumentation, but she has abandoned her objective language in favor of hedged formulations that leave room for subjectivity: we may think that these journals are predatory, but we no longer assert, beyond any possible discussion, that they are classified as such. In other words, we’re back in the grey zone, where it’s up to everyone (authors, reviewers, publishers, institutions) to decide what they want to do.

And this is where we go back to our starting point. Why did I receive that invitation to join the Publications editorial board in the first place? Because the vast majority of the editorial board of Publications had resigned a few months earlier, relatively discreetly. The most immediately visible trace came from its former editor-in-chief, Gemma E. Derrick, who started the exit movement.

Source: https://x.com/GemmaDerrick/status/1636719479441727488

Her justification is typically framed as a ‘grey area’ one: it was not possible to reconcile good editorial practices within her specific scientific collective with the multiplying stories about the problematic practices of MDPI, the owner and publisher of the title. The subsequent resignations were reported by a Norwegian newspaper13. In addition to the reasons given above, this particular journal deals with scholarly publishing itself, and so needs to be held even more closely to current publishing standards. I was already aware of the resignations when I received the first MDPI email, and that was an additional reason not to accept the invitation: accepting would have belittled the move made by colleagues whom I hold in high esteem, and would have failed to show solidarity with what they expressed.

Unlike other cases of mass resignation from journals, Publications editorial board members did not directly criticize MDPI’s treatment of their own journal: they targeted the publisher’s general policies instead. Their resignation reminds us that we can decide, in each of our micro-acts, in which scholarly communication world we want to live. Let’s take a final MDPI example: one journal had really become its flagship, the International Journal of Environmental Research and Public Health (IJERPH). In fact, its growth seemed limitless, notably through the special issue format, reaching the incredible number of 16,889 articles published in 2022. And then, in April 2023, Clarivate announced the delisting of more than 50 journals, notably many Hindawi journals, but also MDPI’s IJERPH. What were the consequences? MDPI’s PR stated that 90% of its content was still listed by Web of Science. But authors fled in droves as soon as the announcement was made, even before the journal effectively lost its Journal Impact Factor.


Source: https://scholarlykitchen.sspnet.org/wp-content/uploads/2023/09/Figure-1-1024×530.png

This trend was confirmed in 2024, with the total number of IJERPH articles amounting to less than 10% of its 2022 peak, raising the question of authors’ responsibility. All it took was for Clarivate’s ‘quality signal’ to disappear for them to turn their backs on a journal in which they used to publish en masse. This is the magic of the grey zone, where one can venture without any real consequences. Playing with the rules, bending practices, encouraging mass production – these are not specific to MDPI but are, to varying degrees, shared by all commercial publishers, with authors, and most often editorial teams, complying. And it is probable that their megajournals, like Elsevier’s Heliyon and Springer’s Cureus, will meet a similar fate.

  1. On the geopolitical consequences of that position, see Taşkın, Zehra, Franciszek Krawczyk, and Emanuel Kulczycki. “Are papers published in predatory journals worthless? A geopolitical dimension revealed by content-based analysis of citations.” Quantitative Science Studies 4.1 (2023): 44-67, https://doi.org/10.1162/qss_a_00242 []
  2. Boukacem-Zeghmouri, Chérifa, Lucas Pergola, and Hugo Castaneda. “Profiles, motives and experiences of authors publishing in predatory journals: OMICS as a case study.” (2023) []
  3. For example, Tennant, Jonathan P., et al. “Ten hot topics around scholarly publishing.” Publications 7.2 (2019): 34, https://doi.org/10.3390/publications7020034 []
  4. This is a theoretical division; the actual tasks performed are another story. See, on the Diamond journals case, Dufour, Quentin, David Pontille, and Didier Torny. “Supporting Diamond Open Access journals.” Nordic Journal of Library and Information Studies 4.2 (2023): 35-55, https://doi.org/10.7146/njlis.v4i2.140344 []
  5. To my knowledge there is no data on these policies towards editors, except on the question of paid editorial board members, especially in biomedicine []
  6. Teixeira da Silva, Jaime A. “The Conceptual ‘APC Ring’: Is There a Risk of APC-Driven Guest Authorship, and Is a Change in the Culture of the APC Needed?” Journal of Scholarly Publishing 55.3 (2024): 404-425, https://doi.org/10.3138/jsp-2023-0060 []
  7. Lin, S.K. “Editorial: A Good Yield and a High Standard.” Molecules 1 (1996): 1–2, https://doi.org/10.1007/s00783005000 []
  8. Rene Aquarius, “My reviewer experience at MDPI”, August 2024, https://deevybee.blogspot.com/2024/08/guest-post-my-experience-as-reviewer.html []
  9. Young employee’s death puts workplace culture in spotlight at publisher MDPI, Retraction Watch, 22nd October 2024 []
  10. Paolo Crosetto, “Is MDPI a predatory publisher?”, 12 April 2021 []
  11. MDPI: Comment on: ‘Journal citation reports and the definition of a predatory journal: The case of the Multidisciplinary Digital Publishing Institute (MDPI)’ from Oviedo-García, []
  12. Article that assessed MDPI journals as “predatory” retracted and replaced, Retraction Watch, 8 May 2023. Once again, it became a much-commented post []
  13. https://www.khrono.no/truer-med-a-flykte-fra-tidsskrift-etter-at-redaktoren-ble-kastet/794389 []

The fair price of an open access article or… how Nature relaunched a long-lasting conversation

If you ask the open access community what happened in October 2003, chances are they will cite the Berlin Declaration as an important moment of consolidation of international mobilisation. At the same time, however, there was a large-scale attempt to charge authors for the publication of their open access research. Indeed, this was the time when the publisher BioMed Central (BMC) announced the switch of all its journals to a then little-known financial model: Article Processing Charges (APC). Let’s take the example of two journals relaying this announcement as they discuss the price of the service:

Although some authors may consider US$525 expensive, it must be remembered that The Journal of Translational Medicine does not levy additional page or colour charges on top of this fee, which can easily exceed US$525. With the article being online only, any number of colour figures and photographs can be included, at no extra cost.

There is no remuneration of any kind provided to the Editors-in-Chief, to any members of the editorial board, or to peer reviewers; all of whose work is entirely voluntary. Although some authors may consider US$525 expensive, it must be remembered that Journal of Neuroinflammation does not levy any additional page or color charges on top of this fee. Because we are an online-only journal, any number of color figures, photographs, and ‘extra’ pages can be included at no extra cost. Such color and page charges, as assessed by more traditional journals, can easily exceed our flat US$525 per-article APC. Another common expense with traditional journals is the purchase of reprints for distribution, and the cost of these reprints is also frequently greater than our APCs. The Journal of Neuroinflammation provides free, publication-quality pdf files for distribution, in lieu of reprints.

Three elements emerge from these excerpts: firstly, their similarity indicates a copying of language provided by BMC to justify this change in business model, financing having previously had to rely on any source except the authors, in particular a support programme for research institutions; secondly, the price is related to the costs of making content and formats available free of charge to readers; and thirdly, the novelty of payment by authors is minimised in favour of an interpretation of continuity between page charges and article processing charges. Indeed, at least since the 1930s, in some disciplines, the authors’ contribution to publication costs – and not only to the cost of reprinting copies for personal circulation – has been documented. And a vast majority of science journals were still asking for such charges at the beginning of the 2010s1.

This continuity is debatable, but the APC system put in place by BMC, like the one adopted at the official launch of PLOS Biology around the same time, is a partial legacy of these print-era practices. As in the past, only accepted items are invoiced, at a single “catalogue” price for defined services.

TO BE CONTINUED

  1. Curb, Lisa A., and Charles I. Abramson. “An examination of author-paid charges in science journals.” Comprehensive Psychology 1 (2012): 01-17. []

Readers in the Making of Scholarly Knowledge or… how article (e)valuation has become more democratic

The wonderful book entitled Reassembling Scholarly Communications. Histories, Infrastructures, and Global Politics of Open Access, edited by Martin Paul Eve and Jonathan Gray at MIT Press, is finally out!

So, to push you to read the work of its diverse and enlightening contributors, I have remixed and shortened our chapter about readers and their empowerment in contemporary peer review. In the full version, we underlined one of the most decisive effects of open access: the accelerating rise to power of ordinary readers1.

Pre-Publication Peer Review as Reading

Throughout the history of peer review, the three judging instances that gradually emerged (editors-in-chief, editorial committees, outside reviewers) were the first readers of submitted manuscripts. This may seem trivial, but the essential activity of evaluating an article – unlike other types of academic evaluation – is indeed the handling of a text. Admittedly, article peer review can be considered to include many other things, such as checking that ethical rules are being followed or that data is actually being made available, but the question of taking into account the content of the article – whether in the form of a paper file or a computer file – has always been essential. The acts of reading are far from simple, whether you consider “geographies of reading”2 (with whom, where, in what setting), what attracts the attention of readers, how texts are annotated, how journals inform those practices, and what the purposes of such acts are.

Their respective importance and the way in which their readings are coordinated may be subject to local conventions at a journal, disciplinary, or historical level. They are also marked by profound divergences due to distinct issues in manuscript evaluation. The space of possibilities within which these readings are conducted is a subject for public debate that leads to the invention of labels and the stabilization of categories, and to the elaboration of procedural and moral norms. For example, on the respective anonymity of authors and referees, four labels have been coined since the 1980s:

Authors \ Reviewers    Anonymized      Identified
Anonymized             Double blind    Blind review
Identified             Single blind    Open review

Source: David Pontille and Didier Torny, “The Blind Shall See! The Question of Anonymity in Journal Peer Review,” Ada 4 (2014), https://doi.org/10.7264/N3542KVW.

These spaces of possibility currently coexist in each discipline, being attached to different scientific and moral values, pertaining to the responsibility of reviewers, objectivity of judgements, transparency of process, and equity toward authors. The different possibilities here show that Merton’s “organized skepticism” and the agonistic nature of the production of scientific facts described by Latour and Woolgar long ago are, indeed, not self-evident. The contemporary moment is characterized by reflexive readings of peer-review technologies: manuscript evaluation has itself become an object of systematic scientific investigation. Authors, manuscripts, reviewers, journals, and readers have been scrupulously examined for their qualities and competencies, as well as for their “biases,” faults, or even unacceptable behavior. The diverse arrangements of manuscript evaluation are thus themselves systematically subjected to evaluation procedures.

Post-Publication Peer Review
as Ordinary Readers’ Empowerment?

Peer review in the twenty-first century can also be distinguished by a growing trend: the empowerment of “ordinary” readers as new key judging instances. If editors and reviewers produce judgments, it is through a reading within a very specific framework, as it is confined to restricted interaction, essentially via written correspondence, which aims at authorizing the dissemination of manuscripts-become-articles. Other forms of reading accompany publications and participate in their evaluation, independently of their initial validation.

Citing articles: with the popularization of bibliometric tools, citation counting has become a central element of journal and article evaluation. But it also needed a transformation of formats, an identification of references and a fundamental inversion: the act of referencing relates to a given author, whereas a citation is a new and perhaps calculable property of the source text, creating what Wouters called a “citation culture”. Then, highly disparate forms of intertextuality are rendered commensurable: the measured or radical criticism of a thought or result, integration within a scientific tradition, reliance on a standardized method described elsewhere, existence of data for a literature review or meta-study, simple recopying of sources, or self-promotion. Citation thus points towards two complementary horizons of reading: science as a system for accumulating knowledge via a referencing operation, and research as a necessary discussion of this same knowledge through criticism and commentary.

Commenting texts: in a view of publication as explicitly dialogical or polyphonic, readers can become commenters. Traditionally, before an article was published, comments were mainly directed toward the editor-in-chief or the editorial committee. Through open review, commenters enter into a dialogue with the authors and thus open up a space for direct confrontation. Prior to the emergence of electronic spaces for discussion, there existed objects like “special issues” or “reports”, in which a series of articles are brought together around a given theme to feed off one another after a short presentation. Post-publication commenting was also common through two elementary forms: referring to the original article or sending a letter to the editor. The electronic space led to many experiments in post-publication commenting: most of them met with no success (PLOS, Nature, PubMed Central…), until the unexpected success of anonymized comments on PubPeer.

Sharing papers: until recently, readers other than citers and commenters remained very much in the shadows. Yet library users, students in classes, and colleagues in seminars, to take just a few examples, also ascribe value to articles, for example through annotation. The existence of articles in electronic form has made their readers more visible. Persons who access an “HTML” page or who download a “PDF” file are now taken into account, whereas in the past it was only the distribution of journals and texts, mostly through libraries, that allowed one to assess potential readership. By inventorying and aggregating the audience in this way, it is possible to assign readers the capacity to evaluate articles. The creation of online academic social networks (e.g., ResearchGate, Academia.edu) has trivialized this figure of the public, not only by counting “academic users,” but also by naming them and offering contact. At the same time, online bibliographic tools (e.g., CiteULike, Mendeley, Zotero) objectify the readers and taggers who introduce references and attached documents into their bibliographic databases. Without being citers themselves, these readers select publications by sharing lists of references, the pertinence of which is notified by the use of “tags.” These reader-taggers are also embedded in the use of hyperlinks within “generalist” social networks (e.g., Facebook, Twitter), alerting others to interesting articles or briefly commenting on their content, feeding the whole “article-level metrics” movement. Here the readers, tracked by number and diversity, revalidate articles in the place of the judging instances historically qualified to do so.

Examining documents: this movement is even more significant in that these tools are applied not only to published articles but also to documents which have not been validated, through the growth of preprint servers. This flow of electronic manuscripts feeds the enthusiasm of the most visionary who, since the 1990s, have been announcing the end of journals. On the contrary, we have observed new technologies built on these archives, such as “overlay journals,” in which available manuscripts are later validated by reading peers in various ways. With a view to dissemination, advocates of readers as a judging instance tend to downplay the importance of prior validation. While the validation process sorts manuscripts in a binary fashion (accepted or rejected), such advocates contend that varied forms of dissemination instead encourage permanent discussion and argument along a text’s entire trajectory. In this perspective, articles remain “alive” after publication and are therefore always subject not only to various reader appropriations, but also to public evaluations, which can reverse their initial validation through the flagging of articles in official journal policies.

The Academic Closet
vs. The Readers’ Bazaar

Driven by a constant process of specialization, the extension of judging instances to readers may appear as a reallocation of expertise, empowering a growing number of people in the name of distributed knowledge. In an ongoing context of revelations of massive scientific fraud, which often implicates editorial processes and journals themselves, the dereliction of judging instances prior to publication has transformed the mass of readers into a vital resource for unearthing error and fraud. As in other domains where public expertise used to be exclusively held by a few professionals, crowdsourcing has become a collective gatekeeper for science publishing. Thus peerdom shall be reshaped, as lay readers now have full access to a large part of the scientific literature and have become valued audiences as quantified end-users of published articles.

If open science has become a motto, it encompasses two different visions of journal peer review. The first one, which includes open identities, takes place within the academic closet, where the dissemination of manuscripts is made possible by small discourse collectives which shape consensual facts. This vision is supported by the validation processes designed by Robert Boyle during the emergence of modern scientific practices. By contrast, in a Hobbesian fashion, the second one urges openness in multiple ways, building an academic democracy where each reading may literally be counted. The disentanglement of peer evaluation goes through the ability given to readers to comment on published articles, to produce social media metrics through the sharing of documents, and to observe the whole evaluation process of each manuscript. In this vision, scholarly communication relies not only on crowdsourced peer review but on a plurality of instances that generate a continuous process of judgment. The first vision has been at the heart of the scientific article as a genre, and a key component of the scientific journal as the most important channel for scholarly communication. Whether journals remain central in the second world has yet to be determined.

  1. “We” means David Pontille and myself. You can read the full chapter here. Of course, as readers, you are welcome to cite, comment, share & examine this chapter []
  2. Livingstone, David N. “Science, text and space: thoughts on the geography of reading.” Transactions of the institute of British geographers 30.4 (2005): 391-401. []

The institutionalization of retraction… or how to reconsider the status of truth of published papers

If you wish to buy this mug (no conflict of interest, I get no money if you click).

Retraction Watch has celebrated its 10th anniversary, and its creators have grown from a small blog into a reputable entity, funded by numerous donors, a source of academic publications, run by the Center for Scientific Integrity and manager of a database acknowledged for its quality. With the COVID-19 epidemic, the retraction of scientific articles (and even preprints) has become a mainstream media object, fully public beyond the academic communities directly concerned.

The institutionalization of the website mirrors that of retractions themselves, which have become partly normalized into the publishing process as a key part of post-publication peer review. In this post, written for the Peer Review 2020 week, whose theme is “Trust in peer review”, we will briefly look at journal policies and how they change the actual trust given to published articles1.

Flagging published articles.
Don’t trust what you read

“Certified”, “peer-validated”, “peer-reviewed”, all these notions are aimed at different practices but with the same objective: to assert that the text you are reading is not the simple product of authors’ reflections and their exploration of a phenomenon, of theories and observations, but that of a more or less complex process of evaluation of a manuscript by others, not recognised as co-authors but sufficiently knowledgeable about the subject, the methods, the literature, that they indicate to you that this content is valid.

Then, of course, scandals and other fraud cases multiplied, science stars falling one after another, but you could always believe that these were exceptions, special cases, and that almost all articles contained true and proven statements… at least until 2009. That year, the COPE organisation published its first standards2 on retracted articles, showing that it was not only normal but expected that journals would plan to remove from the scientific canon articles they had previously published. To be more precise, it was a matter of flagging articles differently according to the situation:

Journal editors should consider issuing an expression of concern if:…
Journal editors should consider issuing a correction if:…
Journal editors should consider retracting a publication if…

In this system, an “expression of concern” casts doubt on an article and warns readers that its content raises some issues. In most cases, it describes information that has been given to the journal, which led it to alert its readers about an ongoing investigation, but it does not directly rule on the validity of the work.
On the contrary, when it comes to a “correction”, it is always stated that the core validity of the original article remains, some parts of its content being lightly or extensively modified. In some cases, the transformations have been carried to such an extent (e.g. every figure has been changed) that some actors have ironically coined the term “mega-correction” to characterize them. Contrary to an expression of concern, the authors of the article are fully aware of these modifications and, even if they have not written it, necessarily validate them before the publication of the so-called (mega)correction. If they don’t, journals sometimes publish editorial notes instead of corrections.
Finally, a “retraction” aims to inform the readership that the article’s validity and/or reliability and/or ethical background and/or authorship no longer stands. Far from being an erasure, it is conceived of as the final step in the publishing record of the original article, as the notice of retraction “should be linked to the retracted article”. A retraction is either conducted in close collaboration with the authors, or against them upon the request of someone else who is explicitly named (e.g. a journal editor-in-chief, a colleague, a funding body…).
Ten years later, COPE produced a second version3 of its guidelines, in which the grounds for retraction were extended, for instance to the use of prohibited material or copyright infringement. Two motives are of particular interest:

  • It has been published solely on the basis of a compromised or manipulated peer review process
  • The author(s) failed to disclose a major competing interest (a.k.a. conflict of interest) that, in the view of the editor, would have unduly affected interpretations of the work or recommendations by editors and peer reviewers.

It is no longer only the conditions of production of articles or their content that are targeted, but the very processes of evaluation, which can be pirated or simply distorted if the relationship of the authors to their object is not revealed. Not only can you not trust the content of the paper, but you can no longer trust the process by which journals certify this content. You can only trust them when they certify that they have failed… and these new motives were quickly put to the test.

An Epidemic of retractions?
COVID-19 as a public discussion on papers’ status

A month after the publication of these guidelines, the COVID-19 epidemic began, with open science being adopted as borders closed. We have already dealt with articles about the HCQ treatment and the Lancetgate that followed, i.e. an ultra-fast but complex case of retraction, which moreover recently led the commercial journal to change its peer review process. The editors of the Lancet group conclude their op-ed “Learning from a retraction” with unintended irony: “As trusted sources of information, the Lancet journals are committed to ensuring that our editorial processes will continue to be as robust as possible.” Who needs to learn from a failure if robustness has always been there?

This highly visible example points to another phase in the institutionalization of the retraction object: its public debate, beyond academic circles. The existence of the Retraction Watch database has been acknowledged, and the commonness of retraction has become a public concern, for example in this Canadian article:

Similarly, a 2019 Leger poll for the Ontario Science Centre found 29 per cent of respondents said that because scientific theories are fluid, they can’t be trusted. What’s more important than the erosion in trust, says Caulfield, “is a polarization where people are gravitating toward conspiracy theories or messaging (including misinformation) that is trying to increase distrust because those messages either appeal to their ideological leanings or preconceived notions.” “My fear is if people don’t trust the good science, don’t trust science from these respected journals, it’s going to be increasingly difficult to fight misinformation because people aren’t going to trust the correction.”

Simultaneously, the same database led to discussions and even papers in certain scientific communities. Thus, some authors calculated retraction rates for different topics, in order to assert that COVID-19 was leading to two epidemics: one in human bodies and another one in retractions.

Source: Nicole Shu Ling Yeo-Teh & Bor Luen Tang (2020), “An alarming retraction rate for scientific publications on Coronavirus Disease 2019 (COVID-19)”, Accountability in Research, DOI: 10.1080/08989621.2020.1782203

The founders and employees of Retraction Watch themselves gave a reply in the same journal. Apart from technical remarks about the limitations of the corpus and the inclusion of preprints, the main explanation offered by these respondents is the speed with which the journals intervened, where it usually takes years to produce a retraction, not days or weeks.

The institutionalization of retractions, combined with the focus and urgency of the COVID-19 epidemic, therefore led to seemingly virtuous behaviour, as journals no longer drag their feet in admitting problems and even communicate widely about retractions, no longer ashamed but proud of their professionalism, as The Lancet group journals did. At the risk of giving articles, and more generally scientific discourse, a perfume of permanent reversibility, far from the idea of the incremental self-correction of science.

Yesterday’s truth is today’s ignorance.
Living in a post-truth academic world

Far away from the COVID-19 epidemic urgency, what happens to flagged papers through time? Beyond knee-jerk reactions, corrections can later themselves be corrected, retractions can be “unretracted”, an expression of concern can itself be retracted after 15 years, and some have proposed that “good faith” retractions could be combined with the publication of “replacement” papers4, while the other ones would be permanent. Besides, there is life after death for scientific publications: retracted papers are still cited, and most of their citations do not take notice of their “zombie” status5.

Instead of incorrectly equating the prevalence of retractions with that of misconduct, some consider the proliferation of flagged articles as a positive trend6. In this vision, the very concrete effects of post-publication peer review reinforce scientific facts already built through peer review, publication and citation. Symmetrically, as every published article is potentially correctable or retractable, any scientific information rhymes with uncertainty. The visibility given to these flags and policies undermines the very basic components of the economy of science: how long can we collectively trust peer review and consider that peer-reviewed knowledge should be the anchor to face a “post-truth” world?

  1. This post is partly adapted from Pontille, David, and Didier Torny. “Beyond Fact Checking: Reconsidering the Status of Truth of Published Articles.“, EASST Review, 36, 1, 2017 []
  2. Wager, E., Barbour, V., Yentis, S., & Kleinert, S., on behalf of COPE Council (2010). Retractions: guidance from the Committee on Publication Ethics (COPE) []
  3. https://publicationethics.org/files/retraction-guidelines.pdf []
  4. Like this strange story about a paper on sexual practices during the pandemic []
  5. Bar-Ilan, Judit, and Gali Halevi. “Post retraction citations in context: a case study.” Scientometrics 113.1 (2017): 547-565. []
  6. Fanelli, Daniele. “Why growing retractions are (mostly) a good sign.” PLoS Med 10.12 (2013): e1001563. []

Living in a post-Ingelfinger world or… The HCQ-COVID-19 publication show

Disclaimer: this post does not address the merits of the treatments proposed by the IHU team nor their risks, still less whether Prof. Raoult is a genius, a madman, or a top scientist who got lost along the way.

It all started with a video posted on February 25th, 2020, then entitled “Covid-19: endgame”, and put on YouTube by IHU Méditerranée-Infection. In that less-than-two-minute clip, extracted from the end of a seminar, Didier Raoult states that COVID-19 is “probably the easiest respiratory infection to treat” and that chloroquine (CQ) is effective and already “recommended for all clinically positive cases” in China. It wasn’t the first time this infectious disease star recommended CQ and its cousin molecule, hydroxychloroquine (HCQ), to fight viral infections. Indeed, as early as 2007, he presented these drugs as “an interesting weapon to face present and future infectious diseases worldwide” in the International Journal of Antimicrobial Agents (IJAA). Framed as a recycling of these antimalarial drugs, the article constituted a literature review, mainly of in vitro studies, and was part of the scientific and medical strategy of the IHU: the repositioning of old molecules, free of patent rights, towards new uses. And this possibility of reuse was taken up in a letter sent on February 11th, 2020 to the same journal (IJAA), accepted the same day and published on February 15th.

The series of IJAA publications continued. The day after the YouTube video, a new article was submitted, specifically dedicated to the use of CQ as a treatment for the COVID-19 epidemic. Accepted the next day, February 27th, and published a week later, it repeated the claims of efficacy observed by Chinese clinicians and the resulting clinical recommendation. This assertion is based in particular on one of the strangest references I have ever encountered. Indeed, it is a letter of exactly ten lines published in BioScience Trends, whose body is copied below:

The coronavirus disease 2019 (COVID-19) virus is spreading rapidly, and scientists are endeavoring to discover drugs for its efficacious treatment in China. Chloroquine phosphate, an old drug for treatment of malaria, is shown to have apparent efficacy and acceptable safety against COVID-19 associated pneumonia in multicenter clinical trials conducted in China. The drug is recommended to be included in the next version of the Guidelines for the Prevention, Diagnosis, and Treatment of Pneumonia Caused by COVID-19 issued by the National Health Commission of the People’s Republic of China for treatment of COVID-19 infection in larger populations in the future.

Defined as an “abstract” on the journal site, but without any other body of text, this “article” doesn’t seem to be fully supported by the 7 references listed. It relies mainly on an in vitro study from early February, already widely cited, which indicates that CQ could be effective. In fact, it wasn’t until February 29 that the results of a CQ clinical study were submitted to a Chinese journal, before being published on March 6. But let’s go back to the IHU timeline.

Ten days later, a second video was put on YouTube, presenting the results of an observational study made in Marseille and showing the effects of HCQ alone and in combination with an antibiotic, azithromycin (AZ). So there was a slight shift: going from CQ to HCQ and adding an antibiotic. The main result was only the absence of virus in the nose and throat, so these were not clinical results, but Didier Raoult drew on them to tell his audience their consequences for the clinical institution he manages:

The fact that you no longer have the virus changes the prognosis. Actually, that’s what infectious diseases are all about. If you don’t have the germ anymore, you’re saved… You have a right to be tested here, and if you’re tested, you have a right to be treated here. That is what we will do.

So basically, for him, results were so good that you HAD to treat people when they tested positive. No more trials or research needed: the time for clinical medicine had come, hoping other places would follow his lead. Slides were available on the same webpage, but there was no link to an existing paper; the same day, though not mentioned in the video, a preprint was submitted to medRxiv. Simultaneously, as is often the case with biomedical preprints, it was submitted to a journal… the ever-welcoming IJAA, which accepted it, as usual, one day later and published it on March 20th. Before we come to the extraordinary fate of this paper, let us go back to the title of this post and its interest at this point.

From preprints to preprints:
the life and death of the Ingelfinger rule

We can observe from the two examples above a pattern of scientific communication: the IHU first posts videos, then produces preprints and finally publishes articles in academic journals – here the IJAA. This is very unusual, at least in contemporary times, but it happened in various ways during centuries of scholarly communication. The idea that you first had to communicate with your peers through a journal before reaching “the public” is neither constant nor dominant across disciplines. In our era, it was pushed at a key moment in the mid-1960s. Back then, a first wave of preprints was being supported by the NIH and was gaining momentum in some biomedical communities through Information Exchange Groups (IEG) that would circulate printed copies of unpublished manuscripts by air mail1. Nature started a campaign against the “preprint galore” and a few European and US biology and biochemistry journal editors-in-chief met in Vienna in 1966 to get rid of them by stating that: “The journals listed below will not consider manuscripts for publication if preprints, of essentially identical content, are to be distributed, in substantial numbers, by an agency independent of the author or of the publisher of the journal.”2

That led to the termination of the IEG experiment by the NIH in 1967. Two years later, the New England Journal of Medicine (NEJM) editor-in-chief, Franz J. Ingelfinger, coined the rule of acceptance of a paper, based on his interpretation of “sole contribution”, de facto forbidding even “circulation-controlled journals” from printing something ahead of the NEJM3. In the same sentence, he remarkably included “news media”: he therefore aimed not only at the exclusive circulation of the article within scientific communities, but also at prohibiting the dissemination of its content to journalists and other medical news enthusiasts. In the early 1970s, his work to promote this exclusivity had a double effect: the practice was given the name Ingelfinger Rule, and many high-profile journals adopted it explicitly. While at the beginning of the 21st century the Ingelfinger Rule was often interpreted as a means to fight the duplication of papers, its aim was more about controlling the circulation of knowledge in order to protect the newsworthiness of “general medical journals”4 and to organize communication about medical academic papers in a specific way, favorable to a limited number of journals.

Indeed, as Vincent Kiernan beautifully described in his 1997 article5, the Ingelfinger Rule had become prevalent in Anglo-American journals. It was in particular the efforts of the International Committee of Medical Journal Editors (ICMJE) that built it into a “publishing standard”, whose effect was that these journals and their editors-in-chief simultaneously operated a double control:

  1. control on the authors by requiring them not to reveal the content of their articles, and even less so share the figures and other synthetic representations of results.
  2. control on journalists by providing them with preprint copies of articles in advance, while imposing an embargo on them until actual publication by the journal.

As a result, the general press advertises (free of charge) the content of the journals – it is not an article by Dr. X & Y, but an article from the NEJM or The Lancet – and organizes the dissemination of “medical discoveries” by strengthening the influence of these journals both within academic communities and among press professionals and the general public. To conclude his paper, Kiernan questions the durability of such practices in the Internet era and points out the effect of arXiv preprints, citing the efforts of the ICMJE to extend the Ingelfinger Rule to e-prints, with the argument of the direct consequences of biased or false medical knowledge for the public.

The biomedical field resisted preprints for 15 more years, and the Ingelfinger Rule largely stood6, even if it was adapted to emergency contexts, such as the AIDS epidemic. But Kiernan’s forecast became reality, notably with the creation of bioRxiv in 2013 and the subsequent success of preprints in biology and biomedicine, until preprints became quasi-articles. Consequently, the Ingelfinger Rule was dropped by numerous journals and publishers, even if the NEJM itself keeps a case-by-case policy.

Prof. Raoult and his videos, possibly including slides with the figures so dear to the NEJM, thus live in a post-Ingelfinger world, in which academics can directly manage their own communication, not only in terms of content, but also in terms of comments, criticism, reporting or response. Indeed, we will see that primary communication is not the only thing modified by the abandonment of this rule, but the complete organization of the journal’s centrality in the whole chain of scientific communication.

Chaos and creation around one paper

Let us go back to this first publication by Raoult’s team on the effects of HCQ on viral carriage, published in the IJAA on March 20, 2020. At the time of writing this post, the article has received 1124 citations according to Google Scholar, but also thousands of tweets, blog posts and other references in press articles according to PlumX, a company owned by Elsevier, itself the IJAA publisher. The early circulation of the article was not based on a press release by the IJAA, but on Raoult’s own video and those of his various networks. As Wired recounts, with the help of a lawyer, a retired doctor, a shared Google Doc and an interview on Fox News – a heterogeneous assemblage à la Bruno Latour – the study published in the IJAA won a quote in a tweet from the President of the United States the day after its publication:

That Trump endorsement of course had enormous consequences for the HCQ market, the launching of clinical trials, self-medication practices and the scope of the public discussion on the efficacy and dangers of such a treatment. We won’t directly treat these important questions here, but will keep following the exotic trajectory of the publication itself. Simultaneously with the Trump tweet, a PubPeer thread was launched on the famous post-publication comment platform, but contrary to the Voinnet affair7, most of the first commentators signed their critiques. Among other topics, the communication trajectory of the paper helped the critique: for example, Leonid Schneider noticed the discrepancies between the figures attached to the video and the ones drawn in the published paper.

Above and beyond PubPeer, three reviews were quickly published, questioning many aspects of the IJAA paper. The first was a Twitter thread by a master’s student on March 22nd; the second, an 18-page paper on Zenodo by three British and Irish statisticians on March 23rd; the third, a blog post on March 24th by Elisabeth Bik, a very famous Dutch microbiologist and scientific misconduct specialist. So only four days after publication – still four times the IJAA’s actual review delay – the paper was being trounced online. Among the many points raised, the publishing history was questioned: some noticed the differences between the first “preprint” on the IHU website and the final paper, others underlined the lack of changes, for them a hint of how tenuous the peer review process had been, the 24-hour delay being surprising to every commentator. The fact that one of the authors was also the editor-in-chief of the IJAA was underlined, as well as the “vanishing” of 6 patients (among 26 treated with the combined drugs), which could completely change the statistical value of the results.

While Prof. Raoult was fighting for HCQ to be authorized for general physicians in France, the online discussion kept going until the learned society behind the journal, the International Society of Antimicrobial Chemotherapy (ISAC), made a troubling press release on April 3rd:

“ISAC shares the concerns regarding the above article published recently in the International Journal of Antimicrobial Agents (IJAA). The ISAC Board believes the article does not meet the Society’s expected standard, especially relating to the lack of better explanations of the inclusion criteria and the triage of patients to ensure patient safety. Despite some suggestions online as to the reliability of the article’s peer review process, the process did adhere to the industry’s peer review rules. Given his role as Editor in Chief of this journal, Jean-Marc Rolain had no involvement in the peer review of the manuscript and has no access to information regarding its peer review. Full responsibility for the manuscript’s peer review process was delegated to an Associate Editor. Although ISAC recognises it is important to help the scientific community by publishing new data fast, this cannot be at the cost of reducing scientific scrutiny and best practices. Both Editors in Chief of our journals (IJAA and Journal of Global Antimicrobial Resistance) are in full agreement.”

So the paper has a lot of problems, but its peer review adhered to the rules. This cryptic PR became even more troubling a week later, when it was “replaced” by a joint ISAC and Elsevier press release. In fact, the journal is not owned by the learned society but by the publisher, the IJAA being only an “official society journal”. This second PR is streamlined compared to the first: the “not meeting standard” sentence has disappeared, and a post-publication peer review audit is announced. Through this example, we can measure how different the situation is from what prevailed under the Ingelfinger Rule. But it is with another paper by Raoult’s team that science communication came back to its 17th-century roots.

From presidential visit to media frenzy:
the marginalization of journals in scholarly communication

After a follow-up study published at the end of March, which made fewer headlines, and as some HCQ trials on diverse patient groups were starting to be published, it is with another observational study that Prof. Raoult showed the world how he really managed scholarly communication. On April 9th, the French president, Emmanuel Macron, unexpectedly visited the IHU Méditerranée and met with Prof. Raoult, who presented him the results of his ongoing study. There was no press, but members of the IHU recorded Macron’s arrival and posted the footage, making it available to all the French media.

Here we need to go back to the origins of scientific communication, even before journals were born, when the quality of witnesses – meaning mostly royal kinship – was an important element of the credit given to the narrative of an experiment or an observation8. In our times, it became a two-way credit flux: Macron was showing his will to base public health on evidence, all the more so when provided by a star scientist, while Raoult was legitimizing his position in the French public health landscape, where critics of his methods and results were numerous.

The next day, Raoult made public his first results, not in the form of a preprint or slides with an associated video, but as a simple tweet with the abstract and a summary table.

This tweet was of course massively picked up and commented on, and aroused strong media interest, all the more so as the results reinforced those of the previous study by moving from a purely biological effect to a clinical effect: “The HCQ-AZ combination, when started immediately after diagnosis, is a safe and efficient treatment for COVID-19, with a mortality rate of 0.5%, in elderly patients. It avoids worsening and clears virus persistence and contagiosity in most cases.” Four days later, Prof. Raoult was invited on the show of Dr. Oz, a famous TV host in the US harshly criticized for his often unproven medical advice.

https://www.youtube.com/watch?v=uy1cPT1ztko

On the day of the interview, there was no preprint and the paper had not even been submitted to a journal. Yet Prof. Raoult presented his results as facts. It was only on the 20th that the manuscript was sent to Travel Medicine and Infectious Disease9, with 10 days of peer review and publication on May 5th. Tens of thousands of Facebook comments and tweets have followed according to PlumX10, though the media endorsed the results as much as they reported the methodological limits of the study – mostly the absence of a control group.

This study is undoubtedly a borderline case in the marginalization of journals, with communication aimed primarily at peers being out of step with announcements to political leaders and media outlets. Nevertheless, the massive availability of preprints, abstracts and other materials on topics such as the effectiveness of masks or tests, the persistence of the coronavirus on this or that surface, or cases of cure, has led to significant media coverage. From the point of view of the public authorities and the general public, it could have strengthened the authority of academic journals, again in a position to assert their necessity as an obligatory passage point for public dissemination. But this return to grace assumed that journal peer review is an effective barrier against “bad science”, a hypothesis which has been dismissed by thirty years of studies and literature.

Prestige journals in epidemic times:
an economy of reputation crumbling down?

Indeed, prestige journals are bad for methodology: they follow neither their own standards on reporting clinical trials nor, more generally, disciplinary standards. Yet they remain prized places to publish, even during a pandemic where preprints are so trendy because of the urgency to share results and knowledge. And some HCQ papers had been quietly published in such journals, until one observational study seemed to close the debate on this treatment’s efficacy and risks.

For this study, there was no advance communication, no preprint, but a straight article published in The Lancet by 4 authors. Oh, yes, there is a little gem still there on Twitter: two days before online publication, the “first author” answered a tweet by Richard Horton, editor-in-chief of The Lancet:

https://twitter.com/MRMehraMD/status/1263034198870429696

The reaffirmation of their confidence in the journal peer review system, even in times of health emergency, is comforting. And their trust is shared by the highest health authorities. On May 22nd, the study was published, asserting on the basis of a gigantic aggregation of almost worldwide patient databases that HCQ is not only inefficient, but also very dangerous for COVID-19 patients. This announcement came at a time when many ongoing trials included HCQ treatment arms. As a result, the WHO decided the next day to evaluate the continuation of its Solidarity study and announced its position on May 25th:

“Having met on 23 May 2020, the Executive Group of the Solidarity Trial decided to implement a temporary pause of the hydroxychloroquine arm of the trial, because of concerns raised about the safety of the drug. This decision was taken as a precaution while the safety data were reviewed by the Data Safety and Monitoring Committee of the Solidarity Trial. “

Nevertheless, in a manner similar to Prof. Raoult’s article, statisticians then looked at the content of the article and the data it provides, and began to point out obvious errors. But for some it was more a police investigation than a data re-analysis: how can there be only 4 authors (and no acknowledgements) for such a study? Why are the hospitals involved not mentioned? What is this mysterious enterprise – Surgisphere – unknown until recently, which provides this data? What is the career of its manager, a co-author of the paper? Leaving aside the questions about the company, 6 days after publication they ended up writing an open letter to the authors and the journal, signed by 201 colleagues and endorsed by James Watson11. They mainly point out the necessity of opening the data, all the more so considering the extraordinary results, and describe obvious errors, questioning the quality of the database and the way the data was gathered (including its ethics).

The Lancet and the authors were very prompt in responding to these criticisms: in fact, on May 30 a correction was published, covering very minor aspects: “the numbers of participants from Asia and Australia should have been 8101 (8·4%) and 63 (0·1%), respectively. One hospital self-designated as belonging to the Australasia continental designation should have been assigned to the Asian continental designation.” Of course, the conclusion was a classic of such corrections: “There have been no changes to the findings of the paper.” But critics kept pushing on the problems, whether HCQ supporters – Prof. Raoult himself spoke of “fake data” and “manipulated data” on Twitter – or clinicians trying to reconcile the paper’s data with their own. So, only 3 days after the correction, The Lancet put an expression of concern on the paper:

“Although an independent audit of the provenance and validity of the data has been commissioned by the authors not affiliated with Surgisphere and is ongoing, with results expected very shortly, we are issuing an Expression of Concern to alert readers to the fact that serious scientific questions have been brought to our attention”.

The paper was still salvageable, thanks to the impending independent audit. Alas, two more days and the 3 authors not affiliated with Surgisphere threw in the towel, stating they had never seen the data, and demanded the retraction of the article. The Lancet officialized it, provoking expressions of outrage, the questioning of the seriousness of the journal and… the reactivation of the suspended trials. Thus, in less than a week, the worldwide study published in what many consider to be “one of the best medical journals in the world” was awarded the 3 labels commonly used in post-publication peer review – Correction, Expression of Concern, Retraction12 – nullifying the evidence claimed on May 22nd. But the Surgisphere story goes beyond that article: another paper, published by the NEJM on the “same kind of data”, was retracted on the same day. Moreover, there are at least two regions – South America and Africa – which have suffered and will suffer from public health policies developed on preprints and data published by Surgisphere. While #LancetGate was trending on Twitter, in-depth inquiries were being made into Surgisphere and the 4th author of the study who, ironically, had coauthored a paper entitled “Combating Fraud in Medical Research” in 2013!

Science at its best:
boring, negative results

To conclude this story on scholarly communication, we have to add that most HCQ articles have not been given the same media treatment, nor have they been communicated in fancy ways by their authors: a preprint on bioRxiv or medRxiv, then an article with often unspectacular results and limitations due to the number of patients, their previous health conditions, incomparability between groups, etc. One day before the retractions, the same NEJM published the first randomized controlled trial on post-exposure use of HCQ, quite close to the “Raoult treatment” – AZ not being included. Here is part of the published abstract:
“Side effects were more common with hydroxychloroquine than with placebo (40.1% vs. 16.8%), but no serious adverse reactions were reported. After high-risk or moderate-risk exposure to Covid-19, hydroxychloroquine did not prevent illness compatible with Covid-19 or confirmed infection when used as postexposure prophylaxis within 4 days after exposure.”

What do we get from this abstract? That the article is a typical example of those “negative results” that fail to be published, leading to significant biases in the evaluation of treatments in clinical trials through a “publication bias”13. And yet, not because of its own interest, originality or breakthrough knowledge, but because of its relevance to public health in an epidemic situation, this trial has been published by the other “world’s best medical journal”.

While predictions of “really bad science to come” have rung true for most commentators, supported by a high number of retractions, the COVID-19 academic publication landscape has also shown a massive uptake of preprints, public education on scientific controversies, conflicts of interest and statistical analysis, and furthermore… yes, the publication of null results in prestige journals. Whether you think this is a total mess and you preferred the Ingelfinger rule depends on the way you conceive academic research and scholarly communication. Back then, preprints were non-existent in biology and social networks had yet to be invented, but The Lancet published the Wakefield paper on the link between the MMR vaccine and autism. Was it a better time?

  1. See Cobb, Matthew, 2017. “The prehistory of biology preprints: A forgotten experiment from the 1960s.” PLoS Biology 15.11 []
  2. Thorpe, W. V. (1967). International Statement on Information Exchange Groups. Science, 155(3767), 1195-1196. []
  3. Ingelfinger, Franz. “Definition of ‘sole contribution’.” N Engl J Med 281 (1969): 676-677. []
  4. Ingelfinger, F. J. (1977). The general medical journal: for readers or repositories?. New England Journal of Medicine, 296(22), 1258-1264. []
  5. Kiernan, V. (1997). Ingelfinger, embargoes, and other controls on the dissemination of science news. Science Communication, 18(4), 297-319. []
  6. See as an example this defense of the rule by Nature in 2010, five years after having written they were ok with preprint servers []
  7. see Torny, Didier. “Pubpeer: vigilante science, journal club or alarm raiser? The controversies over anonymity in post-publication peer review.” 2018 and Guaspare, Catherine, and Emmanuel Didier. “The Voinnet Affair: Testing the Norms of Scientific Image Management.” Gaming the Metrics: Misconduct and Manipulation in Academic Research (2020): 157. []
  8. See the classic book Shapin, S., & Schaffer, S. (1985). Leviathan and the air-pump: Hobbes, Boyle, and the experimental life (Vol. 109). Princeton University Press []
  9. A journal in which one of the authors is an associate editor, as Raoult’s critics have underlined []
  10. The story is quite different within the academic world, with “only” 21 citations so far, far less than the March study. In fact, many observational studies and trials were competing with this study []
  11. EDIT June 9th: James Watson gave a fantastic interview on an Australian radio station where he goes into detail about how he started and ran this 5-day inquiry; hear it there []
  12. On the standardization of journal policies, see Pontille, D., & Torny, D. (2017). Beyond Fact Checking: Reconsidering the Status of Truth of Published Articles. []
  13. There is a huge literature on this topic from the last 30 years; see as an example this article in The Lancet: Easterbrook, P. J., Gopalan, R., Berlin, J. A., & Matthews, D. R. (1991). Publication bias in clinical research. The Lancet, 337(8746), 867-872. []

The perfect hacking of journal peer review or… The fastest way to become a Highly Cited Researcher

Since the beginning of the 21st century, the names of great fraudsters have spread beyond academic arenas, each bringing their biography, their practices and the astonished tale of the discovery of their misdeeds. This starification of fraudsters should neither hide the existence of famous cases in the past1, nor the multitude of ordinary misdemeanors and misconduct taking place daily in the academic world, which is hardly different from other professional circles in this respect. Nevertheless, they deserve a place in the Hall of Fame of academic fraudsters; so, before addressing the case of our new champion, Kuo-Chen Chou, let’s review a few exemplary figures of this Hall of Fame, in alphabetical order.

Yoshitaka Fujii (2012): enduring Japanese anesthesiologist who holds the world record for the number of retracted articles (183). He spent his career inventing data and, despite a statistical analysis published in 2000 showing how “too nice” his numbers were, he was not really worried until 10 years later.2

Woo-Suk Hwang (2006): amazing Korean veterinarian and biologist, specialized in stem cells and producer of the first human clone, announced in a publication in Science. After he was accused of forcing his technicians to donate their eggs for his research, investigations revealed the total absence of human cloning. A national glory in South Korea and an international star, his public downfall was so brutal that he made the cover of Time Magazine.

Jan-Hendrik Schön (2002): industrious German physicist, working at Bell Labs on the limits of matter and life, able to co-author in less than two years seven papers in Nature, eight in Science and six in Physical Review. All have of course since been retracted, and it seems that all his research, including his thesis, was based more on his desire to stick to the expectations of theory or those of his colleagues than on the empirical results he claimed to have achieved.3

Diederik Stapel (2011): extraordinary Dutch psychologist, whose social experiments always proved the hypotheses made… since they were never carried out, but fabricated on paper and computer. Denounced by a whistleblower from his team, and 58 retracted articles later, he was the object of a sensitive New York Times portrait. His own production became an object for the psychology of deception, as his colleagues found small differences in style between his genuine articles and the fake ones.

From transparent peer review
to citation manipulation

These eminent members of the Hall of Fame, all men – women are an extremely small minority among the elected members – produced “false science” but neither massively plagiarized nor attacked the peer review system. Rather, like good forgers, they provided journals with the expected raw material. However, over the last 10 years, there has been concern about how reviewers or publishers can indirectly influence the science produced, in a more subtle way. “Coerced citations”, “fake peer review” and “citation cartels” are all designations of practices that do not directly fudge the content of articles, but act on the margins by hacking into the journal peer review process.

So the old criticisms about the misdeeds of anonymity in journal peer review4 were reborn at great expense, and many debates about “transparent peer review” took place. Where in the past editors and authors were at the helm of these discussions, now the publishers are in charge and, above all, Elsevier. The company provided two in-house researchers with access to its back office, and they were able to compare the bibliographies of manuscripts with those of the published articles and check whether added references were coauthored by reviewers of the manuscript. Unsurprisingly, the authors of When Peer Reviewers Go Rogue concluded that citation manipulation does occur, even if its level is quite low (0.79%).
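To make the detection logic concrete, here is a minimal sketch of that comparison – not Elsevier’s actual pipeline, whose data structures and matching rules are not public; the reference format and all names below are hypothetical:

```python
# Minimal sketch of reviewer-coerced citation detection: flag references
# that appear in the published version but not in the submission, and that
# share an author with one of the manuscript's reviewers.
# The data model (dicts with "id" and "authors") is purely illustrative.

def added_references(submitted_refs, published_refs):
    """References present in the published article but absent from the submission."""
    submitted_ids = {ref["id"] for ref in submitted_refs}
    return [ref for ref in published_refs if ref["id"] not in submitted_ids]

def flag_reviewer_coauthored(submitted_refs, published_refs, reviewers):
    """Among added references, keep those coauthored by at least one reviewer."""
    reviewer_names = {name.lower() for name in reviewers}
    return [ref for ref in added_references(submitted_refs, published_refs)
            if reviewer_names & {a.lower() for a in ref["authors"]}]

# Hypothetical example: one reference added during review, coauthored by a reviewer.
submitted = [{"id": "doi:10.1000/a", "authors": ["A. Author"]}]
published = submitted + [{"id": "doi:10.1000/b", "authors": ["R. Reviewer", "B. Author"]}]
print(flag_reviewer_coauthored(submitted, published, ["R. Reviewer"]))
```

The hard part in practice is of course author-name disambiguation across databases, which such a toy comparison deliberately ignores.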

At the same time, a research team led by the famous John Ioannidis sought to build a “clean” citation database for the most cited researchers. For them, this meant being able to separate self-citations from the rest, and it was on this occasion that they made a surprising discovery: the staggering level of self-citation of some colleagues. Indeed, as your citation count rises, you expect more and more of them to come from distant colleagues. Not for everybody:

“Vaidyanathan, a computer scientist at the Vel Tech R&D Institute of Technology, a privately run institute, is an extreme example: he has received 94% of his citations from himself or his co-authors up to 2017 (…) He is not alone. The data set, which lists around 100,000 researchers, shows that at least 250 scientists have amassed more than 50% of their citations from themselves or their co-authors, while the median self-citation rate is 12.7%” ((Nature, Hundreds of extreme self-citing scientists revealed in new database, 19th August 2019)).
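As a rough illustration of the metric at stake – the share of incoming citations coming from a researcher or their co-authors – here is a minimal sketch under simplified assumptions (no name disambiguation, co-authors read directly off the cited papers); all records below are hypothetical:

```python
# Minimal sketch of a self-citation rate in the broad sense used above:
# the share of a researcher's incoming citations whose citing paper is
# (co)authored by the researcher or one of their co-authors.

def self_citation_rate(researcher, cited_papers, citing_papers):
    """cited_papers: the researcher's own papers; citing_papers: papers citing them."""
    circle = {researcher}
    for paper in cited_papers:
        circle.update(paper["authors"])  # the researcher plus all co-authors
    if not citing_papers:
        return 0.0
    self_cites = sum(1 for p in citing_papers if circle & set(p["authors"]))
    return self_cites / len(citing_papers)

# Hypothetical example: 2 of 3 incoming citations come from the author's circle.
own_papers = [{"authors": ["R. Searcher", "C. Oauthor"]}]
incoming = [{"authors": ["R. Searcher"]},
            {"authors": ["C. Oauthor", "X. Other"]},
            {"authors": ["D. Istant"]}]
print(self_citation_rate("R. Searcher", own_papers, incoming))  # 0.666...
```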

But then, what are the Highly Cited Researchers, whose numbers are one of the components of the Shanghai Ranking, actually doing? Are they renowned for their influential results, or are they more adept manufacturers on the citation chain? Bibliometricians would say that a “high” self-citation rate is not necessarily a sign of fraud, but that a detailed inquiry would be needed. This is where we return to our newest member of the Hall of Fame, whose work is worthy of close consideration.

A perfect hacker,
always greedy for citations

It all starts with a mundane story: a reviewer asks for additional references in a manuscript. But the request itself is not so trivial: it consists of 35 references, the vast majority of which are co-signed by him, and he indicates that his recommendation to the editors, on whether or not to accept the manuscript, will depend heavily on their inclusion. It should also be specified that this request was not made for a single review, but for each of the manuscripts that passed through his hands. In describing their decision to ban this unnamed reviewer, the editors did not indicate how long this practice had existed. Indeed, it is unusual, to say the least, to request the addition of so many references, and one might question the editors’ own responsibility in this matter if it lasted as long as their reference to the “most recent reviews” seems to imply. To which they reply:

“One might ask how this reviewer got away with submitting multiple reviews containing coercive requests for citation before being banned. The shortest explanation is that excessive self-citation demands are generally not seen as an ethical problem until a pattern is established, and a decentralized peer-review system is not amenable to detecting patterns” ((Wren, Jonathan D., Alfonso Valencia, and Janet Kelso. “Reviewer-coerced citation: case report, update on journal policy and suggestions for future prevention.” (2019): 3217-3218.)).

And in fact they inquired with other journals, which suggested the same pattern of behaviour for this reviewer. A year later, in early 2020, the investigation led to an editorial in another journal, the Journal of Theoretical Biology (JTB), which reveals perhaps the most complete case of citation manipulation to date. Indeed, the hacker was no longer a simple reviewer there, but a “handling editor” for JTB, which enabled him to act at several stages of a manuscript’s life, with a single objective: to accumulate citations.

  1. He took charge of many manuscripts from his research centre to ensure that they were well treated (conflict of interest).
  2. He chose the reviewers requested by the authors, designated colleagues from his own centre (conflict of interest), or even reviewed manuscripts himself under a false name (ghost peer review).
  3. In many cases, when the reviews came back, he would ask for the title of the article to be changed so that it explicitly referred to his own algorithm, as well as for a discussion of his own work in the introduction and conclusion (coerced citations).
  4. As a result, he requested the addition of a very large number of references (up to more than 50) to the bibliography of the manuscript (coerced citations).
  5. Just before the acceptance of the manuscript, he would be added as a co-author of the article (gift authorship).

We therefore observe two complementary types of behaviour. On the one hand, hidden from the outside, he hacked the journal’s peer review flow, capturing the evaluation process to ensure that the articles most “favourable” to his citation count were actually published – sometimes with his co-authorship. On the other hand, visible to the authors and perhaps to the editor-in-chief and publisher, he hacked the byline, content and references of the manuscripts through imperative requests for inclusion. Thus, ordinary manuscripts became articles loaded with citations of the hacker.

It can be noted that at this stage the name of the reviewer was not given by JTB, which led some to make educated guesses on Twitter. News articles, in Nature among others, soon followed and revealed his identity: Kuo-Chen Chou, a retired Chinese-American biophysicist. We then learn that he had been a member of the Highly Cited Researcher “club” for years5. So, as usual, this extraordinary case will be treated as “rare”, and counter-measures have been taken, such as an algorithm written by one of the Bioinformatics editors. But ordinary gaming will still happen, whether in so-called predatory journals or at “prestigious” publishers, with smarter colleagues less greedy for citations and not obsessed with the HCR club. Will you be one of them?6

  1. for example John Darsee, see Broad, William; Wade, Nicholas (1983), Betrayers of the Truth: Fraud and Deceit in the Halls of Science, London: Century Publishing, ISBN 0-7126-0243-7 []
  2. For a quick view of this case, see Pontille, David, and Didier Torny. “Behind the scenes of scientific articles: defining categories of fraud and regulating cases.” (2012). []
  3. He was the subject of a wonderful book, Plastic Fantastic, ISBN 978-0-230-22467-4 []
  4. See David Pontille and Didier Torny, “The blind shall see! the question of anonymity in journal peer review.” Ada: A Journal of Gender, New Media, and Technology, No.4. doi:10.7264/N3542KVW (2014). []
  5. the Web of Science Group didn’t list him in 2019 as he had, like others, a high rate of self-citations but, as stated, “Although this list is updated and refreshed each year, a Highly Cited Researcher is always a Highly Cited Researcher—whether their name was included in 2013 or 2019.” []
  6. I am aware that this post contains two self-references but they won’t be counted in any database []

From sharing to versioning to citing to retracting or… How preprints became quasi-articles

The forms of communication in academic communities are very diverse: articles, seminars, books, colloquia, mailing lists, posters, letters, workshops, proceedings… The reasons why each one is chosen are multiple, and the formats live their own lives through new uses, far beyond the initial intentions of their creators. As we will see, preprints, though they have a relatively short history, followed complex patterns to become something more than shared documents.

It is first necessary to agree on the designation of these entities: working papers, discussion papers, e-prints and preprints will be considered in this post as equivalents. All are written texts, produced by authors without any form of certification, and made available without any paywall at a perennial web address. Contrary to Wikipedia, we don’t distinguish them on the basis of their future publication in a journal. We will also ignore the issue of their licensing, for two reasons: historically, preprints existed long before the release of CC licenses – and many of them continue to be unlicensed; pragmatically, because our focus here is on the use of preprints, not their re-use.

Before scholarly communication went electronic, some disciplines had already experimented with preprints, notably psychology and the biomedical sciences. This meant paper manuscripts circulated by mail, with quite high associated material costs (reproduction, stamps). But cost was not the primary reason some of these practices ceased: in biomedicine, publishers were vigilant and their editor-in-chief allies declared a ban on the publication of manuscripts that had already circulated. Physics, on the other hand, especially high-energy physics, pioneered these practices and continued to generate these mail flows before transferring them to the electronic world in the 1970s. Using the compactness of the TeX format, these preprints started to be distributed by email, and then Paul Ginsparg had the idea of building an automatic BBS, basically inventing ArXiv.

E-print servers as competitors of journals?

Until then, in all disciplines, usage had essentially been the same: to facilitate the consideration and discussion of recent research and results by circumventing the obstacle of journal publication delays. Admittedly, a large number of conferences had adopted the practice of proceedings, allowing a reduction in this delay, but these remained very largely attached to the world of paper printing. Following the success of ArXiv, several e-print services were launched in the mid-1990s, and Stevan Harnad predicted their pre-eminence over journals as the central venue for distribution:

“the best people start putting stuff and readers start saying :’Why wait for the journal to come out? I have to teach this stuff, I have to know this stuff, I can get it to the archive’ and then the libraries come around and say ‘should we order this journal?’ and the scientist says ‘I don’t care, I no longer read in paper’.”

It seems obvious that this prediction did not come true, far from it, and the 2000s saw a world divided between a few disciplines massively practicing preprinting (physics, mathematics, computer science, economics) and the rest of the academic world loftily ignoring them. Nevertheless, preprint uses – on both the authors’ and the readers’ side – started to compete with those of journals. In a peer-review-like fashion, the “raw” circulation of a manuscript for discussion regularly produced new versions of a preprint. On ArXiv, more than a third of the preprints exist in 2 versions and more than 10% in 3 versions, or even more: Hirsch’s famous manuscript inventing the h-index had 5 versions, 4 of them before submission to PNAS. On the readers’ side, researchers soon started to cite not only published papers but also preprints – then often called e-prints – on a massive scale.

These new reading and referencing practices have fed a vast literature on the citation advantage of open access articles over those available only through subscription paywalls1. Beyond this possible advantage – monetized by big publishers for their hybrid journals in a commercial version of open access – these practices shed light on the change in status from simple “manuscripts” to texts integrated into the published literature. To get them completely out of their grey-literature status, Paul Ginsparg proposed as early as 1996 to add overlaid information to preprint servers, which led on the one hand to the creation of overlay journals proper, and on the other to various recommendation devices for preprints, among other texts.

The accelerated life cycle of preprints

The “standardization” of preprints through citation or certification is not the only notable development. The recent disciplinary extension of preprint servers, in what is often described as a second wave2, is also significant and has consequences for their uses. Let us take the example of the life sciences, with the development of bioRxiv, a platform launched in 2013 which published 30,000 preprints in 2019 alone.

From this video, put online at the time of the platform’s inauguration, we will retain two elements: speed and discussion. If high-energy physicists, because of the weight of their infrastructures, work organization and authorship practices, are used to living in a world with little publishing competition3, this is not the case for many computer scientists who already published on ArXiv, especially in the artificial intelligence branch. Also, flag-planting to establish priority and (theoretically) gain scientific credit has been a common operation on ArXiv, the server’s timestamps acting as a certification of the order of arrival. If this speed also matters in the life sciences to avoid getting scooped, it should equally be considered in contrast with the slowness of journals: speed of publication has often been an argument for different outputs, and the tension between rapid dissemination and quality of certification is at the heart of the history of journal peer review4.

For life scientists, and especially early career researchers on short-term contracts, speed is less a question of priority than of simply getting their results circulated, so as to build some credit for their next assignment. Until preprints, no publications meant no credit. Now they have at least something, especially since some organizations have recognized preprints as legitimate outputs for CVs in grant applications. Of course, they still need publication in journals, which leads us to the role of discussions. As we have seen in the case of ArXiv, discussions often feed a release cycle in the form of new preprint versions. In the life sciences, this is apparently much less the case: a recent study by Kent Anderson5 shows that the majority of preprints were posted after they were submitted to a journal, so the “discussion”, rather than feedback from readers of the preprint, takes the form of peer review within a given journal.

From speed to emergency:
Preprints can be retracted too

At this point, we need to address the question of the audiences targeted by preprint servers: if these were initially pure academic community exchanges, things have changed with the popularity of social networks. Indeed, the Knowledge Exchange report cited above highlighted the crucial role played by Twitter in the dissemination of preprints by their authors or by the platforms themselves. This dissemination to fringe and non-academic audiences has several consequences, such as the reuse of preprints by marginalised communities or communities with minority knowledge and beliefs. This is also the case for the links to blogs included in ArXiv trackbacks, for which it is very difficult to reach a consensus on the “serious” or “eccentric” character of a website6. If Anderson concluded that the promise of discussion was not kept within the platform in the case of bioRxiv, it doesn’t necessarily mean that discussion is limited to journal peer review, as an unexpected event has just shown us.

In fact, the 2019-nCoV coronavirus has been a test for bioRxiv as it came to the forefront of scientific information. Yet, since the 2003 SARS virus, the international health community, strongly pushed by the WHO, has seemed to favor data and information sharing over scientific credit or patents. In recent epidemics, even the paywalls of big publishers have been opened in order to maximise the sharing of existing knowledge. Now that bioRxiv is strongly established, it is the easiest legal way to combine sharing, speed and some credit coming from priority7. And indeed, the preprint server has been flooded with coronavirus papers.

This new disclaimer – which specifies, in the current case, a general policy stated at the top of each preprint – emphasizes a potential audience of preprints: the media. For a long time, the majority of senior life scientists have feared that uncertified preprints would be taken for granted and that a flow of “bad science” would reach lay audiences. And their strongest fears apparently came true, as an article suggesting the artificial nature of the current virus quickly fed conspiracy sites and flows, “proving” that the epidemic could only be, at the very least, the result of a failed experiment. But preprint publicity is more ambiguous: as its links spread, the article was severely criticized, in a very well-argued way, by colleagues. Moreover, bioRxiv is one of the few preprint servers with a comment feature attached to the preprints it hosts. And this paper received a lot of comments! So much so that the preprint was retracted less than 2 days after its publication – or, more exactly, the authors withdrew it following all these comments, whereas previously the retraction of a preprint had been envisioned only if its published heir had endured this exact fate.

The interpretation of this ultra-fast life cycle is of course contrasted: the creators of Retraction Watch see it as a victory for science in preprint mode, while K. Anderson and others consider that such an article would never have appeared in a top-level journal. But the outcome of this debate on the quality of journals vs. preprint servers should not obscure the profound transformation of preprints. The Harnad vision began to come into reality more than 20 years later, but in a twisted way. While preprint servers didn’t replace journals, preprints have become quasi-articles: they are used to establish priority, have a DOI, generate some scientific credit, are read and cited, change through at least informal discussion processes, appear on CVs, are archived, and generate media interest. And even if by name they are pre-publications, they are now subjected to the most stringent post-publication peer review decision.


  1. This literature is so vast and contradictory that Ben Wagner has made an annotated bibliography of it []
  2. see the very good synthesis funded by Knowledge Exchange, Chiarelli, Andrea, et al. “Accelerating scholarly communication: the transformative role of preprints.” (2019) []
  3. In her groundbreaking 1988 book, Sharon Traweek stated that publications were not important for them, as they were only archives, record-keeping of the things that really matter []
  4. see Pontille, David, and Didier Torny, “From manuscript evaluation to article valuation: the changing technologies of journal peer review.“. Human Studies 38.1 (2015): 57-79. []
  5. “bioRxiv: Trends and analysis of five years of preprints.” Learned Publishing (2019). []
  6. see Ritson, Sophie. “‘Crackpots’ and ‘active researchers’: The controversy over links between arXiv and the scientific blogosphere.” Social studies of science 46.4 (2016): 607-628. []
  7. On the illegal side, activists have built a specialized archive based on Sci-Hub. []

The Political Economy of Academic Publications

This blog is part of a vast research program on the political economy of scientific publication, which has been strongly transformed over the last twenty years by the electronic dissemination of journals. It considers publishers, editorial committees and journals as socio-political actors, to be studied under the three complementary aspects detailed below.

Firstly, they are analysed as economic actors defining publishing markets. The conditions under which these markets were created have been the subject of much criticism, and strong transnational mobilisations around open access have developed, influencing the construction of public policies that contrast internationally. New economic models have emerged, of which direct payment by authors (APC) is only the most visible, though not the most frequent. The multiplication of coloured labels (Green, Gold, Platinum, Bronze, Diamond) to designate these models does not fully account for their subtle differences, nor for the sustainability of the associated business models, compared to the classic subscription model, which has led to a “serials crisis” over the last 20 years, with a massive increase in the cost of access to publications for libraries.


Secondly, journals and publishers are studied as places of production, including innovations in evaluation technologies (open peer review, technical-soundness-based review…). In particular, the growing debate on post-publication peer review policies, including the withdrawal of articles, will be examined, as well as the emergence of platforms for the public discussion of articles’ validity, such as PubPeer. The question of the centrality of journals for peer review, or their marginalization (overlay journals, recommendations…), will also be addressed.


Thirdly, journals are treated as places of valorisation, seeking to attract authors and promote their position through the use of different measures (citations, referencing, uses…), which they highlight or criticise. In addition to the recurring debates on the Journal Impact Factor, a measure currently much decried, there will be discussions of alternative metrics, or even responsible metrics, which are supposed to better represent academic production and its uses.


These three aspects aim in particular at shedding light on new forms of self-regulation by academic actors (systematisation of publicity for the withdrawal of articles, generalisation of post-publication peer review, stigmatisation of predatory publishers, uses of Creative Commons licenses…), the innovative and argumentative work of publishers and platforms, whether public, para-public or private, and the redefinition of public policies in the field of academic publication.