So to push you to read the work of diverse and enlightening contributors, I have remixed and shortened our chapter about readers and their empowerment in contemporary peer review. In the full version, we underlined one of the most decisive effects of open access: the accelerating rise to power of ordinary readers1.
Pre-Publication Peer Review as Reading
Throughout the history of peer review, the three judging instances (editors-in-chief, editorial committees, outside reviewers) that have gradually emerged were the first readers of submitted manuscripts. This may seem trivial, but the essential activity of evaluating an article – unlike other types of academic evaluation – is indeed the handling of a text. Admittedly, the peer review of an article can be considered to include many other things, such as checking that ethical rules are being followed or that data is actually being made available, but the question of taking into account the content of the article – whether in the form of a paper file or a computer file – has always been essential. Acts of reading are far from simple, whether you consider “geographies of reading”2 (with whom, where, in what setting), what attracts the attention of readers, how texts are annotated, how journals inform those practices, and what the purposes of such acts are.
Their respective importance and the way in which their readings are coordinated may be subject to local conventions at a journal, disciplinary, or historical level. They are also marked by profound divergences due to distinct issues in manuscript evaluation. The space of possibilities within which these readings are conducted is a subject for public debate that leads to the invention of labels and the stabilization of categories, and to the elaboration of procedural and moral norms. For example, on the respective anonymity of authors and referees, four labels have been coined since the 1980s:
                        Reviewers anonymized    Reviewers identified
Authors anonymized      Double Blind            Blind review
Authors identified      Single Blind            Open review
Source: David Pontille and Didier Torny, “The Blind Shall See! The Question of Anonymity in Journal Peer Review,” Ada 4 (2014), https://doi.org/10.7264/N3542KVW.
These spaces of possibility currently coexist in each discipline, being attached to different scientific and moral values, pertaining to the responsibility of reviewers, objectivity of judgements, transparency of process, and equity toward authors. The different possibilities here show that Merton’s “organized skepticism” and the agonistic nature of the production of scientific facts described by Latour and Woolgar long ago are, indeed, not self-evident. The contemporary moment is characterized by reflexive readings of peer-review technologies: manuscript evaluation has itself become an object of systematic scientific investigation. Authors, manuscripts, reviewers, journals, and readers have been scrupulously examined for their qualities and competencies, as well as for their “biases,” faults, or even unacceptable behavior. The diverse arrangements of manuscript evaluation are thus themselves systematically subjected to evaluation procedures.
Post-Publication Peer Review as Ordinary Readers’ Empowerment?
Peer review in the twenty-first century can also be distinguished by a growing trend: the empowerment of “ordinary” readers as new key judging instances. If editors and reviewers produce judgments, they do so through reading within a very specific framework, confined to restricted interaction, essentially via written correspondence, which aims at authorizing the dissemination of manuscripts-become-articles. Other forms of reading accompany publications and participate in their evaluation, independently of their initial validation.
Citing articles: with the popularization of bibliometric tools, citation counting has become a central element of journal and article evaluation. But it also needs a transformation of formats, an identification of references, and a fundamental inversion: the act of referencing relates to a given author, whereas a citation is a new and perhaps calculable property of the source text, creating what Wouters called “citation culture”. Highly disparate forms of intertextuality are thereby rendered commensurable: the measured or radical criticism of a thought or result, integration within a scientific tradition, reliance on a standardized method described elsewhere, existence of data for a literature review or meta-study, simple recopying of sources, or self-promotion. Citation thus points towards two complementary horizons of reading: science as a system for accumulating knowledge via a referencing operation, and research as a necessary discussion of this same knowledge through criticism and commentary.
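To make the inversion at stake here concrete – from referencing as an author’s act to citation as a countable property of the cited text – here is a minimal sketch in Python; the identifiers and bibliographies are invented for illustration:

```python
from collections import Counter

# Hypothetical corpus: each citing article mapped to the works its
# bibliography points to (stand-ins for DOIs or other stable identifiers).
references = {
    "article-A": ["paper-X", "paper-Y"],
    "article-B": ["paper-X"],
    "article-C": ["paper-X", "paper-Z"],
}

# Referencing belongs to the citing author; a citation count is the same
# relation read backwards, aggregated over the whole corpus.
citations = Counter(cited for refs in references.values() for cited in refs)

print(citations.most_common())  # [('paper-X', 3), ('paper-Y', 1), ('paper-Z', 1)]
```

Everything bibliometric tools add – normalized formats, disambiguated references – serves to make this inversion computable at scale.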
Commenting texts: in a view of publication as explicitly dialogical or polyphonic, readers can become commenters. Traditionally, before an article was published, comments were mainly directed toward the editor-in-chief or the editorial committee. Through open review, commenters enter into a dialogue with the authors and thus open up a space for direct confrontation. Prior to the emergence of electronic spaces for discussion, this role was played by objects like “special issues” or “reports”, in which a series of articles are brought together around a given theme to feed off one another after a short presentation. Post-publication commenting was also common through two elementary forms: referring to the original article, or sending a letter to the editor. The electronic space led to many experiments in post-publication commenting: most of them met with no success (PLOS, Nature, PubMed Central…), until the unexpected success of anonymized comments on PubPeer.
Sharing papers: until recently, readers other than citers and commenters remained very much in the shadows. Yet library users, students in classes, and colleagues in seminars, to take just a few examples, also ascribe value to articles, for example through annotation. The existence of articles in electronic form has made their readers more visible. Persons who access an “HTML” page or who download a “PDF” file are now taken into account, whereas in the past it was only the distribution of journals and texts, mostly through libraries, which allowed one to assess potential readership. By inventorying and aggregating the audience in this way, it is possible to assign readers the capacity to evaluate articles. The creation of online academic social networks (e.g., ResearchGate, Academia.edu) has trivialized this figure of the public, not only by counting “academic users,” but also by naming them and offering contact. At the same time, online bibliographic tools (e.g., CiteULike, Mendeley, Zotero) objectify the readers and taggers who introduce references and attached documents into their bibliographic databases. Without being citers themselves, these readers select publications by sharing lists of references, the pertinence of which is signaled by the use of “tags.” These reader-taggers are also embedded in the use of hyperlinks within “generalist” social networks (e.g., Facebook, Twitter), alerting others to interesting articles or briefly commenting on their content, feeding the whole “article-level metrics” movement. Here the readers, tracked by number and diversity, revalidate articles in the place of the judging instances historically qualified to do so.
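The “article-level metrics” movement rests on precisely this kind of aggregation: heterogeneous reader traces counted per article rather than per journal. A minimal sketch, with invented events and source names:

```python
from collections import defaultdict

# Hypothetical reader traces, as altmetrics aggregators collect them:
# (article identifier, type of trace).
events = [
    ("10.1000/article-1", "html_view"),
    ("10.1000/article-1", "pdf_download"),
    ("10.1000/article-1", "tweet"),
    ("10.1000/article-2", "mendeley_save"),
    ("10.1000/article-1", "mendeley_save"),
]

# One profile of counts per article: the article, not the journal,
# becomes the unit of evaluation.
profiles = defaultdict(lambda: defaultdict(int))
for article_id, source in events:
    profiles[article_id][source] += 1

for article_id, counts in profiles.items():
    print(article_id, dict(counts))
```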
Examining documents: this movement is even more significant in that these tools are applied not only to published articles but also to documents which have not been validated, through the growth of preprint servers. This flow of electronic manuscripts feeds the enthusiasm of the most visionary who, since the 1990s, have been announcing the end of journals. On the contrary, we observed that new technologies have been built on these archives, such as “overlay journals,” in which available manuscripts are later validated by reading peers in various ways. With a view to dissemination, advocates of readers as a judging instance tend to downplay the importance of prior validation. While the validation process sorts manuscripts in a binary fashion (accepted or rejected), such advocates contend that varied forms of dissemination instead encourage permanent discussion and argument along a text’s entire trajectory. In this perspective, articles remain “alive” after publication and are therefore always subject not only to various reader appropriations, but also to public evaluations, which can reverse their initial validation through the flagging of articles in official journal policies.
The Academic Closet vs. The Readers’ Bazaar
Driven by a constant process of specialization, the extension of judging instances to readers may appear as a reallocation of expertise, empowering a growing number of people in the name of distributed knowledge. In an ongoing context of revelations of massive scientific fraud, which often implicates editorial processes and journals themselves, the dereliction of judging instances prior to publication has transformed the mass of readers into a vital resource for unearthing error and fraud. As in other domains where public expertise used to be exclusively held by a few professionals, crowdsourcing has become a collective gatekeeper for science publishing. Thus peerdom shall be reshaped, as lay readers now have full access to a large part of the scientific literature and have become valued audiences as quantified end-users of published articles.
If open science has become a motto, it encompasses two different visions for journal peer review. The first one, which includes open identities, takes place within the academic closet, where the dissemination of manuscripts is made possible by small discourse collectives which shape consensual facts. This vision is supported by the validation processes designed by Robert Boyle during the emergence of modern scientific practices. By contrast, in a Hobbesian fashion, the second one urges openness in multiple ways, building an academic democracy where each reading may literally be counted. The disentanglement of peer evaluation goes through the ability given to readers to comment on published articles, to produce social media metrics through the sharing of documents, and to observe the whole evaluation process of each manuscript. In this vision, scholarly communication relies not only on crowdsourced peer review but on a plurality of instances that generates a continuous process of judgment. The first vision has been at the heart of the scientific article as a genre, and a key component of the scientific journal as the most important channel for scholarly communication. Whether journals remain central in the second world has yet to be determined.
“We” means David Pontille and myself. You can read the full chapter here. Of course, as readers, you are welcome to cite, comment, share & examine this chapter [↩]
Livingstone, David N. “Science, text and space: thoughts on the geography of reading.” Transactions of the Institute of British Geographers 30.4 (2005): 391-401. [↩]
Retraction Watch has celebrated its 10th anniversary, and its creators have grown from a small blog into a reputable entity: funded by numerous donors, a source of academic publications, run by the Center for Scientific Integrity, and manager of a database acknowledged for its quality. With the COVID-19 epidemic, the retraction of scientific articles (and even preprints) has become a mainstream media object, fully public beyond the academic communities directly concerned.
The institutionalization of the website mirrors that of retractions themselves, which have become partly normalized into the publishing process as a key part of post-publication peer review. In this post, written for the Peer Review 2020 week, whose theme is “Trust in peer review”, we will briefly look at journal policies and how they change the actual trust given to published articles1.
Flagging published articles. Don’t trust what you read
“Certified”, “peer-validated”, “peer-reviewed”: all these notions refer to different practices but share the same objective: to assert that the text you are reading is not the simple product of authors’ reflections and their exploration of a phenomenon, of theories and observations, but the outcome of a more or less complex process of evaluation of a manuscript by others – people not recognized as co-authors, but sufficiently knowledgeable about the subject, the methods, and the literature to indicate to you that this content is valid.
Then, of course, scandals and other fraud cases multiplied, science stars fell one after another, but you could always believe that these were exceptions, special cases, and that almost all articles contained true and proven statements… at least until 2009. That year, the COPE organisation published its first standards2 on retracted articles, showing that it was not only normal but expected that journals would plan to remove from the scientific canon articles they had previously published. To be more precise, it was a matter of flagging different articles according to the situation:
Journal editors should consider issuing an expression of concern if: …
Journal editors should consider issuing a correction if: …
Journal editors should consider retracting a publication if: …
In this system, an “expression of concern” casts doubt on an article and warns readers that its content raises some issues. In most cases, it describes information that has been given to the journal, which led it to alert its readers about an ongoing investigation, but it does not directly pronounce on the validity of the work. On the contrary, when it comes to a “correction”, it is always stated that the core validity of the original article remains, some parts of its content being lightly or extensively modified. In some cases, the transformations have been carried to such an extent (e.g. every figure has been changed) that some actors have ironically coined the term “mega-correction” to characterize them. Contrary to an expression of concern, the authors of the article are fully aware of these modifications and, even if they have not written it, do necessarily validate them before the publication of the so-called (mega)correction. If they don’t, journals sometimes publish editorial notes instead of corrections. Finally, a “retraction” aims to inform readers that the article’s validity, reliability, ethical background, or authorship no longer stands. Far from being an erasure, it is conceived of as the final step of the publishing record of the original article, as the notice of retraction “should be linked to the retracted article”. A retraction is either conducted in close collaboration with the authors, or against them upon the request of someone else who is explicitly named (e.g. a journal editor-in-chief, a colleague, a funding body…). Ten years later, COPE produced a second version3 of its guidelines, in which the grounds for retraction were expanded, for instance to the use of prohibited material or copyright infringement. Two motives are of particular interest:
It has been published solely on the basis of a compromised or manipulated peer review process
The author(s) failed to disclose a major competing interest (a.k.a. conflict of interest) that, in the view of the editor, would have unduly affected interpretations of the work or recommendations by editors and peer reviewers.
It is no longer only the conditions of production of articles or their content that are targeted, but the very processes of evaluation, which can be hijacked or simply distorted if the relationship of the authors to their object is not revealed. Not only can you not trust the content of the paper, you can no longer trust the process by which journals certify this content. You can only trust them when they certify that they have failed… and these new motives were quickly put to the test.
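Read as a data model, the COPE system attaches successive flags to an article without ever erasing it. A minimal sketch of that logic (names and dates are invented):

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum

class Flag(Enum):
    # The three COPE labels discussed above.
    EXPRESSION_OF_CONCERN = "expression of concern"  # alerts readers, no verdict on validity
    CORRECTION = "correction"                        # core validity stands
    RETRACTION = "retraction"                        # validity, reliability, ethics or authorship fails

@dataclass
class PublishedArticle:
    doi: str
    flags: list = field(default_factory=list)

    def add_flag(self, label: Flag, when: date) -> None:
        # Flags accumulate rather than replace one another: a retraction
        # notice stays linked to the article as the final step of its record.
        self.flags.append((label, when))

article = PublishedArticle(doi="10.1000/example")
article.add_flag(Flag.EXPRESSION_OF_CONCERN, date(2020, 6, 2))
article.add_flag(Flag.RETRACTION, date(2020, 6, 4))
print([(flag.value, when.isoformat()) for flag, when in article.flags])
```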
An epidemic of retractions? COVID-19 as a public discussion on papers’ status
A month after the publication of these guidelines, the COVID-19 epidemic began, with the adoption of open science as borders closed. We have already dealt with articles about the HCQ treatment and the Lancetgate that followed, i.e. an ultra-fast but complex case of retraction, which moreover recently led the commercial journal to change its peer review process. The editors of the Lancet group conclude their op-ed “Learning from a retraction” with unintended irony: “As trusted sources of information, the Lancet journals are committed to ensuring that our editorial processes will continue to be as robust as possible.” Who needs to learn from a failure if robustness has always been there?
This highly visible example points to another phase in the institutionalisation of the retraction object: its public debate, beyond academic circles. The existence of the Retraction Watch database has been acknowledged, and the commonness of retraction has become a public concern, for example in this Canadian article:
Similarly, a 2019 Leger poll for the Ontario Science Centre found 29 per cent of respondents said that because scientific theories are fluid, they can’t be trusted. What’s more important than the erosion in trust, says Caulfield, “is a polarization where people are gravitating toward conspiracy theories or messaging (including misinformation) that is trying to increase distrust because those messages either appeal to their ideological leanings or preconceived notions.” “My fear is if people don’t trust the good science, don’t trust science from these respected journals, it’s going to be increasingly difficult to fight misinformation because people aren’t going to trust the correction.”
Simultaneously, the same database led to discussions and even papers in certain scientific communities. Thus, some authors have calculated retraction rates for different topics, in order to assert that COVID-19 was leading not to one epidemic but two: one in human bodies and another in retractions.
The founders and employees of Retraction Watch gave a reply themselves in the same journal. Apart from technical remarks about the limitations of the corpus and the inclusion of preprints, the main explanation offered by these respondents is the speed with which the journals intervened: where it usually takes years to produce a retraction, here it took days or weeks.
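The disputed calculation is simple; the disagreement is about what the numbers mean. A naive sketch of the retraction-rate comparison, with invented figures, and the Retraction Watch caveat reduced to one variable – the observation window over which flags could accumulate:

```python
# All figures are invented, for illustration only.
corpora = {
    "covid-19":     {"publications": 40_000,    "retractions": 30,  "window_days": 180},
    "other topics": {"publications": 1_200_000, "retractions": 500, "window_days": 3650},
}

for topic, c in corpora.items():
    rate = c["retractions"] / c["publications"]
    # Rates accumulated over a 6-month window and over a 10-year window are
    # not directly comparable when journals now retract in days, not years.
    print(f"{topic}: {rate:.2e} retractions per paper over {c['window_days']} days")
```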
The institutionalisation of retractions, combined with the focus and urgency of the COVID-19 epidemic, therefore leads to seemingly virtuous behaviour, as journals no longer drag their feet in admitting problems and even communicate widely about retractions, no longer ashamed but proud of their professionalism, as The Lancet group journals did – at the risk of giving articles, and scientific discourse more generally, a perfume of permanent reversibility, far from the idea of the incremental self-correction of science.
Yesterday’s truth is today’s ignorance. Living in a post-truth academic world
Far away from the COVID-19 epidemic urgency, what happens to flagged papers over time? Beyond knee-jerk reactions, corrections can later themselves be corrected, retractions can be “unretracted”, an expression of concern can itself be retracted after 15 years, and some have proposed that “good faith” retractions could be combined with the publication of “replacement” papers4, while the other ones would be permanent. Besides, there is life after death for scientific publications: retracted papers are still cited, and most of their citations take no notice of their “zombie” status5.
Instead of incorrectly equating the prevalence of retractions with that of misconduct, some consider the proliferation of flagged articles as a positive trend6. In this vision, the very concrete effects of post-publication peer review reinforce scientific facts already built through peer review, publication and citation. Symmetrically, as every published article is potentially correctable or retractable, any scientific information rhymes with uncertainty. The visibility given to these flags and policies undermines the very basic components of the economy of science: how long can we collectively trust peer review and consider that peer-reviewed knowledge should be the anchor to face a “post-truth” world?
Wager, E., Barbour, V., Yentis, S., & Kleinert, S., on behalf of COPE Council (2010). Retractions: Guidance from the Committee on Publication Ethics (COPE) [↩]
Disclaimer: this post does not address the merits of the treatments proposed by the IHU team nor their risks, and still less whether Prof. Raoult is a genius, a madman, or a top scientist who got lost along the way.
It all started with a video, posted on February 25th, 2020, then entitled “Covid-19: endgame”, and put on YouTube by IHU Méditerranée-Infection. In that video clip of less than two minutes, extracted from the end of a seminar, Didier Raoult states that COVID-19 is “probably the easiest respiratory infection to treat” and that chloroquine (CQ) is effective and already “recommended for all clinically positive cases” in China. It wasn’t the first time this infectious disease star had recommended CQ and its cousin molecule, hydroxychloroquine (HCQ), to fight viral infections. Indeed, as early as 2007, he presented these drugs as “an interesting weapon to face present and future infectious diseases worldwide” in the International Journal of Antimicrobial Agents (IJAA). Framed as a recycling of these antimalarial drugs, the article constituted a literature review, mainly of in vitro studies, and was part of the scientific and medical strategy of the IHU: the repositioning of old molecules, free of rights, towards new uses. And this possibility of reuse was taken up in a letter sent on February 11th, 2020 to the same journal (IJAA), accepted the same day and published on February 15th.
The series of IJAA publications continued. The day after the YouTube video, a new article was submitted, specifically dedicated to the use of CQ as a treatment for the COVID-19 epidemic. Accepted the next day, February 27th, and published a week later, it repeated the claims of efficacy observed by Chinese teams and the resulting clinical recommendations. This assertion rests in particular on one of the strangest references I have ever encountered: a letter of exactly ten lines published in BioScience Trends, whose body is copied below:
The coronavirus disease 2019 (COVID-19) virus is spreading rapidly, and scientists are endeavoring to discover drugs for its efficacious treatment in China. Chloroquine phosphate, an old drug for treatment of malaria, is shown to have apparent efficacy and acceptable safety against COVID-19 associated pneumonia in multicenter clinical trials conducted in China. The drug is recommended to be included in the next version of the Guidelines for the Prevention, Diagnosis, and Treatment of Pneumonia Caused by COVID-19 issued by the National Health Commission of the People’s Republic of China for treatment of COVID-19 infection in larger populations in the future.
Defined as an “abstract” on the journal site, but without any other body of text, this “article” doesn’t seem to be fully supported by the 7 references listed. It relies mainly on an in vitro study from early February, already widely cited, which indicates that CQ could be effective. In fact, it wasn’t until February 29th that the results of a CQ clinical study were submitted to a Chinese journal, before being published on March 6th. But let’s go back to the IHU timeline.
Ten days later, a second video was posted on YouTube, presenting the results of an observational study made in Marseille and showing the effects of HCQ alone and in combination with an antibiotic, azithromycin (AZ). So there was a slight shift: going from CQ to HCQ and adding an antibiotic. The main result was only the absence of virus in the nose and throat – not a clinical result – but Didier Raoult drew on it to tell his audience its consequences for the clinical institution he manages:
“The fact that you no longer have the virus changes the prognosis. Actually, that’s what infectious diseases are all about. If you don’t have the germ anymore, you’re saved… You have a right to be tested here, and if you’re tested, you have a right to be treated here. That is what we will do.”
So basically, for him, the results were so good that you HAD to treat people when they tested positive. No more trials or research needed: the time for clinical medicine had come, in the hope that other places would follow his lead. Slides were available on the same webpage, but there was no link to an existing paper; the same day, though it was not mentioned in the video, a preprint was submitted to medRxiv. Simultaneously, as is often the case with biomedical preprints, it was submitted to a journal… the ever-welcoming IJAA, which accepted it, as usual, one day later and published it on March 20th. Before we come to the extraordinary fate of this paper, let us go back to the title of this post and its interest at this point.
From preprints to preprints: the life and death of the Ingelfinger rule
We can observe from the two examples above a pattern of scientific communication: the IHU first posts videos, then produces preprints, and finally publishes articles in academic journals – here the IJAA. This is very unusual, at least in contemporary times, but it happened in various ways during centuries of scholarly communication. The idea that you first had to communicate with your peers through a journal before reaching “the public” has been neither constant nor dominant in all disciplines. In our era, it was pushed at a key moment in the mid-1960s. Back then, a first wave of preprints was being supported by the NIH and was gaining momentum in some biomedical communities through Information Exchange Groups (IEGs) that circulated printed copies of unpublished manuscripts by air mail1. Nature started a campaign against the “preprint galore”, and a few European and US biology and biochemistry journal editors-in-chief met in Vienna in 1966 to get rid of them by stating that: “The journals listed below will not consider manuscripts for publication if preprints, of essentially identical content, are to be distributed, in substantial numbers, by an agency independent of the author or of the publisher of the journal.”2
That led to the termination of the IEG experiment by the NIH in 1967. Two years later, the New England Journal of Medicine (NEJM) editor-in-chief, Franz J. Ingelfinger, coined the rule of acceptance of a paper, based on his interpretation of “sole contribution”, de facto forbidding even “circulation-controlled journals” to print something ahead of the NEJM3. In the same sentence, he remarkably included “news media”: he therefore aimed not only at the exclusive circulation of the article within scientific communities, but also at the prohibition of dissemination of its content to journalists and other medical news enthusiasts. In the early 1970s, his work to promote this exclusivity had a double effect: the practice was given the name Ingelfinger Rule, and many high-profile journals adopted it explicitly. While at the beginning of the 21st century the Ingelfinger Rule was often interpreted as a means to fight the duplication of papers, its aims were more about controlling the circulation of knowledge in order to protect the newsworthiness of “general medical journals”4 and to organize communication about medical academic papers in a specific way, favorable to a limited number of journals.
Indeed, as Vincent Kiernan beautifully described in his 1997 article5, the Ingelfinger Rule had become prevalent in Anglo-American journals. It was in particular the efforts of the International Committee of Medical Journal Editors (ICMJE) that built it into a “publishing standard”, whose effect was to let these journals and their editors-in-chief simultaneously operate a double control:
control over authors, by requiring them not to reveal the content of their articles, still less to share the figures and other synthetic representations of results;
control over journalists, by providing them with preprint copies of articles in advance while imposing an embargo on them until actual publication by the journal.
As a result, the general press advertises (free of charge) the content of the journals – it is not an article by Dr. X & Y, but an article from the NEJM or The Lancet – and organizes the dissemination of “medical discoveries” by strengthening the influence of these journals within academic communities, among press professionals, and with the general public. To conclude his paper, Kiernan questions the durability of such practices in the Internet era and points out the effect of arXiv preprints, citing the efforts of the ICMJE to extend the Ingelfinger rule to e-prints, with the argument of the direct consequences of biased or false medical knowledge for the public.
The biomedical field resisted preprints for 15 more years, and the Ingelfinger Rule largely stood6, even if it was adapted to emergency contexts such as the AIDS epidemic. But Kiernan’s forecast became reality, notably with the creation of bioRxiv in 2013 and the subsequent success of preprints in biology and biomedicine, to the point where preprints became quasi-articles. Consequently, the Ingelfinger rule was dropped by numerous journals and publishers, even if the NEJM itself keeps a case-by-case policy.
Prof. Raoult and his videos, possibly including slides with the figures so dear to the NEJM, thus live in a post-Ingelfinger world, in which academics can directly manage their own communication – not only its content, but also its comments, criticism, reporting and responses. Indeed, we will see that primary communication is not the only thing modified by the abandonment of this rule, but the whole organization of the journal’s centrality in the chain of scientific communication.
Chaos and creation around one paper
Let us go back to this first publication by Raoult’s team on the effects of HCQ on viral carriage, published in the IJAA on March 20th, 2020. At the time of writing, the article has received 1124 citations according to Google Scholar, but also thousands of tweets, blog posts and other references in press articles according to PlumX, a company owned by Elsevier, itself the IJAA publisher. The early circulation of the article was based not on an IJAA press release, but on Raoult’s own video and on his various networks. As Wired recounts, with the help of a lawyer, a retired doctor, a shared Google doc and an interview on Fox News – a heterogeneous assemblage à la Bruno Latour – the study published in the IJAA earned a quote in a tweet from the President of the United States the day after its publication:
That Trump endorsement of course had enormous consequences for the HCQ market, the launching of clinical trials, HCQ self-medication practices, and the scope of public discussion on the efficacy and dangers of such a treatment. We won’t directly treat these important questions here, but keep on following the exotic trajectory of the publication itself. Simultaneously with the Trump tweet, a PubPeer thread was launched on the famous post-publication comment platform; but contrary to the Voinnet affair7, most of the first commentators signed their critiques. Among other topics, the communication trajectory of the paper helped the critique: for example, Leonid Schneider noticed the discrepancies between the figures attached to the video and the ones drawn in the published paper.
Above and beyond PubPeer, three reviews were quickly published, questioning many aspects of the IJAA paper. The first was a Twitter thread by a master’s student on March 22nd; the second was an 18-page Zenodo paper by three British and Irish statisticians on March 23rd; the third was a blog post by Elisabeth Bik, a very famous Dutch microbiologist and scientific misconduct specialist, on March 24th. So only four days after publication – still four times the actual IJAA reviewing delay – the paper was being trounced online. Among the many points, let us note that the publishing history was questioned: some noticed the differences between the first “preprint” on the IHU website and the final paper, others underlined the lack of changes, for them a hint of how tenuous the peer review process had been, the 24-hour delay surprising every commentator. The fact that one of the authors was also the editor-in-chief of the IJAA was underlined, as well as the “vanishing” of 6 patients (among 26 treated with the combined drugs), which could completely change the statistical value of the results.
While Prof. Raoult was fighting for HCQ to be authorized for general physicians in France, the online discussion kept going until the learned society behind the journal, the International Society of Antimicrobial Chemotherapy (ISAC), made a troubling press release on April 3rd:
“ISAC shares the concerns regarding the above article published recently in the International Journal of Antimicrobial Agents (IJAA). The ISAC Board believes the article does not meet the Society’s expected standard, especially relating to the lack of better explanations of the inclusion criteria and the triage of patients to ensure patient safety. Despite some suggestions online as to the reliability of the article’s peer review process, the process did adhere to the industry’s peer review rules. Given his role as Editor in Chief of this journal, Jean-Marc Rolain had no involvement in the peer review of the manuscript and has no access to information regarding its peer review. Full responsibility for the manuscript’s peer review process was delegated to an Associate Editor. Although ISAC recognises it is important to help the scientific community by publishing new data fast, this cannot be at the cost of reducing scientific scrutiny and best practices. Both Editors in Chief of our journals (IJAA and Journal of Global Antimicrobial Resistance) are in full agreement.”
So the paper had a lot of problems, but it stuck to the peer review rules. This cryptic PR became even more troubling a week later, when it was “replaced” by a joint ISAC and Elsevier press release. In fact, the journal is not owned by the learned society but by the publisher, being only an “official society journal”. This second PR is streamlined compared to the first: the “not meeting standard” sentence has disappeared, replaced by the announcement of a post-publication peer review audit. Through this example, we can measure how different the situation is from what prevailed under the Ingelfinger Rule. But it is with another paper from Raoult’s team that science communication went back to its 17th-century roots.
From presidential visit to media frenzy: the marginalization of journals in scholarly communication
After a follow-up study published at the end of March, which made fewer headlines, and as some HCQ trials on diverse patient groups were starting to be published, it was with another observational study that Prof. Raoult showed the world how he really manages scholarly communication. On April 9th, the French president, Emmanuel Macron, unexpectedly visited the IHU Méditerranée and met with Prof. Raoult, who presented him with the results of his ongoing study. There was no press, but members of the IHU had recorded Macron’s arrival and posted it, making it available to the whole French media.
Here we need to go back to the origins of scientific communication, even before journals were born, when the quality of witnesses – mostly meaning their royal or noble kinship – was an important element of the credit given to the narrative of an experiment or an observation8. In our times, it became a two-way flux of credit: Macron was showing his will to base public health on evidence, all the better if provided by a star scientist, while Raoult was legitimizing his position in the French public health landscape, where critics of his methods and results were numerous.
The next day, Raoult made public his first results, not in the form of a preprint or slides with an associated video, but as a simple tweet with the abstract and a summary table.
This tweet was of course massively picked up and commented on, and it aroused strong media interest, all the more so as the results reinforced those of the previous study by moving from a purely biological effect to a clinical one: “The HCQ-AZ combination, when started immediately after diagnosis, is a safe and efficient treatment for COVID-19, with a mortality rate of 0.5%, in elderly patients. It avoids worsening and clears virus persistence and contagiosity in most cases.” Four days later, Prof. Raoult was invited on the Dr. Oz show, whose host is famous in the US and harshly criticized for his often unproven medical advice.
On the day of the interview, there was no preprint, and the paper had not even been submitted to a journal. Yet Prof. Raoult presented his results as facts. It was only on the 20th that the manuscript was sent to Travel Medicine and Infectious Disease9, with 10 days for peer review and publication on May 5th. Tens of thousands of Facebook comments and tweets followed according to PlumX10, though the media endorsed the results as much as they reported the methodological limits of the study – mostly the absence of a control group.
This study is undoubtedly a borderline case in the marginalization of journals, with communication aimed primarily at peers being out of step with announcements to political leaders and media outlets. Nevertheless, the massive availability of preprints, abstracts and other materials on topics such as the effectiveness of masks or tests, the persistence of coronavirus on this or that surface, or cases of cure, has led to significant media coverage. From the point of view of the public authorities and the general public, it could have strengthened the authority of academic journals, again in a position to assert their necessity as an obligatory passage point for public dissemination. But this return to grace assumed that journal peer review is an effective barrier against “bad science”, a hypothesis dismissed by thirty years of studies and literature.
Prestige journals in epidemic times: an economy of reputation crumbling down?
Indeed, prestige journals are bad for methodology: they don’t follow their own standards on reporting clinical trials, nor, more generally, disciplinary standards. Yet they remain prized places to publish, even during a pandemic in which preprints are so trendy because of the urgency to share results and knowledge. And some HCQ papers were quietly published in such journals, until one observational study seemed to close the debate on this treatment’s efficacy and risks.
For this study, there was no advance communication and no preprint, but a straight article published in The Lancet by 4 authors. Oh, yes, there is a little gem still there on Twitter: two days before online publication, the “first author” answered a tweet by Richard Horton, editor-in-chief of The Lancet:
The reaffirmation of their confidence in the journal peer review system, even in times of health emergency, is comforting. And their trust is shared by the highest health authorities. On May 22nd, the study was published; it asserted, on the basis of a gigantic aggregation of almost worldwide patient databases, that HCQ is not only inefficient but also very dangerous for COVID-19 patients. This announcement came at a time when many ongoing trials had HCQ treatment arms. As a result, the WHO decided the next day to evaluate the continuation of its Solidarity study and announced its position on May 25th:
“Having met on 23 May 2020, the Executive Group of the Solidarity Trial decided to implement a temporary pause of the hydroxychloroquine arm of the trial, because of concerns raised about the safety of the drug. This decision was taken as a precaution while the safety data were reviewed by the Data Safety and Monitoring Committee of the Solidarity Trial.”
Nevertheless, in a manner similar to Prof. Raoult’s article, statisticians then looked at the content of the article and the data it provides, and began to point out obvious errors. But for some it was more a police investigation than a data re-analysis: how can there be only 4 authors (and no acknowledgements) for such a study? Why are the hospitals involved not mentioned? What is this mysterious enterprise – Surgisphere – unknown until recently, which provides these data? What is the career of its manager and co-author of the paper? Setting aside questions about the company, 6 days after publication they ended up writing an open letter to the authors and the journal, signed by 201 colleagues and endorsed by James Watson11. They mainly point out the necessity of opening the data, all the more so given the extraordinary results, and describe obvious errors, questioning the quality of the database and the way (including the ethics) the data was gathered.
The Lancet and the authors were very prompt in responding to these criticisms: on May 30th a correction was published, covering very minor aspects: “the numbers of participants from Asia and Australia should have been 8101 (8·4%) and 63 (0·1%), respectively. One hospital self-designated as belonging to the Australasia continental designation should have been assigned to the Asian continental designation.” Of course, the conclusion was a classic of such corrections: “There have been no changes to the findings of the paper.” But critics kept pushing on the problems, whether HCQ supporters – Prof. Raoult himself spoke of “fake data” and “manipulated data” on Twitter – or clinicians trying to reconcile the paper’s data with their own. So, only 3 days after the correction, The Lancet put an expression of concern on the paper:
“Although an independent audit of the provenance and validity of the data has been commissioned by the authors not affiliated with Surgisphere and is ongoing, with results expected very shortly, we are issuing an Expression of Concern to alert readers to the fact that serious scientific questions have been brought to our attention”.
The paper was still salvageable, thanks to the impending independent audit. Alas, another 2 days and the 3 authors not affiliated with Surgisphere threw in the towel, stating that they had never seen the data and demanding the retraction of the article. The Lancet made it official, provoking expressions of outrage, questions about the seriousness of the journal and… the reactivation of the suspended trials. Thus, in less than a week, a worldwide study published in what many consider to be “one of the best medical journals in the world” was awarded the 3 labels commonly used in post-publication peer review – Correction, Expression of Concern, Retraction12 – nullifying the evidence claimed on May 22nd. But the Surgisphere story goes beyond that article: another paper, published by the NEJM on the “same kind of data”, was retracted the same day. Moreover, there are at least two regions – South America and Africa – which have suffered and will suffer from public health policies developed on the basis of preprints and data published by Surgisphere. While #LancetGate was trending on Twitter, in-depth inquiries were being made into Surgisphere and the 4th author of the study who, ironically, had coauthored a paper entitled “Combating Fraud in Medical Research” in 2013!
Science at its best: boring, negative results
To conclude this story on scholarly communication, we have to add that most HCQ articles were given neither the same media treatment nor fancy communication by their authors: a preprint on bioRxiv or medRxiv, then an article, often with unspectacular results and limitations due to the number of patients, their previous health conditions, incomparability between groups, etc. One day before the retractions, the same NEJM published the first randomized controlled trial on post-exposure use of HCQ, very close to the “Raoult treatment” – AZ not being included. Here is part of the published abstract: “Side effects were more common with hydroxychloroquine than with placebo (40.1% vs. 16.8%), but no serious adverse reactions were reported. After high-risk or moderate-risk exposure to Covid-19, hydroxychloroquine did not prevent illness compatible with Covid-19 or confirmed infection when used as postexposure prophylaxis within 4 days after exposure.”
What do we get from this abstract? That the article is a typical example of those “negative results” that usually fail to be published, leading to significant biases in the evaluation of treatments in clinical trials through “publication bias”13. And yet, not because of its own interest, originality or breakthrough knowledge, but because of its relevance to public health in an epidemic situation, this trial was published by the other “world’s best medical journal”.
While predictions of “really bad science to come” have rung true for most commenters, supported by a high number of retractions, the COVID-19 academic publication landscape has also shown a massive uptake of preprints, public education on scientific controversies, conflicts of interest and statistical analysis, and furthermore… yes, the publication of null results in prestige journals. Whether you think this is a total mess and you preferred the Ingelfinger rule depends on the way you conceive of academic research and scholarly communication. Back then, preprints were non-existent in biology and social networks had yet to be invented, but The Lancet published the Wakefield paper on the link between the MMR vaccine and autism. Was it a better time?
See the classic book Shapin, S., & Schaffer, S. (1985). Leviathan and the air-pump: Hobbes, Boyle, and the experimental life (Vol. 109). Princeton University Press [↩]
A journal in which one of the authors is an associate editor, as Raoult’s critics have underlined [↩]
The story is quite different within the academic world, with “only” 21 citations so far, far less than the March study. In fact, many observational studies and trials were competing with this study [↩]
EDIT June 9th: James Watson gave a fantastic interview on an Australian radio station where he goes into detail about how he started and ran this 5-day inquiry; hear it there [↩]
There is a huge literature on this topic from the last 30 years; see as an example this article in The Lancet: Easterbrook, P. J., Gopalan, R., Berlin, J. A., & Matthews, D. R. (1991). Publication bias in clinical research. The Lancet, 337(8746), 867-872. [↩]
The forms of communication in academic communities are very diverse: articles, seminars, books, colloquia, mailing lists, posters, letters, workshops, proceedings… The reasons why each one is chosen are multiple, and the formats live lives of their own through new uses, far beyond the initial intentions of their creators. As we will see, preprints, though they have a relatively short history, have followed complex patterns to become something more than shared documents.
It is first necessary to agree on the designation of these entities: working papers, discussion papers, e-prints and preprints will be considered in this post as equivalents. All are written texts, produced by authors without any form of certification, and made available without any paywall at a perennial web address. Contrary to Wikipedia, we don’t distinguish them on the basis of their future publication in a journal. We will also ignore the issue of their licensing, for two reasons: historically, preprints existed long before the release of CC licenses – and many of them continue to be unlicensed; pragmatically, because our focus here is on the use of preprints, not their re-use.
Before scholarly communication went electronic, some disciplines had already experimented with preprints, notably psychology and the biomedical sciences. This meant paper manuscripts circulated by mail, with quite high associated material costs (reproduction, stamps). Cost was not the primary reason certain practices ceased: in biomedicine, publishers were vigilant, and their editor-in-chief allies declared a ban on the publication of manuscripts that had already circulated. On the other hand, physics, especially high-energy physics, pioneered these practices and continued to generate these mail flows before transferring them to the electronic world in the 1970s. Using the compactness of the TeX format, these preprints started to be distributed by email, and then Paul Ginsparg had the idea of building an automatic BBS, basically inventing arXiv.
E-print servers as competitors of journals?
Until then, in all disciplines, usage had essentially been the same: to facilitate the consideration and discussion of recent research and results, by circumventing the obstacle of publication delays in journals. Admittedly, a large number of conferences had adopted the practice of proceedings, thus allowing a reduction in this delay, but these remained very largely attached to the world of paper printing. Following the success of arXiv, several e-print services were launched in the mid-1990s, and Stevan Harnad predicted their pre-eminence over journals as the central venue for distribution:
“the best people start putting stuff and readers start saying: ‘Why wait for the journal to come out? I have to teach this stuff, I have to know this stuff, I can get it to the archive’ and then the libraries come around and say ‘should we order this journal?’ and the scientist says ‘I don’t care, I no longer read in paper’.”
It seems obvious that this prediction did not come true, far from it: the 2000s saw a world divided between a few disciplines massively practicing preprinting (physics, mathematics, computer science, economics) and the rest of the academic world superbly ignoring it. Nevertheless, preprint uses – on both the authors’ and the readers’ side – started to compete with those of journals. In a peer review fashion, the “raw” circulation of a manuscript for discussion regularly produced new versions of a preprint. On arXiv, more than a third of preprints exist in 2 versions, and more than 10% exist in 3 versions or even more: Hirsch’s famous manuscript inventing the h-index had 5 versions, 4 of them before submission to PNAS. On the readers’ side, researchers soon started to cite not only published papers but also preprints – then often called e-prints – on a massive scale.
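The version figures above come from exactly this kind of tally over arXiv metadata; a minimal sketch with an invented list of per-preprint version counts:

```python
from collections import Counter

# Hypothetical data: the number of posted versions (v1, v2, ...) per preprint.
versions_per_preprint = [1, 2, 1, 3, 2, 1, 2, 5, 1, 2]

dist = Counter(versions_per_preprint)
total = len(versions_per_preprint)
share_2_plus = sum(n for v, n in dist.items() if v >= 2) / total
share_3_plus = sum(n for v, n in dist.items() if v >= 3) / total
print(f"share with at least 2 versions: {share_2_plus:.0%}")  # the 'more than a third'
print(f"share with at least 3 versions: {share_3_plus:.0%}")  # the 'more than 10%'
```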
These new reading and referencing practices have fed a vast literature on the citation advantage of open access articles over those available only through subscriptions and paywalls1. Beyond this possible advantage – monetized by big publishers for their hybrid journals in a commercial version of open access – these practices shed light on the change in status from simple “manuscripts” to texts integrated into the published literature. To get them completely out of their grey-literature status, Paul Ginsparg proposed as early as 1996 to add overlaid information to preprint servers, which led on the one hand to the creation of overlay journals proper, and on the other hand to various recommendation devices for preprints, among other texts.
The accelerated life cycle of preprints
The “standardization” of preprints through citation or certification is not the only notable development. The recent disciplinary extension of preprint servers, often described as a second wave2, is also significant and has consequences for their uses. Let us take the example of the life sciences, with the development of bioRxiv, a platform launched in 2013 which published 30,000 preprints in 2019 alone.
From the video put online at the time of the platform’s inauguration, we will retain two elements: speed and discussion. If high-energy physicists – because of the weight of their infrastructures, work organization and authorship practices – are used to living in a world with little publishing competition3, this is not the case for many computer scientists already publishing on arXiv, especially in the artificial intelligence branch. There, flag-planting to establish priority and (theoretically) gain scientific credit has been a common operation on arXiv, the server’s timestamp acting as a certification of the order of arrival. If this speed also matters in the life sciences to avoid getting scooped, it should equally be considered against the slowness of journals: speed of publication has often been an argument for different outputs, and the tension between rapid dissemination and quality of certification is at the heart of the history of journal peer review4.
For life scientists, and especially early career researchers on short-term contracts, speed is less a question of priority than of simply seeing their results disseminated, in order to build some credit for their next assignment. Until preprints, no publications meant no credit. Now they have at least something, especially since some organizations have recognized preprints as legitimate outputs in CVs for grant applications. Of course, they still need publication in journals, which leads us to the role of discussions. As we have seen in the case of arXiv, discussions often feed a release cycle in the form of new preprints. In the life sciences, this is apparently much less the case: a recent study by Kent Anderson5 shows that the majority of preprints were posted after they were submitted to a journal, so the “discussion”, rather than feedback from readers of the preprint, takes the form of peer review within a given journal.
From speed to emergency: preprints can be retracted too
At this point, we need to address the question of the targeted audiences of preprint servers: if they initially hosted purely intra-academic exchanges, things have changed with the popularity of social networks. Indeed, the Knowledge Exchange report cited above highlighted the crucial role played by Twitter in the dissemination of preprints by their authors or by the platforms themselves. This dissemination to fringe and non-academic audiences has several consequences, such as the reuse of preprints by marginalised communities or communities with minority knowledge and beliefs. This is also the case for links to blogs included in arXiv trackbacks, for which it is very difficult to reach a consensus on the “serious” or “eccentric” character of a website6. If Anderson concluded that the promise of discussion was not kept within the platform in the case of bioRxiv, that doesn’t necessarily mean discussion is limited to journal peer review, as an unexpected event has just shown us.
In fact, the 2019-nCoV coronavirus has been a test for bioRxiv as it came to the forefront of scientific information. Since the 2003 SARS virus, the international health community, strongly pushed by the WHO, has seemed to favor data and information sharing over scientific credit or patents. In recent epidemics, even the paywalls of big publishers have been opened in order to maximise the sharing of existing knowledge. Now that bioRxiv is strongly established, it is the easiest legal way to combine sharing, speed and some credit coming from priority7. And indeed, the preprint server has been flooded with coronavirus papers.
This new disclaimer – which specifies, in the current case, a general policy stated at the top of each preprint – emphasizes a potential audience of preprints: the media. For a long time, the majority of senior life scientists have feared that uncertified preprints would be taken for granted and that a flow of “bad science” would reach lay audiences. And their strongest fears apparently came true, as an article suggesting the artificial nature of the current virus quickly fed conspiracy sites and streams, “proving” that the epidemic could only be, at the very least, the result of a failed experiment. But the publicity of a preprint is more ambiguous: as its links spread, it was severely criticized, in a very well-argued way, by colleagues. Moreover, bioRxiv is one of the few preprint servers with a comment feature attached to the preprints it hosts. And this paper received a lot of comments! So much so that the preprint was retracted less than 2 days after its publication – or, more exactly, the authors withdrew it following all these comments, whereas previously the retraction of a preprint was envisioned only in the case where its published heir had endured this exact fate.
Interpretations of this ultra-fast life cycle are, of course, contrasted: the creators of Retraction Watch see it as a victory for science in preprint mode, while Kent Anderson and others consider that such an article would never have appeared in a top-level journal. But the outcome of this debate on the quality of journals vs. preprint servers should not obscure the profound transformations of preprints. The Harnad vision began to come into reality more than 20 years later, but in a twisted way. While preprint servers didn’t replace journals, preprints have become quasi-articles: they are used to claim priority, carry DOIs, generate some scientific credit, are read and cited, change through at least informal discussion processes, appear on CVs, are archived, and generate media interest. And even if by name they are pre-publications, they are now subjected to the most stringent post-publication peer review decision.
This literature is so vast and contradictory that Ben Wagner has made an annotated bibliography of it [↩]
In her groundbreaking 1988 book, Sharon Traweek stated that publications were not important for them, as they were only archives, record-keeping of the things that really matter [↩]
Anderson, Kent. “bioRxiv: Trends and analysis of five years of preprints.” Learned Publishing (2019). [↩]
See Ritson, Sophie. “‘Crackpots’ and ‘active researchers’: The controversy over links between arXiv and the scientific blogosphere.” Social Studies of Science 46.4 (2016): 607-628. [↩]
This blog is part of a vast research program on the political economy of scientific publication, which has been strongly transformed over the last twenty years by the electronic dissemination of journals. It considers publishers, editorial committees and journals as socio-political actors to be studied in three complementary aspects detailed below.
Firstly, they are analysed as economic actors defining publishing markets. The conditions under which these markets were created have been the subject of much criticism, and strong transnational mobilisations around open access have been deployed, influencing the construction of public policies that contrast internationally. New economic models have emerged, of which direct payment by authors (APCs) is only the most visible, but not the most frequent. The multiplication of coloured labels (Green, Gold, Platinum, Bronze, Diamond) to designate these models does not fully account for their subtle differences, nor for the sustainability of the associated business models, compared to the classic subscription model, which has produced a “serials crisis” over the last 20 years, with a massive increase in the cost of access to publications for libraries.
Secondly, journals and publishers are studied as places of production, including innovations in evaluation technologies (open peer review, technical-soundness-based review…). In particular, the growing debate on post-publication peer review policies, including the withdrawal of articles, will be examined, as well as the emergence of platforms for the public discussion of their validity, such as PubPeer. The question of the centrality of journals for peer review, or their marginalization (overlay journals, recommendations…), will also be addressed.
Thirdly, journals are treated as places of valorisation, seeking to attract authors and promote their position through the use of different measures (citation, referencing, uses…), which they highlight or criticise. In addition to the recurring debates on the Journal Impact Factor, a measure that is currently much decried, there will be discussions on alternative metrics, or even on responsible metrics, which are supposed to better represent academic production and its uses.
These three aspects aim in particular at shedding light on new forms of self-regulation by academic actors (systematisation of publicity for the withdrawal of articles, generalisation of post-publication peer review, stigmatisation of predatory publishers, uses of Creative Commons licenses…), the innovative and argumentative work of publishers and platforms, whether public, para-public or private, and the redefinition of public policies in the field of academic publication.