Elsevier is a felon, that is a given. This company epitomizes all the crimes, misdemeanors and petty thefts that can be accomplished by a publisher. Its Wikipedia page is so full of affairs, scandals and lurid stories that reading it would be enough to fill a talk at an academic congress. And yet Elsevier still finds new ways to extract value from academic communities, which produces both new profits and new critiques. This post is the story of the publisher becoming a data company.1
An endless list of academic misdemeanors
It all began in 1880 with a “borrowing”, as we say in academic life, which may also be called an homage, a plagiarism or a theft, depending on the point of view. When the company was founded, it took over the logo of a famous ancestral Dutch printing family, whose name was Elzevier (yes, with a Z).
As a Dutch publisher, the company first put out journals in the language of the country, but the need to go into exile in England in 1940, and no doubt a rather specific vision of scholarly communication, led it to launch English-language journals in the post-war period. Along with Pergamon, Elsevier is certainly the inventor of the concept of the ‘international journal’; it created a global market for scientific writings, with customers all over the world and, what is more, a profitable one. This led to cycles of development and acquisitions, which continue to this day. But as we know from the world of superheroes, “with great power comes great responsibility”.
And indeed they are responsible. The list of “problems” attributed to Elsevier can be sorted into three different groups. Firstly, a propensity to act in a “sloppy and dirty” manner, for example through copyediting failures, by selling closed articles for which authors have already paid an APC, or by failing to act on legitimate requests to retract articles, as in the following very recent example.
— Retraction Watch (@RetractionWatch) July 6, 2022
Secondly, its constant pursuit of profit leads it to bend academic rules. Above and beyond offering researchers Amazon vouchers to write reviews on products, one of the most famous examples is the publication of journals in Australia that were de facto advocacy media for Merck pharmaceutical products, through a subsidiary that is cited by Sergio Sismondo as an example of ghost management2.
Third, its concern for protecting its intellectual property leads it to numerous actions opposing open access, unlimited text and data mining, or even metadata sharing. Elsevier thus funds numerous lobbying actions, and one aimed at the US Congress led to the “Cost of Knowledge” petition in 2012. This petition called for a boycott: to stop writing, reviewing or doing editorial work for the company. It was signed by tens of thousands of academics and led to some mocking of the Elsevier logo.
To sum up, if the children of Bruno Latour had been STS PhD students in 2012, they would probably have authored a paper entitled “Portrait of a publisher as a wild capitalist”. But that wouldn’t have predicted what happened next.
From academic publisher to data company: a very public transition
In fact, Elsevier continued to thrive as a publisher despite the tens of thousands of petitioners. But the company has changed its core business and significantly expanded its range of services, to the point of no longer appearing as a publisher. Take two exemplary acquisitions: in 2013, Elsevier purchased Mendeley, a reference management service, and for some, it was as if the Empire had bought the rebels.
Elsevier had two objectives: on the one hand, to extend its information retrieval ecosystem, and on the other, to collect data on Mendeley users, potentially authors and reviewers. These same objectives were reflected in the acquisition two years later of SSRN, a preprint platform then specialising in the social sciences.
“Elsevier is now getting closer and closer to researchers with business models that don’t involve libraries,” says Joe Esposito, a publishing consultant in New York City. “The positioning is well thought out: lock up revenues to the legacy publishing business, move into areas where piracy is not much of an issue, create deeper relationships with researchers and become more and more essential to researchers even as librarians become less so.”3.
This series of purchases aims to control the building blocks directly used by researchers, so that their research projects, results, research data, and the texts they read, cite, review or tweet are all linked, with Elsevier able to identify the people behind them. But as the comprehensive 2019 diagram below shows, researchers are not the only target of the “new Elsevier”.
Chen, G., Posada, A., & Chan, L. (2019). Vertical Integration in Academic Publishing: Implications for Knowledge Inequality. In Chan, L., & Mounier, P. (Eds.), Connecting the Knowledge Commons — From Projects to Sustainable Infrastructure: The 22nd International Conference on Electronic Publishing – Revised Selected Papers. Marseille: OpenEdition Press.
In fact, Elsevier’s other target market is higher education and research institutions, and even governmental institutions. The enclosure of the Elsevier ecosystem has, for example, guaranteed the company a position as a subcontractor in the construction of the first European open access monitor, which open access activists have denounced as scandalous4. And when a consortium of Dutch universities signed a transformative agreement with the publisher in 2019, it included the joint development of projects involving all kinds of data: a Faustian pact with Lucifer, in which open science comes to mean sustaining Elsevier’s data infrastructure in exchange for open access papers5
In a decade, Elsevier has become a data company, selling data to numerous clients, both academic and non-academic, and defining itself in corporate documents as “a global leader in information and analytics, (which) helps researchers and healthcare professionals advance science and improve health outcomes for the benefit of society”, while making videos about the perfect world of information it designs with a product called Pure.
Naming a new supervillain: surveillance publishing
From 2019 onwards, this transformation of Elsevier and, to a lesser degree, of the other big publishers was a wake-up call for various institutions and authors. They started to formalise the list of new dangers posed by the construction of data-tracking and information-aggregation systems, some specific to the academic world and others similar to those created by GAFAM-like companies. For example, a commission of the DFG published a briefing paper raising the alarm that such systems could6:
entail a violation of academic freedom and the freedom of research and teaching;
constitute a violation of the right to the protection of personal data;
pose a potential threat to scientists, as the data could also become accessible to foreign governments and authoritarian regimes;
constitute an infringement of competition law, as new entrants barely have a chance to enter the market;
favour a reduction in the value of public research investment, since data on research activity can be collected by commercial research competitors, or made available to them in return for payment, amounting to industrial espionage.
These fears may seem hypothetical, but the fact, for example, that Elsevier’s parent company, RELX, has signed a huge contract to supply personal data to the US Immigration and Customs Enforcement agency has given some weight to these warnings. But what data are we talking about? Two facetious colleagues used the provisions of the GDPR to ask Elsevier for their data and documented their findings on the traces of their stay in the Elsevier Hotel. The dump contains some directly personal data (phone numbers, bank details, addresses), but above all a great deal of usage data: the opening of e-mails sent by the company, the most basic operations on Mendeley and ScienceDirect or, more amusingly or worryingly, the record of a person’s consents and non-consents:
A new petition, a decade after The Cost of Knowledge, calls to “Stop Tracking Science”, which actually means stop tracking academics. In the new configuration, libraries are still a passage point between the big publishers and the researchers, though no longer an exclusive one. But the data that flows through these exchanges is now considered in a different manner: “they are even attempting to persuade libraries to install trackers inside university networks: the research behavior of all of us is being recorded in real time.” While identification has long been presented as a security measure necessary to provide access to closed texts, it is now a source of concern, in a manner very similar to cell phone or internet tracking. To describe this phenomenon, several labels have been proposed, such as the “platformization of science”7 or “surveillance publishing”8. The big publishers are trying, through various legal actions, to present Sci-Hub and LibGen not only as intellectual property offenders, but also as hackers endangering the security of research institutions, while the same accusation is directed towards them. So, in the end, for you, which ones are the supervillains threatening academic communities?
This post is based on a talk given at the 2022 EASST Conference in Madrid on the same date [↩]
If you wish to buy this mug (no conflict of interest, I get no money if you click).
Retraction Watch has celebrated its 10th anniversary, and its creators have grown from a small blog into a reputable entity: funded by numerous donors, a source of academic publications, run by the Center for Scientific Integrity and manager of a database acknowledged for its quality. With the COVID-19 epidemic, the retraction of scientific articles (and even preprints) has become a mainstream media object, fully public beyond the academic communities directly concerned.
The institutionalization of the website mirrors that of retractions themselves, which have become partly normalized into the publishing process as a key part of post-publication peer review. In this post, written for the Peer Review 2020 week, whose theme is “Trust in peer review”, we will briefly look at journal policies and how they change the actual trust given to published articles1.
Flagging published articles. Don’t trust what you read
“Certified”, “peer-validated”, “peer-reviewed”: all these notions point to different practices but share the same objective, to assert that the text you are reading is not the simple product of the authors’ reflections and their exploration of a phenomenon, of theories and observations, but the outcome of a more or less complex process of evaluation of a manuscript by others, people not recognised as co-authors but sufficiently knowledgeable about the subject, the methods and the literature to certify that its content is valid.
Then, of course, scandals and other fraud cases multiplied, science stars falling one after another, but you could always believe that these were exceptions, special cases, and that almost all articles contained true and proven statements… at least until 2009. That year, the COPE organisation published its first standards2 on retracted articles, showing that it was not only normal but expected that journals would plan to remove from the scientific canon articles they had previously published. To be more precise, it was a matter of flagging articles differently according to the situation:
Journal editors should consider issuing an expression of concern if:… Journal editors should consider issuing a correction if:… Journal editors should consider retracting a publication if…
In this system, an “expression of concern” casts doubt on an article and warns readers that its content raises some issues. In most cases, it describes information that has been given to the journal and has led it to alert its readers about an ongoing investigation, but it does not directly pronounce on the validity of the work. By contrast, a “correction” always states that the core validity of the original article remains, with some parts of its content lightly or extensively modified. In some cases, the transformations have been carried to such an extent (e.g. every figure has been changed) that some actors have ironically coined the term “mega-correction” to characterize them. Contrary to an expression of concern, the authors of the article are fully aware of these modifications and, even if they have not written it, necessarily validate them before the publication of the so-called (mega)correction. If they don’t, journals sometimes publish editorial notes instead of corrections. Finally, a “retraction” aims to inform the readership that the article’s validity, reliability, ethical background or authorship no longer stands. Far from being an erasure, it is conceived as the final step in the publishing record of the original article, as the notice of retraction “should be linked to the retracted article”. A retraction is either conducted in close collaboration with the authors, or against them at the request of someone else who is explicitly named (e.g. a journal editor-in-chief, a colleague, a funding body…). Ten years later, COPE produced a second version3 of its guidelines, in which the grounds for retraction were extended, for instance to the use of prohibited material or copyright infringement. Two motives are of particular interest:
It has been published solely on the basis of a compromised or manipulated peer review process
The author(s) failed to disclose a major competing interest (a.k.a. conflict of interest) that, in the view of the editor, would have unduly affected interpretations of the work or recommendations by editors and peer reviewers.
It is no longer only the conditions of production of articles or their content that are targeted, but the very processes of evaluation, which can be hijacked or simply distorted if the authors’ relationship to their object is not disclosed. Not only can you not trust the content of the paper, you can no longer trust the process by which journals certify that content. You can only trust them when they certify that they have failed… and these new motives were quickly put to the test.
An epidemic of retractions? COVID-19 as a public discussion of the status of papers
A month after the publication of these guidelines, the COVID-19 epidemic began, with the adoption of open science as borders closed. We have already dealt with the articles on the HCQ treatment and the Lancetgate that followed, i.e. an ultra-fast but complex case of retraction, which moreover recently led the commercial journal to change its peer review process. The editors of the Lancet group conclude their op-ed “Learning from a retraction” with unintended irony: “As trusted sources of information, the Lancet journals are committed to ensuring that our editorial processes will continue to be as robust as possible.” Who needs to learn from a failure if robustness was there all along?
This highly visible example points to another phase in the institutionalisation of the retraction object: its public debate, beyond academic circles. The existence of the Retraction Watch database has been acknowledged and the commonness of retraction has become a public concern, as in this Canadian article:
Similarly, a 2019 Leger poll for the Ontario Science Centre found 29 per cent of respondents said that because scientific theories are fluid, they can’t be trusted. What’s more important than the erosion in trust, says Caulfield, “is a polarization where people are gravitating toward conspiracy theories or messaging (including misinformation) that is trying to increase distrust because those messages either appeal to their ideological leanings or preconceived notions.” “My fear is if people don’t trust the good science, don’t trust science from these respected journals, it’s going to be increasingly difficult to fight misinformation because people aren’t going to trust the correction.”
Simultaneously, the same database led to discussions, and even papers, in certain scientific communities. Thus, some authors calculated retraction rates for different topics, in order to assert that COVID-19 was leading to two epidemics: one affecting human bodies and another one of retractions.
From: Yeo-Teh, N. S. L., & Tang, B. L. (2020). An alarming retraction rate for scientific publications on Coronavirus Disease 2019 (COVID-19). Accountability in Research. DOI: 10.1080/08989621.2020.1782203
The founders and employees of Retraction Watch themselves gave a reply in the same journal. Apart from technical remarks about the limitations of the corpus and the inclusion of preprints, the main explanation offered by these respondents is the speed with which the journals intervened: it usually takes years to produce a retraction, not days or weeks.
The institutionalisation of retractions, combined with the focus and urgency of the COVID-19 epidemic, therefore leads to seemingly virtuous behaviour, as journals no longer drag their feet in admitting problems and even communicate widely about retractions, no longer ashamed but proud of their professionalism, as The Lancet group journals did. At the risk of giving articles, and scientific discourse more generally, an air of permanent reversibility, far from the idea of the incremental self-correction of science.
Yesterday’s truth is today’s ignorance. Living in a post-truth academic world
Far away from the urgency of the COVID-19 epidemic, what happens to flagged papers over time? Beyond knee-jerk reactions, corrections can later themselves be corrected, retractions can be “unretracted”, an expression of concern can itself be retracted after 15 years, and some have proposed that “good faith” retractions could be combined with the publication of “replacement” papers4, while the others would be permanent. Besides, there is life after death for scientific publications: retracted papers are still cited, and most of their citations take no notice of their “zombie” status5.
Instead of incorrectly equating the prevalence of retractions with that of misconduct, some consider the proliferation of flagged articles a positive trend6. In this vision, the very concrete effects of post-publication peer review reinforce scientific facts already built through peer review, publication and citation. Symmetrically, as every published article is potentially correctable or retractable, all scientific information rhymes with uncertainty. The visibility given to these flags and policies undermines the very basic components of the economy of science: how long can we collectively trust peer review and consider that peer-reviewed knowledge should be the anchor from which to face a “post-truth” world?
Wager, E., Barbour, V., Yentis, S., & Kleinert, S., on behalf of COPE Council (2010). Retractions: guidance from the Committee on Publication Ethics (COPE) [↩]
And yet another agreement! While it was celebrated across the ocean as “the largest OA deal ever signed in the US” and a “milestone” for OA, we Europeans are now used to announcements of these “groundbreaking” contracts every other week. So much so that I have already written one post in March on the German Springer/DEAL agreement and another in May on the Faustian Elsevier/Dutch consortium one. All things come in threes, and for a good reason, as the Californians give us some food for thought on the financial side of the agreement.
First of all, it should be noted that the contract between Springer Nature (SN) and the University of California (UC) has not yet been drafted; only the Memorandum of Understanding (MoU) was made public this week1. This publication reflects a clear commitment on the part of the universities to make the negotiation processes, and the principles governing the choice between subscription, support or no deal, transparent to local academic communities, but also more broadly to all stakeholders interested in these issues.
As we are almost in the middle of the year, the fact that the agreement has been signed for the years 2020 to 2023 has a first important consequence: all the mechanisms necessary for the identification of authors, for the various payments and for monitoring will probably not be in place before the end of the year (SN has committed to this by 1 January 2021). In practice, UC will pay in 2020 an undisclosed amount named the “UC 2020 spend” for a Read & Publish deal in which the Publish part will be free of charge. It is only over the next three years that the mechanisms will appear whose combined originality is at the heart of this post.
The Multi-payer model. Getting authors and funders involved
One of the original features of this contract with Springer is the adoption of a model first tried out in the UC/PLOS agreement: the splitting of an APC into two distinct blocks, the first 1,000 dollars, which will systematically be paid by the university, and the remainder, which will be paid by the authors if they have the possibility to do so. This mechanism smells like a device invented by economists, and it is one; a professor at UC Berkeley describes its purpose in The Scientist:
“In the US, there already were multiple funding sources—libraries paid for subscriptions, and when authors wanted to publish open access, they paid a surcharge on top of that out of their funds,” says MacKie-Mason. “The key thing here is that we’re integrating those into a single contract. That creates cost control for the institutions and the researchers [during the transition to open access], which is critical because the cost of scholarly publishing has been exploding.”
So the solution to the “new serials crisis” would be to involve authors, as UC people have repeatedly stated2; but aren’t they already involved with classical “one-shot” APCs? The idea of combining APCs with institutional support in a contract is here pushed to its limit, as we will see. In some “transformative agreements”, there is no way for a third party to understand who in the end pays what and from which source, especially in consortia settings. Here, it is quite the opposite: throughout the MoU, a clear separation is made between two sources:
UC – whether the California Digital Library or UC itself – covers a $750,000 reading fee, $1,000 for each APC and, as we will detail, more if authors can’t pay. All these sums are counted separately as “UC Fully OA Spend”, “UC Hybrid Spend” and of course the reading fee.
The authors pay the “APC remainder”, whoever the original funder may be; these sums play a very limited role in the contract and are not aggregated under specific names.
So the split is made not only for each article, but for the contract as a whole. Yet the “cost control” supported by MacKie-Mason in fact operates only on the UC side: authors can spend whatever they wish on APCs and still benefit from the UC contribution. And since authors have to pay, they are given the possibility to opt out of OA in hybrid journals, where OA is the default option. Consequently, the deal does not guarantee that all UC corresponding-author articles will be OA, but only those of authors who wish so and, to some extent, are ready to pay, are favourable to hybrid journals, or are supporters of APC gold open access. The division and the authors’ choice are highly visible in one exception in the contract. If, despite very short deadlines, SN were able to implement the entire workflow before the end of 2020, then it could start invoicing APCs. Under no circumstances would UC have anything to pay, but authors could be solicited:
Should Springer Nature implement the Multi-payer Model before January 1, 2021, Springer Nature may begin collecting the APC Remainder under the terms of the model […]. If the corresponding author does not have research funds available to cover the APC Remainder, then Springer Nature shall not collect an APC for those articles. No UC Fully OA or Hybrid Spend payments will be charged during this time (article 3.8.2).
It is hard to imagine a corresponding author who can get a free APC deciding to pay, unless their grant is nearing completion and they cannot spend it otherwise. But this provision does indeed support the idea of two decoupled payers, as the rules applying to them may differ: the first (UC) not paying in 2020 before being obliged to contribute, the second remaining in a logic of choice throughout the contract. But what exactly are the amounts to be paid?
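To make the mechanism concrete, here is a minimal sketch of the split described above. The $1,000 institutional share and the $750,000 reading fee come from the MoU as summarised here; the function name, the example APC amount and the assumption that UC absorbs the remainder when an author has no research funds are illustrative readings of the model, not contract language.

```python
# Minimal sketch of the UC/SN multi-payer APC split, as described above.
# Assumption: when the author has no research funds, UC covers the full APC.

def split_apc(list_price: float, author_has_funds: bool) -> dict:
    """Split one APC between UC and the corresponding author.

    UC systematically pays the first $1,000; the author pays the
    remainder if they have research funds available.
    """
    uc_share = min(1000.0, list_price)
    if author_has_funds:
        return {"uc_pays": uc_share, "author_pays": list_price - uc_share}
    # No author funds: UC covers the remainder as well.
    return {"uc_pays": list_price, "author_pays": 0.0}

# Using the 2019 average hybrid APC ($3,208) quoted later in the MoU:
print(split_apc(3208.0, author_has_funds=True))   # {'uc_pays': 1000.0, 'author_pays': 2208.0}
print(split_apc(3208.0, author_has_funds=False))  # {'uc_pays': 3208.0, 'author_pays': 0.0}
```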
Price, volume, participation: an equation to determine the hybrid bill
The price calculation formulas are not yet complete, since the agreement is not signed, but the foreseeable variations are known for the whole duration of the contract. For full OA journals, there will be a base price in 2020, with a maximum increase of 3.5% per year. This base price is certainly not the catalogue price, since it is specified that “If at any time during the agreement the then-current list price APC is lower than the APC to be charged under the agreement, the current, lower APC will be charged instead” (art. 3.3). The question of prices and volumes is most complex when it comes to hybrid APCs. First of all, unit pricing is almost constant, with the same prices in 2020, 2021 and 2022, and a maximum increase of 2% in 2023. But while the paid volume published in full OA appears unlimited, the paid volume published in hybrid journals is very constrained.
First, the number of articles published in hybrid journals by UC corresponding authors in 2019 and in 2020 is calculated, and the smaller of the two values becomes the Base article number. The minimum volume of articles is then simply defined as 85% of this number, constant over time. The maximum number, on the other hand, depends on two variables: first, an “inflation” of the authorized volume of 5% per year, then a calculation that depends on the effective participation of authors in the publication scheme. The parties expect that between 30% and 40% of authors will choose to publish in hybrid OA rather than revert to a paywalled publication (orange curve). If the programme is successful and more than 60% of authors take part, the red curve defines the maximum number of articles; symmetrically, in case of failure – less than 30% – the yellow curve defines this maximum.
In a fashion close to the DEAL agreement, Springer defines a volume control on hybrid publishing, which can allow up to a third more articles than the current hybrid APC volume. But the consequences of going over this limit are very different from the German counterpart: UC no longer pays its $1,000 above the maximum; authors, if they so choose, must pay the full APC remainder. At the other end, if the minimum is not reached, UC shall pay “the average hybrid APC for UC corresponding authors from the previous year for the number of articles necessary to bring the total to the minimum. In 2021, the average hybrid APC from 2019 ($3208) shall be used.” So Springer Nature is sure to get (almost) its money back, and UC has a control mechanism which prevents a steep rise of its hybrid spend through volume control.
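The MoU expresses this corridor through curves rather than a closed formula, so the following sketch is only one plausible reading: the 85% floor, the 5% yearly inflation and the 30%/60% participation thresholds are taken from the text above, while the band multipliers (1.2, 1.0, 0.9) are purely illustrative placeholders for the red, orange and yellow curves.

```python
# Sketch of the hybrid volume corridor. The band multipliers are
# hypothetical stand-ins for the contract's red/orange/yellow curves.

def hybrid_corridor(base_articles: int, year_index: int, participation: float):
    """Return (min, max) paid hybrid articles for one contract year.

    base_articles: min(hybrid articles 2019, hybrid articles 2020)
    year_index:    1 for 2021, 2 for 2022, 3 for 2023
    participation: share of authors opting to publish hybrid OA
    """
    minimum = 0.85 * base_articles                  # floor: 85% of the base number
    inflated = base_articles * 1.05 ** year_index   # 5% yearly volume "inflation"
    if participation > 0.60:        # high uptake: "red curve"
        maximum = 1.2 * inflated
    elif participation >= 0.30:     # expected uptake: "orange curve"
        maximum = inflated
    else:                           # low uptake: "yellow curve"
        maximum = 0.9 * inflated
    return round(minimum), round(maximum)

# Hypothetical base of 1,000 hybrid articles, 45% participation in 2021:
print(hybrid_corridor(1000, year_index=1, participation=0.45))  # (850, 1050)
```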
Hard capping the total costs. Will UC pay less in the end?
Until now, we seemed to be analysing yet another “cost-neutral” agreement that could in practice turn into a rising-cost contract: APC price inflation, unlimited payment for full OA articles and a controlled maximum rise in hybrid OA would all contribute to a larger bill for UC. Then comes the most original point of the UC/SN contract: a hard cap on the sum of these fluctuating bills. Some agreements, typically the JISC ones, include a price control that says “we will pay this, period”; of course, the trade-off is most often a defined, limited volume. Here, we read in article 3.6:
In each year of the contract, the Total UC Spend shall be subject to a fee control mechanism, as set out below. All fee control mechanisms are computed in relation to the license fees paid by UC for Springer journals, Adis Journals, Palgrave journals, and academic journals on nature.com in 2020 (“UC 2020 Spend”).
So the starting “subscription” – i.e. Read & Publish – price caps the whole price of the contract, once again in a very precise and, shall I write, twisted way. Starting from the “UC 2020 spend”, the 2021 total cannot exceed 95% of that sum: if it does, UC first recovers part of the reading fee and, if that is not enough, gets a refund from SN. So the maximum is clear: -5% compared to the starting year. But in 2022 and 2023, the total cannot exceed 98% of that sum, and if it does, UC only gets the reading fee back and nothing else. In other words, there is in fact no fixed maximum payment, and certainly no guarantee that UC will pay less in 2022 and 2023 than in 2020 and, as we don’t know what the previous bills were, even less so than in 20193. The UC side is nonetheless very confident about the result, as the associate executive director of the California Digital Library, Ivy Anderson, stated: “The new agreement is expected to save the system money overall, but the exact cost will depend on the number of articles UC researchers publish”.
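A small numerical sketch makes the asymmetry visible. The 95%/98% caps and the refund rules are those described above; the dollar figures are hypothetical, since the actual “UC 2020 spend” is undisclosed.

```python
# Sketch of the article 3.6 fee control mechanism, with invented amounts.

def capped_total(year: int, billed: float, uc_2020_spend: float,
                 reading_fee: float = 750_000.0) -> float:
    """Return what UC actually pays after the fee control is applied.

    2021: cap at 95% of UC 2020 spend, overage refunded first from the
    reading fee, then by SN itself (so the cap always holds).
    2022-2023: cap at 98%, but the refund cannot exceed the reading fee.
    """
    cap = (0.95 if year == 2021 else 0.98) * uc_2020_spend
    overage = billed - cap
    if overage <= 0:
        return billed                      # under the cap: pay as billed
    if year == 2021:
        return cap                         # refund whatever exceeds the cap
    return billed - min(overage, reading_fee)  # 2022-2023: capped refund

# Hypothetical UC 2020 spend of $10M and a 2022 bill of $11M:
print(capped_total(2022, 11_000_000, 10_000_000))  # 10250000.0, above the 9.8M "cap"
```

As the example shows, in 2022 and 2023 the so-called cap can be exceeded whenever the overage is larger than the reading fee, which is precisely why there is no fixed maximum payment.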
Whatever the final outcome – and one can assume, given the complexity of the provisions, that the UC side has run many simulations of its final bill – there are three lessons to be learned from this MoU. First, in the absence of price transparency, it is difficult for outsiders to determine whether an agreement is really financially interesting or whether it mechanically leads, as with subscription formulas, to higher prices paid by higher education institutions. Secondly, this agreement builds a link between the payments of authors and those of the university: it thereby allows the direct inclusion of research funders, while ensuring traceability and monitoring of flows for each of the parties. It also contains incentives on the behaviour of authors, who benefit from using the UC workflow to partially or totally reduce their own payment. But it is the ability to capture money from funders, third parties to the contract, that is striking, certainly with Coalition S members in mind.
Thirdly, and consequently, the agreement de facto guarantees Springer’s revenues by encouraging, and subsidizing, new spending in the form of APCs. Making new provisions to turn the Nature journals into hybrids goes in the same direction. In a similar way to “Pure Publish” agreements that come with a discount on APCs, the UC agreement is a transformative one in that it explicitly changes universities from fund providers into fund collectors for publishers, with the hope of a diminishing or stable bill in exchange for that service.
We saw in the Dutch case that there can be quite significant differences between an MoU and the actual contract [↩]
See this piece on Impact of Social Sciences LSE Blog [↩]
I previously wrongly tweeted that they would pay less, as I thought the reference was UC 2019 spending [↩]
Disclaimer: this post does not address the merits of the treatments proposed by the IHU team nor their risks, and even less whether Prof. Raoult is a genius, a madman or a top scientist who got lost along the way.
It all started with a video entitled “Covid-19: endgame”, posted on February 25th, 2020 by the IHU Méditerranée-Infection on YouTube. In this clip of less than 2 minutes, extracted from the end of a seminar, Didier Raoult states that COVID-19 is “probably the easiest respiratory infection to treat” and that chloroquine (CQ) is effective and already “recommended for all clinically positive cases” in China. It wasn’t the first time this infectious disease star recommended CQ and its cousin molecule, hydroxychloroquine (HCQ), to fight viral infections. As early as 2007, he had presented these drugs as “an interesting weapon to face present and future infectious diseases worldwide” in the International Journal of Antimicrobial Agents (IJAA). Framed as a recycling of these antimalarial drugs, that article was a literature review, mainly of in vitro studies, and was part of the scientific and medical strategy of the IHU: the repositioning of old molecules, free of rights, towards new uses. This possibility of reuse was taken up in a letter sent on February 11th, 2020 to the same journal (IJAA), accepted the same day and published on February 15th.
The series of IJAA publications continued. The day after the YouTube video, a new article was submitted, specifically dedicated to the use of CQ as a treatment for the COVID-19 epidemic. Accepted the next day, February 27th, and published a week later, it repeated the efficacy claims reported by the Chinese and the resulting clinical recommendation. This assertion is based in particular on one of the strangest references I have ever encountered: a letter of exactly ten lines published in BioScience Trends, whose body is copied below:
The coronavirus disease 2019 (COVID-19) virus is spreading rapidly, and scientists are endeavoring to discover drugs for its efficacious treatment in China. Chloroquine phosphate, an old drug for treatment of malaria, is shown to have apparent efficacy and acceptable safety against COVID-19 associated pneumonia in multicenter clinical trials conducted in China. The drug is recommended to be included in the next version of the Guidelines for the Prevention, Diagnosis, and Treatment of Pneumonia Caused by COVID-19 issued by the National Health Commission of the People’s Republic of China for treatment of COVID-19 infection in larger populations in the future.
Defined as an “abstract” on the journal site, but without any other body of text, this “article” hardly seems supported by the 7 references listed. It relies mainly on an in vitro study from early February, already widely cited, which indicates that CQ could be effective. In fact, it wasn’t until February 29th that the results of a CQ clinical study were submitted to a Chinese journal, before being published on March 6th. But let’s go back to the IHU timeline.
Ten days later, a second video was put on YouTube, presenting the results of an observational study made in Marseille and showing the effects of HCQ alone and in combination with an antibiotic, azithromycin (AZ). So there was a slight shift: from CQ to HCQ, with an antibiotic added. The main result was only the absence of virus in the nose and throat, so these were not clinical results; but Didier Raoult drew on them to tell his audience what their consequences were for the clinical institution he manages:
“The fact that you no longer have the virus changes the prognosis. Actually, that’s what infectious diseases are all about. If you don’t have the germ anymore, you’re saved… You have a right to be tested here, and if you’re tested, you have a right to be treated here. That is what we will do.“
So basically, for him, the results were so good that you HAD to treat people once they tested positive. No more trials or research needed; the time for clinical medicine had come, in the hope that other places would follow his lead. Slides were available on the same webpage but with no link to an existing paper, though on the same day, unmentioned in the video, a preprint was submitted to medRxiv. Simultaneously, as is often the case with biomedical preprints, it was submitted to a journal… the ever-welcoming IJAA, which accepted it, as usual, one day later and published it on March 20th. Before we come to the extraordinary fate of this paper, let us go back to the title of this post and its relevance at this point.
From preprints to preprints: the life and death of the Ingelfinger rule
We can observe in the two examples above a pattern of scientific communication: the IHU first posts videos, then produces preprints and finally publishes articles in academic journals – here the IJAA. This is very unusual, at least in contemporary times, but has happened in various ways over centuries of scholarly communication. The idea that you first had to communicate with your peers through a journal before reaching “the public” has been neither constant nor dominant across disciplines. In our era, it was pushed at a key moment in the mid-1960s. Back then, a first wave of preprints was being supported by the NIH and was gaining momentum in some biomedical communities through Information Exchange Groups (IEGs) that circulated printed copies of unpublished manuscripts by air mail1. Nature started a campaign against the “preprint galore”, and a few European and US biology and biochemistry journal editors-in-chief met in Vienna in 1966 to get rid of them by stating that: “The journals listed below will not consider manuscripts for publication if preprints, of essentially identical content, are to be distributed, in substantial numbers, by an agency independent of the author or of the publisher of the journal.”2
That led to the termination of the IEG experiment by the NIH in 1967. Two years later, the editor-in-chief of the New England Journal of Medicine (NEJM), Franz J. Ingelfinger, coined the rule for acceptance of a paper, based on his interpretation of “sole contribution”, de facto forbidding even “circulation-controlled journals” from printing something ahead of the NEJM3. In the same sentence, he remarkably included “news media”: he thus aimed not only at the exclusive circulation of articles within scientific communities, but also at prohibiting the dissemination of their content to journalists and other medical news enthusiasts. In the early 1970s, his work to promote this exclusivity had a double effect: the practice was given the name Ingelfinger Rule, and many high-profile journals adopted it explicitly. While at the beginning of the 21st century the Ingelfinger Rule was often interpreted as a means to fight the duplication of papers, its aim was rather to control the circulation of knowledge, in order to protect the newsworthiness of “general medical journals”4 and to organize communication about medical academic papers in a specific way, favourable to a limited number of journals.
Indeed, as Vincent Kiernan beautifully described in his 1997 article5, the Ingelfinger Rule had become prevalent in Anglo-American journals. It was notably the efforts of the International Committee of Medical Journal Editors (ICMJE) that built it into a “publishing standard”, whose effect was to let these journals and their editors-in-chief simultaneously operate a double control:
control over authors, by requiring them not to reveal the content of their articles, and even less to share the figures and other synthetic representations of results;
control over journalists, by providing them with preprint copies of articles in advance, while imposing an embargo on them until actual publication by the journal.
As a result, the general press advertises (free of charge) the content of the journals – it is not an article by Drs. X & Y, but an article from the NEJM or The Lancet – and organizes the dissemination of “medical discoveries” in a way that strengthens the influence of these journals within academic communities, among press professionals and with the general public. To conclude his paper, Kiernan questions the durability of such practices in the Internet era and points to the effect of arXiv preprints, citing the efforts of the ICMJE to extend the Ingelfinger Rule to e-prints, with the argument of the direct consequences of biased or false medical knowledge for the public.
The biomedical field resisted preprints for 15 more years and the Ingelfinger Rule largely stood6, even if it was adapted to emergency contexts such as the AIDS epidemic. But Kiernan’s forecast became reality, notably with the creation of bioRxiv in 2013 and the subsequent success of preprints in biology and biomedicine, to the point where preprints became quasi-articles. Consequently, the Ingelfinger Rule was dropped by numerous journals and publishers, even if the NEJM itself keeps a case-by-case policy.
Prof. Raoult and his videos, possibly including slides with the figures so dear to the NEJM, thus live in a post-Ingelfinger world, in which academics can directly manage their own communication, not only in terms of content, but also in terms of comments, criticism, reporting and response. Indeed, we will see that it is not only primary communication that is modified by the abandonment of this rule, but the complete organization of the journal’s centrality in the whole chain of scientific communication.
Chaos and creation around one paper
Let us go back to this first publication by Raoult’s team on the effects of HCQ on viral carriage, published in the IJAA on March 20th, 2020. At the time of writing, the article has received 1,124 citations according to Google Scholar, as well as thousands of tweets, blog posts and other references in press articles according to PlumX, a company owned by Elsevier, itself the publisher of the IJAA. The early circulation of the article was not based on an IJAA press release, but on Raoult’s own video and those of his various networks. As Wired recounts, with the help of a lawyer, a retired doctor, a shared Google Doc and an interview on Fox News – a heterogeneous assemblage à la Bruno Latour – the study published in the IJAA won a quote in a tweet from the President of the United States the day after its publication:
HYDROXYCHLOROQUINE & AZITHROMYCIN, taken together, have a real chance to be one of the biggest game changers in the history of medicine. The FDA has moved mountains – Thank You! Hopefully they will BOTH (H works better with A, International Journal of Antimicrobial Agents)…..
That Trump endorsement of course had enormous consequences for the HCQ market, the launching of clinical trials, self-medication practices and the scope of the public discussion on the efficacy and dangers of such a treatment. We won’t directly treat these important questions here, but keep on following the exotic trajectory of the publication itself. Simultaneously with the Trump tweet, a PubPeer thread was launched on the famous post-publication comment platform; contrary to the Voinnet affair7, most of the first commentators signed their critiques. Among other topics, the communication trajectory of the paper helped the critique: for example, Leonid Schneider noticed the discrepancies between the figures attached to the video and the ones drawn in the published paper.
Above and beyond PubPeer, three reviews were quickly published, questioning many aspects of the IJAA paper. The first was a Twitter thread by a master’s student on March 22nd; the second an 18-page Zenodo paper by three British and Irish statisticians on March 23rd; the third a blog post on March 24th by Elisabeth Bik, the famous Dutch microbiologist and scientific misconduct specialist. So only four days after publication – still four times the actual IJAA reviewing delay – the paper was being trounced online. Among the many points raised, the publishing history was questioned: some noticed the differences between the first “preprint” on the IHU website and the final paper, others the absence of changes, a hint for them of how tenuous the peer review process had been; the 24-hour delay was surprising to every commentator. The fact that one of the authors was also the editor-in-chief of the IJAA was underlined, as well as the “vanishing” of 6 patients (among the 26 treated with the combined drugs), which could completely change the statistical value of the results.
While Prof. Raoult was fighting for HCQ to be authorized for prescription by general physicians in France, the online discussion kept going until the learned society behind the journal, the International Society of Antimicrobial Chemotherapy (ISAC), issued a troubling press release on April 3rd:
“ISAC shares the concerns regarding the above article published recently in the International Journal of Antimicrobial Agents (IJAA). The ISAC Board believes the article does not meet the Society’s expected standard, especially relating to the lack of better explanations of the inclusion criteria and the triage of patients to ensure patient safety. Despite some suggestions online as to the reliability of the article’s peer review process, the process did adhere to the industry’s peer review rules. Given his role as Editor in Chief of this journal, Jean-Marc Rolain had no involvement in the peer review of the manuscript and has no access to information regarding its peer review. Full responsibility for the manuscript’s peer review process was delegated to an Associate Editor. Although ISAC recognises it is important to help the scientific community by publishing new data fast, this cannot be at the cost of reducing scientific scrutiny and best practices. Both Editors in Chief of our journals (IJAA and Journal of Global Antimicrobial Resistance) are in full agreement.”
So the paper has a lot of problems, but it stuck to the peer review rules. This cryptic press release became even more troubling a week later, when it was “replaced” by a joint ISAC and Elsevier release. In fact, the journal is not owned by the learned society but by the publisher, the IJAA being only an “official society journal”. This second release is streamlined compared to the first: the “does not meet the Society’s expected standard” sentence has disappeared, replaced by the announcement of a post-publication peer review audit. Through this example, we can measure how different the situation is from what prevailed under the Ingelfinger Rule. But it is with another paper from Raoult’s team that science communication came back to its 17th-century roots.
From presidential visit to media frenzy: the marginalization of journals in scholarly communication
After a follow-up study published at the end of March, which made fewer headlines, and as some HCQ trials on diverse patient groups were starting to be published, it is with another observational study that Prof. Raoult showed the world how he really manages scholarly communication. On April 9th, the French president, Emmanuel Macron, unexpectedly visited the IHU Méditerranée and met with Prof. Raoult, who presented him with the results of his ongoing study. There was no press, but members of the IHU recorded Macron’s arrival and posted the video, making it available to all the French media.
Here we need to go back to the origins of scientific communication, even before journals were born, when the quality of witnesses – meaning mostly their noble or royal standing – was an important element of the credit given to the narrative of an experiment or an observation8. In our times, it has become a two-way flow of credit: Macron was showing his will to base public health on evidence, all the more so when provided by a star scientist, while Raoult was legitimizing his position in the French public health landscape, where critics of his methods and results were numerous.
The next day, Raoult made his first results public, not in the form of a preprint or slides with an associated video, but as a simple tweet with the abstract and a summary table.
L'abstract et le tableau récapitulatif des données de notre article portant sur le traitement de 1061 patients sont en ligne ! The abstract and the summary table of our paper on the treatment of 1061 patients are online! https://t.co/mTWj6aGpTk https://t.co/lNXZK91etI pic.twitter.com/PLdygNolxG
This tweet was of course massively picked up and commented on, and it aroused strong media interest, all the more so as the results reinforced those of the previous study by moving from a purely biological effect to a clinical one: “The HCQ-AZ combination, when started immediately after diagnosis, is a safe and efficient treatment for COVID-19, with a mortality rate of 0.5%, in elderly patients. It avoids worsening and clears virus persistence and contagiosity in most cases.” Four days later, Prof. Raoult was invited on the show of Dr. Oz, a famous US TV host harshly criticized for his often unproven medical advice.
https://www.youtube.com/watch?v=uy1cPT1ztko
On the day of the interview, there was no preprint and the paper had not even been submitted to a journal. Yet Prof. Raoult presented his results as facts. It was only on April 20th that the manuscript was sent to Travel Medicine and Infectious Disease9, with 10 days of peer review and publication on May 5th. Tens of thousands of Facebook comments and tweets followed according to PlumX10, though the media endorsed the results as much as they reported the methodological limits of the study – mostly the absence of a control group.
This study is undoubtedly a borderline case in the marginalization of journals, with communication aimed primarily at peers being out of step with announcements to political leaders and media outlets. Nevertheless, the massive availability of preprints, abstracts and other materials on topics such as the effectiveness of masks or tests, the persistence of the coronavirus on this or that surface, or cases of cure, has led to significant media coverage. From the point of view of the public authorities and the general public, this could have strengthened the authority of academic journals, once again in a position to assert their necessity as an obligatory passage point for public dissemination. But this return to grace assumed that journal peer review is an effective barrier against “bad science”, a hypothesis which has been dismissed by thirty years of studies and literature.
Prestige journals in epidemic times: an economy of reputation crumbling down?
Indeed, prestige journals are bad for methodology: they don’t follow their own standards on reporting clinical trials, nor, more generally, disciplinary standards. Yet they remain prized places to publish, even during the pandemic, when preprints are so trendy because of the urgency to share results and knowledge. And some HCQ papers were quietly published in such journals, until one observational study seemed to close the debate on this treatment’s efficacy and risks.
For this study, there was no advance communication, no preprint, but a straight article published in The Lancet by 4 authors. Oh, yes, there is a little gem still there on Twitter: two days before online publication, the “first author” answered a tweet by Richard Horton, editor-in-chief of The Lancet:
The reaffirmation of their confidence in the journal peer review system, even in times of health emergency, is comforting. And their trust is shared by the highest health authorities. On May 22nd, the study was published, asserting, on the basis of a gigantic aggregation of almost worldwide patient databases, that HCQ is not only inefficient but also very dangerous for COVID-19 patients. This announcement came at a time when many ongoing trials included HCQ treatment arms. As a result, the WHO decided the next day to evaluate the continuation of its Solidarity trial and announced its position on May 25th:
“Having met on 23 May 2020, the Executive Group of the Solidarity Trial decided to implement a temporary pause of the hydroxychloroquine arm of the trial, because of concerns raised about the safety of the drug. This decision was taken as a precaution while the safety data were reviewed by the Data Safety and Monitoring Committee of the Solidarity Trial. “
Nevertheless, in a manner similar to Prof. Raoult’s article, statisticians then looked at the content of the article and the data it provides, and began to point out obvious errors. But for some it was more a police investigation than a data re-analysis: how can there be only 4 authors (and no acknowledgements) for such a study? Why are the hospitals involved not mentioned? What is this mysterious enterprise, Surgisphere, unknown until recently, which provided the data? What is the career of its manager and co-author of the paper? Setting aside the questions about the company, 6 days after publication they ended up writing an open letter to the authors and the journal, signed by 201 colleagues and endorsed by James Watson11. They mainly pointed out the necessity of opening the data, all the more so given the extraordinary results, and described obvious errors, questioning the quality of the database and the way the data was gathered (ethics included).
The Lancet and the authors were very prompt in responding to these criticisms: on May 30th a correction was published, covering very minor aspects: “the numbers of participants from Asia and Australia should have been 8101 (8·4%) and 63 (0·1%), respectively. One hospital self-designated as belonging to the Australasia continental designation should have been assigned to the Asian continental designation.” Of course, the conclusion was a classic of such corrections: “There have been no changes to the findings of the paper.” But critics kept pushing on the problems, whether HCQ supporters, Prof. Raoult himself speaking of “fake data” and “manipulated data” on Twitter, or clinicians trying to reconcile the paper’s data with their own. So, only 3 days after the correction, The Lancet put an expression of concern on the paper:
“Although an independent audit of the provenance and validity of the data has been commissioned by the authors not affiliated with Surgisphere and is ongoing, with results expected very shortly, we are issuing an Expression of Concern to alert readers to the fact that serious scientific questions have been brought to our attention”.
The paper was still saveable, thanks to the impending independent audit. Alas, another 2 days later, the 3 authors not affiliated with Surgisphere threw in the towel, stating that they had never seen the data, and demanded the retraction of the article. The Lancet made it official, provoking expressions of outrage, the questioning of the journal’s seriousness and… the reactivation of the suspended trials. Thus, in less than a week, a worldwide study published in what many consider “one of the best medical journals in the world” was awarded the 3 labels commonly used in post-publication peer review – Correction, Expression of Concern, Retraction12 – nullifying the evidence claimed on May 22nd. But the Surgisphere story goes beyond that article: another paper, published by the NEJM on the “same kind of data”, was retracted the same day. Moreover, there are at least two regions – South America and Africa – which have suffered and will suffer from public health policies developed on the basis of preprints and data published by Surgisphere. While #LancetGate was trending on Twitter, in-depth inquiries were being made into Surgisphere and the 4th author of the study who, ironically, had co-authored a paper entitled “Combating Fraud in Medical Research” in 2013!
Science at its best: boring, negative results
To conclude this story on scholarly communication, we must add that most HCQ articles have not been given the same media treatment and have not been communicated in fancy ways by their authors: a preprint on bioRxiv or medRxiv, then an article with often unspectacular results and limitations due to the number of patients, their previous health conditions, incomparability between groups, etc. One day before the retractions, the same NEJM published the first randomized controlled trial on the post-exposure use of HCQ, quite close to the “Raoult treatment”, though AZ was not included. Here is part of the published abstract: “Side effects were more common with hydroxychloroquine than with placebo (40.1% vs. 16.8%), but no serious adverse reactions were reported. After high-risk or moderate-risk exposure to Covid-19, hydroxychloroquine did not prevent illness compatible with Covid-19 or confirmed infection when used as postexposure prophylaxis within 4 days after exposure.”
What do we get from this abstract? That the article is a typical example of those “negative results” that usually fail to get published, leading to significant biases in the evaluation of treatments in clinical trials through “publication bias”13. And yet, not because of its own interest, originality or breakthrough knowledge, but because of its relevance to public health in an epidemic situation, this trial was published by the other “world’s best medical journal”.
While predictions of “really bad science to come” have rung true for most commentators, supported by a high number of retractions, the COVID-19 academic publication landscape has also shown a massive uptake of preprints, public education on scientific controversies, conflicts of interest and statistical analysis and, furthermore… yes, the publication of null results in prestige journals. Whether you think this is a total mess and you preferred the Ingelfinger Rule depends on the way you conceive academic research and scholarly communication. Back then, preprints were non-existent in biology and social networks had yet to be invented, but The Lancet published the Wakefield paper on the link between the MMR vaccine and autism. Was it a better time?
See the classic book Shapin, S., & Schaffer, S. (1985). Leviathan and the air-pump: Hobbes, Boyle, and the experimental life (Vol. 109). Princeton University Press [↩]
A journal in which, as Raoult’s critics have underlined, one of the authors is an associate editor [↩]
The story is quite different within the academic world, with “only” 21 citations so far, far fewer than the March study. In fact, many observational studies and trials were competing with this study [↩]
EDIT June 9th: James Watson gave a fantastic interview on an Australian radio station where he goes into detail about how he started and ran this 5-day inquiry; hear it there [↩]
There is a huge literature on this topic from the last 30 years; see for example this article in The Lancet: Easterbrook, P. J., Gopalan, R., Berlin, J. A., & Matthews, D. R. (1991). Publication bias in clinical research. The Lancet, 337(8746), 867-872. [↩]
“On these conditions following: First, that Faustus may be a spirit in form and substance. Secondly, that Mephistophilis shall be his servant and at his command. Thirdly, that Mephistophilis shall do for him, and bring him whatsoever. Fourthly, that he shall be in his chamber or house invisible. Lastly, that he shall appear to the said John Faustus at all times, in what form or shape soever he please.
I, John Faustus of Wittenberg, Doctor, by these presents do give both body and soul to Lucifer, Prince of the East, and his minister Mephistophilis, and furthermore grant unto them, that twenty-four years being expired the articles above written inviolate, full power to fetch or carry the said John Faustus body and soul, flesh, blood, or goods, into their habitation, wheresoever. By me, John Faustus.
The legend of Faust has known many versions, but that of Christopher Marlowe, quoted above, is no exception to the common rule: it is an absolute thirst for knowledge that drives the scientist to conclude the pact, while the evil or deceptive nature of Lucifer does not play a major part in its making1. So invoking this reference for the signing of an agreement between scholarly institutions, by definition producers of knowledge, and a publishing house, however powerful it may be and normally only responsible for disseminating that knowledge, may seem counter-intuitive. Yet, as we shall see, it is the fitting one, as the relationship between the two parties may potentially be inverted. With this new agreement, Elsevier will try to become the knowledge-producing entity, the one that gives these institutions and their authors the information they think they absolutely need.
From subscription to a Read & Publish pilot to a full Publish & Read agreement
The relationship between the Dutch universities, represented here by SURFmarket B.V., and the publisher Elsevier is very old: it mainly consisted of the supply of journals in the form of paper subscriptions, then of electronic access from the end of the 20th century until 2015. The new contract signed in March 2016 contained not only subscription services but also provisions for the open access publication of a limited number of articles, originally 3,600 over 3 years. This agreement was not as successful as expected: for example, 1,300 articles had not been “consumed” by the end of this first agreement. Nevertheless, from amendment to amendment – 7 in total – the contract was extended in terms of the journals concerned (Cell Press) and temporally, until 20 April 2020.
In contemporary classifications, this agreement could therefore be considered a Read & Publish, with a subscription fee and open access publications produced without additional payment. The first parts of the new contract show a reversal of this logic by displaying a unified cost for all the services provided by Elsevier: reading is no longer separated from publication in the pricing, even though the provisions for the former are much more complex and run over many more pages than those for the latter.
Indeed, as is often the case in subscription contracts, numerous provisions govern the rights to access and read content, as well as the duties of the publisher in terms of document supply and the scope of services. But, as we saw in the case of the Springer/DEAL agreement, provisions on publication services can be relatively complex. This is not the case here: no financial exchange linked to each publication, no limit on the number of articles, no separation between publication in hybrid and full open access journals, so that only two pages define the conditions of publication. Beyond the description of the workflow, one article should be highlighted:
Both parties are committed to reach 100% Open Access during the term of this Agreement, In line with this joint ambition, Elsevier offers Corresponding Authors the possibility to publish Gold Open Access in the widest possible range of Elsevier journals under the Terms of this Schedule 4. As per the effective date of this Agreement 95% of the journal articles by the Corresponding Authors are eligible to be published Open Access. For the remainder of the journal articles, Elsevier will continue to strive for sustainable immediate open access options across its journal portfolio to support the 100% Open Access goal.
As with many technologies, lack of success is not necessarily an obstacle. Whereas, in spite of more than four years of possible publication under the previous agreement, only a fraction of Dutch authors had chosen this route, Dutch universities this time aim for 100% open access, and Elsevier promises them that almost all the journals it distributes will meet this end – while at the same time authorizing authors not to choose Open Access (p. 45), pushing this objective of 100% OA for corresponding authors’ papers further away.
The whole scheme is close to the one signed by Elsevier and Bibsam, the Swedish consortium, after almost 2 years with no deal. But the Swedes, in a recently published article2, claimed that they are actually paying less than before in total costs, while signing an agreement under which Swedish authors are almost mandated to choose OA publication.
On the OA publication side, the Dutch contract is therefore not just a continuation of the previous one, since new journals are involved and technical provisions are made to publish “by default” in open access under a CC-BY license. Moreover, the volume of publishable articles – even if it was previously never fully consumed – is now unlimited. This expansion of the service is accompanied by a sharp increase in costs. If we take the amounts listed in the various amendments to the 2016-2020 contract and add the new amounts, we obtain the following graph, quite different from the Swedish one3:
Over a “long period” (9 years), we therefore observe a 40% increase in costs, i.e. an average rise of more than 4.3% every year (40%/9 ≈ 4.4%). Far from the assertion of “cost neutrality” in the OA2020 text of 2015 and the initial hypotheses of Coalition S, the merely potential transformation of all Dutch publications into open access articles is thus extremely costly in this case, and renews the observations of the serials crisis already made by SPARC 25 years ago. While the amount paid is constant between 2021 and 2024, there is no guarantee that it will not rise sharply again after the end of the current contract. Unsurprisingly, financial information was completely absent from the press release, Dutch institutions touting the new agreement’s objectives as if they were already realised:
NWO President Stan Gielen said: “Enabling Open Access to research results has been a core mission for NWO since 2003. This agreement is a giant step in our collective ambition to provide 100 percent Open Access for all publicly funded research in the Netherlands.” NFU / CEO of Amsterdam UMC Hans Romijn, said: “This is definitely a game changing agreement in open access publishing in medicine from both national and international perspectives, considering the large impact and the volume of Elsevier journals. This will certainly contribute considerably to the advancement of research, and, most importantly, better treatments for our patients.”
The same assertions have been made over the last 10 years about agreements signed by different consortia, highlighting the open access part of such deals. They are however very different from the “revolutionary idea” proposed by Elsevier in autumn 2019 about data. In fact, it was so revolutionary that it leaked out:
As Sarah de Rijcke, a distinguished science and technology studies scholar, underlines, Elsevier then tried to directly exchange open publications for data, continuing the big publishers’ strategy of investing in scholarly infrastructures in order to maintain their profits while adopting open access for publications4. This led to a public discussion of the ongoing negotiations and a VSNU communication that denied “selling” metadata and research data to Elsevier. In December 2019, a press release reaffirmed that data remained the property of the universities and that principles had been adopted to avoid vendor lock-in. Let us now see how this has been dealt with in the final agreement.
Elsevier as a data company and how you will be willing to pay for it
Apart from the introductory pages, one has to reach page 102 to find the provisions on data and the “Open Science Services for Research Intelligence and Scholarly Communication” that are part of the agreement. The first two pages of this section describe the collaborative principles quoted in the December 2019 press release, which look very consensual:
interoperability and vendor neutrality
transparency, inclusion and collaboration
access to research data and metadata
data portability
If we add to this the common governance structure specified in the last pages, and the fact that each party retains its data at the end of the agreement, this part of the agreement can be considered a true joint collaboration. Nevertheless, Mephistopheles drapes himself in details, and a full reading of the articles on page 104 underlines how much Elsevier now considers itself a data company. Firstly, by default, everything belongs to Elsevier, except what is directly “provided” by the institutions. Secondly, under no circumstances can intellectual property resulting from the development of the services be shared. Thirdly, if common intellectual property were to be created, a new agreement would be needed, in which Elsevier would have ownership and the institutions a free but non-exclusive right of use. Fourthly, all existing openly licensed data provided by the institutions are directly reusable by Elsevier. Fifthly, even in the absence of such data, Elsevier may develop equivalent or similar services with other partners. Sixthly and finally, if sensitive data or data belonging to third parties were to be included in the services, the responsibility would of course lie solely with the signatory institutions.
The contrast is therefore striking: on the one hand, Elsevier is (finally) ready to release the publications of all its journals under Publish & Read agreements in return for a fee; on the other hand, the publisher locks up all the data and does not wish to share it under any circumstances, underlining how much data is now considered the real valuable object of the academic world5.
But what pilot services are implemented in the agreement? For the time being, and contrary to the subscription and open access publication services, none are specified. Only examples are given, in a table on page 103, reproduced in the FAQ and below:
1. Aggregation and deduplication service based on CRIS systems – Improves findability and visibility of NL research outputs by aggregating and deduplicating separate CRIS systems into a Pure Community module available to all institutions, which can serve as a building block for a NL open knowledge base.
2. NL Research data – Link research data from member institutes’ affiliated researchers in subject- or domain-specific repositories into the Dutch knowledge base.
3. Funding information – Link NL research outputs to grants and funders (EC, ERC, NWO, RVO, ZonMw), to allow for improved tracking/assessment of the impact of funded research.
4. Health Data Management – Link NL health “data silos” in a secure HDM platform.
5. OA compliance as a service – A proposed service to better use the knowledge base for OA publication reminders, meeting funder requirements, and collecting assets and reporting.
6. Fair recognition and reward – A proposed service to integrate a wider array of metrics and success stories for a better, wider recognition of academics, including teaching, society outreach, management, etc.
This list contains extremely different objects: some of them look like pure IT services that could be provided by companies operating outside the academic world, building shared data infrastructures. Others are based on the crossing and enrichment of very specific academic data, and are therefore likely to further feed Elsevier’s databases, for example to build its own Open Science Monitor for diverse institutions. Finally, the last item on the list is quite staggering, since it is nothing more nor less than the project of delegating to Elsevier a service for the individual evaluation of researchers, including of course open science dimensions.
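For a concrete idea of what the first use case’s “aggregation and deduplication” could mean in practice, here is a purely illustrative sketch – the record fields, institutions and identifiers are invented, not taken from the agreement:

```python
# Illustrative only: merge publication records exported by several CRIS
# systems and deduplicate them on a normalized DOI (invented sample data).

def aggregate_cris(exports: list) -> list:
    """Merge CRIS exports, keeping one record per DOI (first one wins)."""
    seen = {}
    for export in exports:
        for record in export:
            doi = record.get("doi", "").strip().lower()
            if doi and doi not in seen:
                seen[doi] = record
    return list(seen.values())

# The same (hypothetical) article reported by two institutional systems:
cris_a = [{"doi": "10.1016/J.FAKE.2020.001", "inst": "University A"}]
cris_b = [{"doi": "10.1016/j.fake.2020.001", "inst": "University B"}]
print(len(aggregate_cris([cris_a, cris_b])))  # 1, not 2
```

Trivial as it looks, whoever runs such a pipeline at national scale holds the deduplicated, enriched master record – which is precisely the stake discussed here.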
Whether these pilots come true or not, this last part of the agreement underlines the extent to which it embodies a dystopian vision of Open Science, portrayed by Philip Mirowski as an extension of platform capitalism6. It strengthens Elsevier’s position as owner of scholarly infrastructure, provides the company with potential models for new services and organizes digital labor to enrich the data it already owns – all that while institutions continue to pay huge sums for access to its publications, in exchange for the “liberation” of a few thousand open access articles which will of course drive web traffic to its servers. Maybe the new services will never see the light of day and this agreement will just be another Publish & Read. But if not, Faustus will not only have increased his dependence on the publisher, but will have empowered it to the point where it becomes the real information provider in their relationship, publications being reduced to “raw data”.
Olsson, L., Lindelöw, C. H., Österlund, L., & Jakobsson, F. (2020). Cancelling with the world’s largest scholarly publisher: lessons from the Swedish experience of having no access to Elsevier. Insights, 33(1), 13. DOI: http://doi.org/10.1629/uksg.507 [↩]
EDIT: part of the rise could also be attributed to the inclusion of new Dutch institutions in the agreement [↩]
On a side note: it remains unclear whether article metadata will be released under a CC0 license in Crossref, continuing or not Elsevier’s anti-open-citations policy [↩]
This post should not have come into existence. In fact, for a long time, “contracts” and “agreements” between publishers and higher education and research consortia have not only been proprietary texts, but were filled with confidentiality clauses that prevented them from being disclosed. This culture of secrecy is still there, as the agreement between Springer and DEAL states on its 45th page1:
“Disclosure of agreement: It is Publisher’s position that the terms of this Agreement are proprietary, however the Parties have agreed in this case that the Agreement is placed under a Creative Commons CC-BY-ND 4.0 license and may be made public under this license.”
Indeed, the pursuit of transparency accompanying the open access movement has led in recent years to the disclosure of these contracts, highlighting the very large financial sums involved in accessing the scientific literature2. But beyond the figures, the nature of the contracts and their concrete provisions are little discussed outside limited circles, notably in library & information science3.
The purpose of this post is therefore to propose a first analysis of the structure of this agreement, before focusing on its financial part, the most original one, which is supposed to drive the transition to open access. But first we need to describe the two partners to the agreement. On the publisher side, we have Springer, or rather Springer Nature Customer Service Center GmbH. In practice, this means an entity that covers not only Springer and Nature publications, but also BioMed Central and Palgrave Macmillan, i.e. more than 2,800 journals. On the customer side, it is a bit more complicated: the negotiator is an intermediary, MPDL Services GmbH, which acts on behalf of Projekt DEAL, a consortium initiated by the Alliance of German Science Organizations to negotiate nationwide transformative “publish and read” agreements with the largest commercial publishers of scholarly journals. The consortium structure therefore complicates the terms of the agreement, with Eligible Institutions that can become Members with associated rights and duties.
Before entering into the details of the agreement, it is important to note how much the writing itself shows intensive interpretative work on its terms. As in any contract, the key terms are of course defined: “eligible articles”, “publishing services” or “open access license”, among many others. But one also finds in the agreement no fewer than 18 occurrences of “For the avoidance of doubt” and 48 of “For clarity” – redundancies aimed at limiting the ambivalence of written propositions and injunctions, and hints at the carefulness of both parties to limit the risks generated by the agreement.
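Such counts are easy to reproduce. Assuming a plain-text extraction of the agreement – the deal_agreement.txt filename below is hypothetical, e.g. produced with pdftotext – a few lines of Python suffice:

```python
# Count the hedging phrases in a plain-text extraction of the agreement.
# "deal_agreement.txt" is a hypothetical local file name.
import re

with open("deal_agreement.txt", encoding="utf-8") as f:
    text = f.read().lower()

for phrase in ("for the avoidance of doubt", "for clarity"):
    count = len(re.findall(re.escape(phrase), text))
    print(f"{phrase!r}: {count} occurrences")
```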
From a simple preamble to a complex folded agreement
At first, things seem really simple, as the preamble states the common aim of the two organizations. In fact, they share the goal of a rise of Open Access publications in the BOAI sense, with its known advantages, and underline the scope of this agreement compared to previous ones.
“The parties enter this contract with the goal to enable open access publishing of articles from German-funded researchers in Springer Nature journals, to make these articles available to the public worldwide, and to provide access for German-funded researchers to most of Springer Nature content. At time of signing, the contract becomes the world’s largest transformative open access agreement, making it possible for over 13,000 articles annually from German-funded researchers to be made immediately available Open Access for use and reuse from the moment of publication, bringing the benefits of maximum visibility, increased usage and citations, and greater and broader impact to researchers across Germany.”
Yet the summary of the agreement depicts a complex set of successive services, which highlights the concrete constraints of a “Publish and Read” agreement for such a large consortium. The actual starting date of the agreement is far away, since the institutions have in practice several months to adhere to the terms of the contract and to put in place the infrastructures necessary to carry it out. It is only from August 2020 that centralized funding for open access publishing will really kick in. However, researchers from affiliated institutions can already access Springer content. This paradox is resolved if one considers that the R&P agreement is in fact one contract overlaying four contracts between the parties, named as follows:
Fully Open Access Publishing
Hybrid Publishing
DEAL Journal Archives
Reading Access
Let’s start by looking at the last two, which are the simplest in financial terms. Reading Access (p. 31-41) defines the conditions of access to Springer’s content and provides for cases in which this service is discontinued – in particular non-payment in connection with the other components – but does not itself contain any financial elements. Reading is therefore provided free of charge to researchers at the member institutions of the DEAL project, as this deal really is a “Publish & Read”. The “DEAL Journal Archives” (p. 27-30) is charged, but for a fixed sum of €3.75 million. It allows the “upgrading” of all the institutions on the journals’ legacy content, a little over 3 million articles, and the constitution of a “dark archive” that can be used during and after the contract.
Still, there are some interesting articles in these parts, for example the fact that DEAL can tell Springer to cease reading access for Member institutions if these institutions fail to pay the DEAL operating entity (p. 32). We can also read that the English-language agreement is the one that prevails (p. 40); considering that both parties are German and that German law applies, with jurisdiction in Heidelberg in case of disagreement, this is quite intriguing. Finally, at the opposite of the philosophy of Open Access, there are very strong limitations on the uses of the Archive and of current content: access, download and very strict usage in academic courses. In particular, text and data mining for a given Member institution is only authorized after an addendum is signed (p. 34). It is therefore clear that the already closed content remains paywalled and that the transformative intent applies only to future publications.
Controlled Gambling on future open access publishing
But how can this transformative aspect be translated into a contract? As we shall see, there is a form of gambling – within certain limits – carried out by both parties in the two contracts at the heart of the scheme: Fully Open Access Publishing (p. 7-14) and Hybrid Publishing (p. 15-26). The first has become quite standard – and very close to the contract signed by DEAL with Wiley at the beginning of 2019. It is a centralized payment system with corresponding-author recognition and verification, sharing of metadata and financial reporting, all in exchange for some deduction on the price of APCs (p. 14):
“For the purposes of calculation of the APC Rates, the list price increases for any Article Processing Charges under these Product Terms will not exceed 3.5% per journal title per year (“Cap”); increases will be calculated based on the 2020 list price. For BMC and certain other Springer titles which are included in the Open Access Journals, Publisher will apply in addition to the Cap a 20% discount, the journals being eligible for such discount will be identified accordingly in the DEAL Journal List.”
Price control is therefore very limited: although the reduction on the “public price” is not negligible, it can quickly be offset by the foreseeable inflation of the full OA APC costs charged by Springer. On the one hand, price rises at the 3.5% limit are almost certain, given the “natural” evolution of APC prices; on the other hand, the current insensitivity to APC prices leads us to predict that the number of articles published in full OA journals will increase4. But this is precisely Springer’s gamble in signing this type of deal: quickly making up in article volume what it concedes in unit price. And this gamble is all the bigger here, given that its other source of income, under the Hybrid Publishing agreement, may fall in 2021, 2022 or 2023.
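In figures – a minimal sketch, not the contract’s own formula: I assume here that the 3.5% cap compounds from the 2020 list price (the clause could also be read as a fixed 2020 base), and the €2,000 starting APC is hypothetical:

```python
# Sketch of the Fully OA APC cap: list-price increases limited to 3.5% per
# journal per year from the 2020 list price, with an extra 20% discount for
# the eligible BMC and Springer titles. Compounding is my assumption.

def max_apc(list_price_2020: float, year: int, discounted: bool = False) -> float:
    """Highest APC chargeable in a given year under the cap (in euros)."""
    capped = list_price_2020 * 1.035 ** (year - 2020)
    return capped * 0.80 if discounted else capped

print(round(max_apc(2000, 2023)))        # 2217 - the cap alone
print(round(max_apc(2000, 2023, True)))  # 1774 - cap plus 20% discount
```

Even under the cap, a roughly 11% price rise over three years is available to Springer before any growth in article volume is counted.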
That is the biggest surprise of this Springer-DEAL agreement. Reading the announcement of the agreement on January 9, 2020, one would have thought that this part of the deal would once again be a copy of the Wiley agreement. Indeed, the fee5 of €2,750 for any research article in a hybrid journal published by Springer – agreed without limit with Wiley – was the figure communicated6. However, a very different expenditure scheme was accepted by both parties (p. 25), represented in the following image.
For the year 2020, the amount is based on a “Reference Value” (RF), the product of the number of articles estimated to be published by €2,750, that is €26,125,0007. The RF does not move during the contract and thus looks very much like a “subscription price” from Springer’s point of view. Nevertheless, the real price paid is complex and only partly takes into account the actual number of articles published. In 2020, the minimum invoice is the RF; if more articles are published, the price can go up to 5% more. In 2021, the minimum is 95% of the RF and the maximum 10% above it; in 2022, 85% and up to 20%; and finally, at DEAL’s option for 2023, 75% and up to 30%.
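A minimal sketch of this corridor, as I read it (the €26,125,000 Reference Value implies roughly 9,500 estimated articles at €2,750 each; the floor and ceiling fractions are those listed above):

```python
# PAR-fee corridor: the invoice follows actual volume at EUR 2,750 per
# article, but is clamped between a floor and a ceiling defined around
# the fixed Reference Value (RF), as described on p. 25.

PAR_FEE = 2_750
RF = 9_500 * PAR_FEE  # EUR 26,125,000

CORRIDOR = {
    2020: (1.00, 1.05),
    2021: (0.95, 1.10),
    2022: (0.85, 1.20),
    2023: (0.75, 1.30),  # only if DEAL exercises the option
}

def invoice(year: int, published_articles: int) -> int:
    """Amount actually invoiced for the year, in euros."""
    floor, ceiling = CORRIDOR[year]
    raw = published_articles * PAR_FEE
    return int(min(max(raw, floor * RF), ceiling * RF))

print(invoice(2022, 5_000))   # 22206250 - the floor, despite low output
print(invoice(2022, 15_000))  # 31350000 - the ceiling, despite high output
```

On these numbers, the 2022 invoice is bounded between about €22.2 million and €31.35 million whatever the actual output – the 15% downside and 20% upside discussed below.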
On the upper side of the RF, from Springer’s point of view, the risk is to publish “too many” Hybrid OA articles: in such a situation, it would “miss” some revenue which would hypothetically have been generated by individual “Open Choice” APCs. From DEAL’s point of view, it is literally an insurance against the growing costs generated by the capture of publications by Springer journals8.
“For the avoidance of doubt, Publisher will continue to publish Eligible articles even if the Upper Threshold is met or exceeded. Publisher will never charge any part of the Calculated Total PAR Fee exceeding the Upper Threshold, irrespective of the actual Calculated Total PAR Fee and/or number of Published Articles.”
If we now look at the other side of the RF, the roles are reversed: the minimum invoice is an insurance for Springer that, if for whatever reason German authors don’t use the agreement to publish in Hybrid OA, it still gets some value back now that reading is free of direct charge. From DEAL’s point of view, there is the risk of “paying for nothing”, and it could be an incentive to push researchers towards Hybrid OA as it is “already paid for”, rather than choosing the Full OA road, discounted but limitless as far as costs are concerned.
How transformative is the DEAL deal?
We can point to four potential or actual transformations arising from the agreement, which runs until the end of 2022 with an option at the discretion of DEAL for 2023. First, obviously, there is the construction of a demanding workflow to regulate all the exchanges of authorship, institutional and financial information, not only between Springer and DEAL, but also between the DEAL operating entity and the Member institutions. Indeed, as with other Publish & Read contracts, the sums actually paid by research-intensive institutions will be much higher than in the past and, conversely, more teaching- or practice-oriented institutions will pay less. What is the cost of such a workflow for both entities? Is it easily scalable to other publishers/consortia? How will institutions react to their growing costs?
Second, this agreement raises the issue of researchers’ enrolment in open access publishing, even if the money does not seem to come from their own pockets or grants in this case9. Will they agree to publish in hybrid OA? Will they, on the other hand, remain insensitive to the total cost of APCs? Will they assume the position of corresponding author more often than their foreign colleagues? What will be the associated institutional policies: an obligation to publish in open access or, on the contrary, a logic of individual choice? Answering these questions will make it possible to observe whether open access is indeed becoming the norm for German researchers publishing with Springer.
Third, in direct connection with the previous transformation, the parties took calculated risks by signing this agreement. Springer may see its revenues fall by between 15% and 20% in 2022 (APC discount at constant volume, minimum Hybrid Publishing price) in the event of failure with researchers, workflow problems or major disagreements within DEAL. Symmetrically, DEAL members risk a significant increase in the total price, with a maximum 20% rise in the Hybrid Publishing price and an explosion of full OA APCs if production moves to these journals. Transformation at constant cost, because there is “enough money in the system” as OA2020 stated in 2015, is therefore not at all guaranteed.
Finally, there remains the question of the state of things at the end of the contract. If all goes well in their view, DEAL will exercise the 2023 option, but what happens beyond that? And if they don’t, what will be their negotiating power? Will Springer be happy if both OA deals don’t meet with enough success to maintain its current profits? Will the “flagship journal” provision of the Wiley agreement be used to put some competitive pressure on Springer? Will Springer journals still be predominantly hybrid journals? Will the Coalition S ultimatum on the end of funding for APCs in this type of journal in 2024 be credible? There is nothing in the agreement to answer these questions, and in particular there is no commitment from Springer to flip its journals then. So, contrary to the recent ACM Open Model, this agreement does not constitute an irreversible transformation to open access. If things go south, subscriptions could be back at the very heart of the next agreement.
The agreement is available on the Projekt DEAL dedicated webpage with its own DOI. Announced at the beginning of the year by both parties, the full agreement was discreetly added in mid-February. Thanks to Quentin Dufour for flagging this document [↩]
According to this presentation by the European University Association, more than one billion euros a year for its members, including 700 million for journals [↩]
Typically the section “business models” of the Scholarly Kitchen website. [↩]
Technically, it is not an APC as stated in the FAQ page: “different from an Article Processing Charge (APC), the PAR fee, paid centrally by participating institutions for each article to appear under the DEAL agreement, covers the cost of the open access publishing services rendered and, to a lesser degree, reading access in Springer Nature subscription journals.” [↩]
In the Wiley deal, if I understood it correctly, the baseline payment is guaranteed, unless it is shown that Wiley technically limits the actual publication of Hybrid OA; but there is no upper limit on the payment of €2,750 per article [↩]
I do not go into detail here about the type of article and in particular “Non Research Articles”, the price of which is €917 [↩]
Notably by the shift of corresponding author from a foreign researcher to a German one. [↩]
The actual source of money for APCs is not addressed at all in the contract, it is probably part of DEAL’s internal financial mechanics which are not public to my knowledge [↩]
This blog is part of a vast research program on the political economy of scientific publication, which has been strongly transformed over the last twenty years by the electronic dissemination of journals. It considers publishers, editorial committees and journals as socio-political actors to be studied in three complementary aspects detailed below.
Firstly, they are analysed as economic actors defining publishing markets. The conditions under which these markets were created have been the subject of much criticism, and strong transnational mobilisations around open access have been deployed, which have influenced the construction of public policies that contrast internationally. New economic models have emerged, of which direct payment by authors (APC) is only the most visible, but not the most frequent. The multiplication of coloured labels (Green, Gold, Platinum, Bronze, Diamond) to designate these models does not fully account for their subtle differences, nor for the sustainability of the associated business models, compared to the classic subscription model which has led to a “serials crisis” over the last 20 years, with the massive increase in the cost of access to publications for libraries.
Secondly, journals and publishers are studied as places of production, including innovations in evaluation technologies (open peer review, technical-soundness-based review…). In particular, it is the growing debate on post-publication peer review policies, including the withdrawal of articles, that will be examined, as well as the emergence of platforms for the public discussion of their validity, such as PubPeer. The question of the centrality of journals to peer review, or of their marginalization (overlay journals, recommendations…), will also be addressed.
Thirdly, journals are treated as places of valorisation, seeking to attract authors and promote their position through the use of different measures (citation, referencing, uses…), which they highlight or criticise. In addition to the recurring debates on the Journal Impact Factor, a measure that is currently much decried, there will be discussions on alternative metrics, or even on responsible metrics, which are supposed to better represent academic production and its uses.
These three aspects aim in particular at shedding light on new forms of self-regulation by academic actors (systematisation of advertising the withdrawal of articles, generalisation of post-publication peer review, stigmatisation of predatory publishers, uses of Creative Commons licenses…), on the innovative and argumentative work of publishers and platforms, whether public, para-public or private, and on the redefinition of public policies in the field of academic publication.