
Dealing with the grey zone of publishing or… how I will never be an editorial board member of MDPI Publications.

The “predatory publisher” category raises more questions than answers. Just like “academic fraud”, it tends to validate a black & white world in which rules and norms are clear-cut and universally shared through time, disciplines and countries. There is now an extensive literature presenting lists, criteria and even automatic detection for such publishers or their journals, most of it being written without questioning the label “predatory”1. More interestingly, there are a few papers describing the point of view of authors publishing in such vilified outputs, showing both the deceptions performed by the publisher and the good faith of most authors2.

Such results could be downplayed on the grounds of the authors’ peripheral position or the low statistical power of the studies. On the contrary, everyday stories show that separating the wheat from the chaff is rather complex, because a huge and diverse “grey zone” exists, even for scholars well versed in the arcana of publishing. This post aims to describe such an example by making the story personal rather than abstract, using testimonies, personal opinions and statements. In this world, the choice to review, write or edit for a given journal or publisher remains tricky, based on existing alternatives, personal ethics and situated decisions.

It’s become an almost daily ritual: an invitation to present at a conference, to submit a paper to a journal, or even to join an editorial committee, sent by people you don’t know, with a vague personalisation of the message through your name and the reproduction of the title of an article you’ve written, with zero relevance to the issuing conference or journal. On August 9th, 2023, I received one of these messages, entitled: “[Publications] (ISSN 2304-6775) Invitation to Serve as an Editorial Board Member”. It caught my attention for three reasons: firstly, it’s the time of year with the lowest e-mail volume; secondly, an apparent MDPI account was copied on the mailing; and thirdly, I know this journal, I found some of its articles relevant or enlightening, and others cite some of my work (which is not a sign of quality but could explain this invitation). So let’s read it again:

We would like to invite you to join the Editorial Board of Publications (ISSN 2304-6775, https://www.mdpi.com/journal/publications). Publications is open access and peer-reviewed, covering all aspects of scholarly publishing and communication. You can find the proposed scope of the journal here: https://www.mdpi.com/journal/publications/about/.
Publications is abstracted and indexed by Scopus, ESCI, DOAJ.

This presentation is not typical of “predatory” emails, which begin by flattering the recipient of the message, emphasising the importance of their work and what they can contribute to the journal, while at the same time inviting them to join the editorial board and submit a manuscript.

Editorial Board Members will be responsible for final decisions on manuscripts in their field of expertise and may be invited to review manuscripts. The initial term lasts for 2 years, and entails:
• Pre-screening and making decisions on new submissions related to your research interests;
• Providing input or feedback regarding journal policies;
• Helping to promote the journal among your peers or at conferences;
• Attending Board Meetings to suggest journal development strategies;
• Reviewing manuscripts;
• Helping to attract suitable expert authors.

The job description for editorial board members is typical of a journal owned by a commercial publisher: in a nutshell, you run the journal3, you promote it without being the decision-maker on its policies and, of course, without compensation, except in what many colleagues call a prestige economy (for example, Tennant, Jonathan P., et al. “Ten hot topics around scholarly publishing.” Publications 7.2 (2019): 34, https://doi.org/10.3390/publications7020034).

If you accept our invitation, please provide your contact information and a list of keywords reflecting your expertise, in accordance with the entry examples at https://www.mdpi.com/journal/publications/editors/. If possible, please also send, for our records, a CV or an official website with your biographical data (including a list of your publications). If this is of interest, but comes at an inopportune time, you may have a recommendation for a senior expert to serve as an Editorial Board member. If you have any questions, suggestions or recommendations, please let us know.

This detailed procedure is another hint pointing towards a genuine invitation to the committee of a standard commercial publisher journal, which is rarely considered a “grey zone” output. Nevertheless, there is another surprising piece of information about the ‘benefits’ attached to the position of editorial board member. I don’t know whether it is a standard practice in APC-based journals4. Yet this three-line paragraph looks extremely problematic to me, for three cumulative reasons5. Once again, others could consider it completely benign, because of shared ethics and practices that would defuse everything that follows.

Additionally, you are welcome to publish with the journal—this will be free of charge once accepted for publication. The term for the Editorial Board membership lasts for two years and can be renewed.

This paragraph alone could have made me cringe and refuse the invitation. But it also comes from a journal and a publisher that keep piling up greyish stories.

At the end of the last century, a Chinese chemist working in Switzerland invented a preservation practice: depositing a sample of the compounds associated with an article. To do so, he founded the Molecular Diversity Preservation International (MDPI). He then created a journal, Molecules, published by Springer, became its editor-in-chief and explained his aim in the journal’s first editorial6.


Norwegian interview: https://www.khrono.no/truer-med-a-flykte-fra-tidsskrift-etter-at-redaktoren-ble-kastet/794389

  1. On the geopolitical consequences of that position, see Taşkın, Zehra, Franciszek Krawczyk, and Emanuel Kulczycki. “Are papers published in predatory journals worthless? A geopolitical dimension revealed by content-based analysis of citations.” Quantitative Science Studies 4.1 (2023): 44-67, https://doi.org/10.1162/qss_a_00242 []
  2. Boukacem-Zeghmouri, Chérifa, Lucas Pergola, and Hugo Castaneda. “Profiles, motives and experiences of authors publishing in predatory journals: OMICS as a case study.” (2023) []
  3. This is a theoretical division; actual tasks performed are another story. See, on the Diamond journals case, Dufour, Quentin, David Pontille, and Didier Torny. “Supporting Diamond Open Access journals.” Nordic Journal of Library and Information Studies 4.2 (2023): 35-55, https://doi.org/10.7146/njlis.v4i2.140344 []
  4. To my knowledge there is no data on these policies towards editors, except on the question of paid editorial board members, especially in biomedicine []
  5. Teixeira da Silva, Jaime A. “The Conceptual ‘APC Ring’: Is There a Risk of APC-Driven Guest Authorship, and Is a Change in the Culture of the APC Needed?” Journal of Scholarly Publishing 55.3 (2024): 404-425, https://doi.org/10.3138/jsp-2023-0060 []
  6. Lin, SK. Editorial: A Good Yield and a High Standard. Molecules 1, 1–2 (1996). https://doi.org/10.1007/s007830050001 []

Soon, the relationship with the publisher turned sour, and the chemist, Shu-Kun Lin, decided to add to MDPI operations the publication of a journal entitled… Molecules. Springer threatened to sue him, claiming it owned the title, but finally did not. The lone rebel quickly developed a successful business formula, sometimes labelled “low-cost full open access journals”. It was a striking example of the “new journals” that the open access activists pushed for in the BOAI. Later, MDPI was rebranded the Multidisciplinary Digital Publishing Institute and became one of the top 5 academic publishers as far as volume is concerned, with most journal titles being as generic as possible, from Acoustics to Youth, by way of Genes, Physics, Societies and Software. So the initial declaration of independence made by one academic grew into a worldwide success story, but it went along with a dubious reputation and negative narratives.

Let us give a few examples of these narratives. A neurosurgeon received a review request from the Journal of Clinical Medicine and responded within two days, very negatively. For him, major methodological problems prevented publication, notably discrepancies between the described protocol and the actual clinical trial. Two days after sending his review, he received a request to review the revised article. This new manuscript was very different from the first:
“The manuscript had indeed undergone extensive revisions. The biggest change, however, was also the biggest red flag. Without any explanation the study had lost almost 20% of its participants. An additional problem was that all the issues I had raised in my previous review report remained unaddressed. I sent my newly written feedback report the same day, exactly one week after my initial rejection.”

Despite these problems, two other reviewers had accepted the manuscript. The full account of this reviewer experience at MDPI is at https://deevybee.blogspot.com/2024/08/guest-post-my-experience-as-reviewer.html.

As for editors’ incentives:

“Editors get one performance point for every published manuscript they handle, but only half of a point for rejecting a manuscript. Staffers who reach a certain number of points get a monthly bonus.” (Young employee’s death puts workplace culture in spotlight at publisher MDPI, Retraction Watch, 22nd October 2024)

Who wins after a divorce?… or how to interpret the new DEAL-Elsevier agreement

Imagine that you are a young researcher in Germany, having started your thesis in September 2018. For the last 5 years, you have had no legal access to articles published by the world’s largest publisher, Elsevier. Your institution has saved hundreds of thousands or even millions of euros, but you don’t really know where that money has gone. By contrast, on a day-to-day basis, then as a PhD student, now as a post-doc, you tinker with your access by writing to authors, asking your colleagues abroad if they can send you this article, requesting your library to buy that crucial paper, scanning preprints, using the Unpaywall button or, late at night from home, typing the full combination of letters and signs to reach the platform whose name you must never utter or write.

To my knowledge, this divorce between a major publisher and a national consortium, DEAL, followed by a reconciliation, has been the longest for a very rich country. This post analyses how the separation happened, what is known of the long period of divorce during which no German institution had a subscription to ScienceDirect, and finally the reconciliation agreement published on September 6th, 2023 and validated in January 2024.

From harsh talks to full divorce (2016-2018)

Indeed, it was not for lack of money that DEAL did not sign with Elsevier, but because the conditions for signing were not met. By contrast, reading DEAL’s agreement with Springer Nature, analysed at length 3 years ago, shows what was expected: an agreement including subscription and open access publication, all at a cost deemed reasonable by the German consortium. So how did they get to a “no deal”? As often when trying to rely on past information from institutional sites with changing policies, I should note that most documents cited below have disappeared from the DEAL website and are therefore cited from captures made by the Internet Archive.

“At Elsevier, serving research is our paramount goal. We have therefore chosen to continue providing access to Elsevier journals for dozens of German institutions that cancelled their individual subscriptions at the end of 2016. They did so anticipating that a new Germany-wide license agreement would be in place by January this year, which we regret so far has not been achievable. We strongly believe that access to high-quality research is important for German science. The continuing access for the affected institutions will be in place while good-faith discussions about a nationwide contract carry on. This reflects our support for German research and our expectation that an agreement can be reached.”1

I hope one day some colleagues will systematically study the rhetoric of big publishers’ PR. Anyway, the one above is typical of a service industry that would have us believe its aims are totally aligned with those of its clients. Imagine the reverse situation, where DEAL would state: “At DEAL, assuring service providers’ profit is our paramount goal…”. Back to our main topic: the unconditional reconnection decided by Elsevier is not unusual: at the same time, it happened for example for Taiwanese institutions in a similar situation2. But Elsevier’s hopes for a soon-to-be-signed new German agreement would not be fulfilled. Indeed, after these back and forths, the negotiations stalled, leading to a full divorce by mid-2018, as stated by the German Rectors’ Conference, which had “no choice”:

“The excessive demands put forward by Elsevier have left us with no choice but to suspend negotiations between the publisher and the DEAL project set up by the Alliance of Science Organisations in Germany.” That was the verdict of the lead negotiator and spokesperson for the DEAL Project Steering Committee, Prof Dr Horst Hippler, the President of the German Rectors’ Conference, speaking in Bonn, where the last discussion took place this week.3

At this point, we should note that all cited documents are written in English, while negotiations surely happened in German. DEAL had the clear intention of making its moves very public and widely known beyond the Federal German space and Mitteleuropa.

Learning to work without simple legal access (2019-2022)

So Elsevier pulled the plug in July 2018 and everything went quiet after almost two years of turmoil. That was not a given: one could have expected protest letters, petitions or lobbying from unsatisfied rank-and-file researchers to multiply as a whole nation of scientists was cut off from at least a fifth of the published literature. To lift the veil on the actual frustrations and losses resulting from the switch-off, it was… Elsevier that commissioned a survey in the summer of 2019, the summary results of which can still be seen on the pages of one news agency.

Most German researchers agreed that losing access to ScienceDirect made their research activities less efficient (61%) and delayed the production of research output (54%). High-quality research further requires access to current, international research results. Indeed, the survey shows that 49% of the scientists surveyed believed that the lack of access to new research findings led researchers to miss current developments or to become aware of them only with a delay, and 44% of respondents feared that this would have a negative impact on the quality of their research. All in all, 84% of the researchers surveyed thought ScienceDirect was important or somewhat important, while 76% supported or strongly supported the restoration of full access to ScienceDirect in Germany.

Of course, no raw data has been published and the study itself has not been shared beyond this PR. Nevertheless, in the body of the text, Elsevier mentions another ‘independent’ study carried out by the University of Münster. Like the previous one, this is not an actual academic study but a library survey, published only on their blog, in German. Despite its limitations (size, a single institution), it presents some interesting, and most probably unique, results on the representations of German researchers one year after the cut. In particular, the following graph should be highlighted:

Extract from the Universität Münster survey, whose results are presented here (in German).

The orange answers indicate respondents’ agreement, and the statements have been ranked in descending order of positive responses. They show a mixed picture in terms of opinions, both across the population as a whole and, for many respondents, from one question to the other. Two-thirds (66%) agreed with the statement “I need more time to get the literature”, and 58% thought that the right thing to do was to put pressure on Elsevier until it gave in, which was also the option with the fewest disagreeing votes (5%). That does not imply support for the shut-off: in fact, 55% agreed that “No deal is no option – negotiations should be resumed as soon as possible”, and 46% that the lack of access was “a serious competitive disadvantage”.

While 43% agreed that “Elsevier as a profit-orientated company would only harm science”, and only 11% disagreed, only 29% would “refrain from writing or reviewing articles for Elsevier journals”, against 40% who would still do so. After some questions on the importance of Elsevier journals and the use of the spared funds, the last question shows another divisive view on the resumption of negotiations, with only 16% in favour of it – which of course was not addressed in the Elsevier PR mentioned above.

These two surveys are the only public manifestations of a debate in Germany during this period. If opinions remained largely out of public view, what about practices? Did the impossibility of immediate legal reading actually have an impact on the way German academics write, on their choice to publish in Elsevier journals, or on their productivity? To my knowledge, and through extensive use of Matilda, only two academic articles have addressed these issues. The first is counterfactual, in that it looks at the behaviour of authors affiliated in Germany, in chemistry, for Springer and Wiley, with which DEAL had signed an agreement. Published in 2021 in economics, it only considers the first year of the agreement (2020), in comparison with the previous period and with a control group with no agreement of this type. Nevertheless, the authors already measure some effect:

“researchers’ submission behavior in the field of chemistry has changed to some degree, as eligible researchers have increased their publications in Wiley and Springer Nature journals at the cost of other journals. While the effect is not overly large yet, it is statistically significant, and it may increase over time, as the agreements become even more well-known among scientists. Hence, journals covered by the DEAL agreements appear to have a competitive advantage in attracting authors”.4

If signed agreements raise attractiveness, then unsigned ones should diminish it. The second study deals with the latter by considering the evolution of the publication and referencing activities of the whole population of German authors in Elsevier journals, with no control group. Published in 2023 in scientometrics, it is based on more than 400,000 articles and more than 33M references:

“We also observe year-on-year decreases in the proportion of citations, although the decrease is smaller. We conclude that negotiations with Elsevier and access restrictions have led to some reduced willingness to publish in Elsevier journals, but that researchers are not strongly affected in their ability to cite Elsevier articles, implying that researchers use other methods to access scientific literature.”5

The two studies therefore show that the structure of publications is affected by the agreements, whether signed or not, but only marginally, at least over a short period. Furthermore, reading seems to be remarkably unaffected by the lack of legal and rapid access to the literature. To enable simple and legal reading, it is likely that other internal work was produced by the consortium or that self-support systems were put in place, similar to what the Swedish libraries deployed during their own breakup with Elsevier6. Beyond this study, there is anecdotal evidence given by colleagues, but also an interview with a member of the negotiation team, Dr. Bernhard Mittermaier, head of Forschungszentrum Jülich’s Central Library, which tends to show that they were following the rate of publications:

“The option to publish with Elsevier was not affected. Some scientists, however, asked me whether a publishing boycott would make sense in view of the fact that many editors from Germany – including Prof. Wolfgang Marquardt – had discontinued their work for the publisher with reference to the stalled DEAL negotiations. In fact, Elsevier’s share of all Jülich publications decreased from 26 % in 2018 to 18 % in 2022. Across Germany, there was a decline from 19 to 15 %. This may also be a reason why Elsevier returned to the negotiating table.”

In the end, it is reasonable to consider that German researchers adapted to a life without ScienceDirect over the long term, still reading articles published by Elsevier, but publishing less in journals disseminated by it. What the French and British did not dare to attempt after lengthy negotiations, the Germans did, with very substantial savings and a diminished dependence on the biggest commercial publisher. But what happens afterwards, when the time comes for one or other of them to consider recontracting?

Dealing again… on different terms (2023-2024)

2023 began, as in previous years, without ScienceDirect for German researchers. Im Westen nichts Neues, as a fellow economist lamented:


Bartosz Bartkowski tweet

In fact, Elsevier had returned to the negotiating table in autumn 2022 and, after a four-year drought, seemed ready to make concessions that would have been unthinkable four years earlier. The negotiations took place behind closed doors, until the sudden announcement of their success at the beginning of September 2023, followed by the publication of the contract itself. Let’s dive into it, as DEAL has always been transparent about its agreements (nice PDF, full text and monetary information…), published under a CC-BY-ND license7.

We will not delve into the details of the usual characteristics of this type of agreement (definition of the parties, services expected, users authorised to read, corresponding author limitations, etc.), but will instead focus on the most central elements and on some unique features compared to the bodies of agreements analysed elsewhere8. This agreement is a “classic” Read & Publish, which includes in its core payment articles published in hybrid journals, but not articles in full open access journals, for which the fee is simply reduced by 15% or 20%. It also includes a back catalogue upgrade for all institutions, at a total cost of €10m. It is a “pay as you publish” agreement, with a PAR fee for each article, depending on whether it is in a “regular journal” (2,500 €) or a Cell Press/The Lancet journal (6,450 €), with an inflation rate of 3% and 4% respectively9.
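As an aside, the fee schedule is easy to make concrete. A minimal sketch, assuming (the agreement text quoted here does not say so explicitly) that inflation compounds from the first contract year:

```python
# Minimal sketch of the "pay as you publish" (PAR) fee schedule quoted above.
# Base fees (2,500 € / 6,450 €) and inflation rates (3% / 4%) are from the
# agreement as cited; compounding from contract year 1 is an assumption.

def par_fee(journal_type: str, contract_year: int) -> float:
    """Assumed PAR fee in euros for an article accepted in a given contract year (1-based)."""
    base = {"regular": 2_500.0, "cell_lancet": 6_450.0}[journal_type]
    rate = {"regular": 0.03, "cell_lancet": 0.04}[journal_type]
    return base * (1 + rate) ** (contract_year - 1)

# A regular hybrid journal vs a Cell Press/The Lancet title, in year 3:
print(round(par_fee("regular", 3), 2))      # 2652.25
print(round(par_fee("cell_lancet", 3), 2))  # 6976.32
```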

This payment model has two consequences that are quite specific to this agreement. Firstly, with the exception of the back catalogue, institutions have no front money to commit. Whereas in the past some agreements offered “tokens” or “waivers” for publication, the opposite is now true: you only start to pay after publication. Secondly, this provision could encourage free riding: as with almost all agreements of this type, the corresponding author is offered, as a priority, the option to publish in open access under the CC-BY licence, but he or she can refuse. There is, however, a provision in the contract that makes an organised refusal of open access pointless, since all publications are counted and charged alike:

“For the avoidance of doubt, the applicable PAR fee for Core Hybrid journals for the year of the acceptance date will be applied to both open access and subscription articles in these journals and to subscription articles published in Cell Press and The Lancet journals.”

So, despite the diminishing share of articles observed during the absence of an agreement and the lack of front money, Elsevier has a certain guarantee of revenue, as 18%, 19% or 20% of the German research production will end up in one of its disseminated journals. In exchange, the company had to accept very harsh conditions on the data generated by German users. A full page of the agreement (section 7.6) is dedicated to Data Privacy, with reminders of legal provisions derived from the European GDPR regulation. DEAL and Elsevier will co-supervise the whole data processing, the latter refraining from using any personal data without the consent of users. On this point, a loophole was anticipated by forbidding any general opt-in device: German colleagues will be able to fully use ScienceDirect without signing any consent. The matter is so sensitive that a workshop is planned during the first year of the contract on automatically erasing part of the IP addresses when they are not located in professional settings.
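The agreement does not spell out the erasure scheme, but the usual GDPR-style approach is to truncate the host part of the address. A minimal sketch, assuming IPv4 and a /24 mask (both are my illustrative choices, not contract terms):

```python
# Minimal sketch of IP truncation as commonly used for GDPR compliance.
# The agreement's actual erasure scheme is not public; zeroing the last
# octet (a /24 mask) is an illustrative assumption.
import ipaddress

def anonymize_ipv4(ip: str) -> str:
    addr = ipaddress.IPv4Address(ip)
    # Zero the last 8 bits so the stored address no longer identifies a single machine.
    return str(ipaddress.IPv4Address(int(addr) & 0xFFFFFF00))

print(anonymize_ipv4("192.0.2.123"))  # -> 192.0.2.0
```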

Without doubt, Elsevier’s transformation into a data company and the growing controversy surrounding its new business models based on reselling user data10 have been closely observed in a country so keen on privacy. Still, despite these worries, DEAL signed the deal and did not include any fines in case these limits were breached11. But what about the signing by German HER institutions?

Conclusion: which savings, for which uses?

In fact, there was still a little uncertainty when the agreement was unveiled, as a four-month period was about to begin during which the institutions would each have to indicate whether they would sign the agreement. It could only be ratified if at least 70% of the institutions approved it, and fees were lower if 90% did. On 15 January 2024, DEAL announced that this second threshold had been exceeded, as “nearly all of Germany’s major universities and research institutions are now participating”. Elsevier has now joined Wiley & Springer in the DEAL family, with very similar agreements focused on hybrid open access. But what does it mean from the point of view of German HER institutions? Let’s go back to Dr. Bernhard Mittermaier’s interview, in which he talks about his own institution’s costs and the global German ones:

“Taken together, Jülich institutes will now save around € 100,000 per year on fees for hybrid open access that were previously paid to Elsevier. For Forschungszentrum Jülich as a company, the costs for Elsevier will even decrease by about 40 % compared to the former agreement, assuming publication figures remain the same. This corresponds to about € 300,000 per year that can be saved compared to 2018, the last year of our previous agreement with Elsevier. Elsevier’s fees per article are now much lower than they were in 2018 and similar to those charged by Wiley and Springer Nature. Compared to 2023, however, when hybrid open access, document delivery, and pay-per-view each cost around € 100,000, additional expenditure of € 200,000 will now be incurred.”

Let’s try to do the math (which does not add up) based on that paragraph, in the following table, with three reference points: the last year of the former (local) agreement, the shut-off period, and the first year of the new agreement.

Expenditures/Year    2018                 2021        2024
Total                500,000–600,000€     100,000€    300,000€
Forschungszentrum Jülich Elsevier expenditures.

The previous total cost is 500K€ if you follow the 40% reduction, and 600K€ if you add the total savings mentioned. Whatever the case, the new deal is far below the older ones, under which German institutions were known for paying “much more” than similar institutions in the Netherlands or France.
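To spell out why the math does not add up, take the quoted figures at face value; a minimal arithmetic sketch, where the 300,000 € figure for 2024 is the inference used in the table above:

```python
# Two routes from the quoted interview to the 2018 baseline disagree.
new_cost_2024 = 300_000  # € per year under the new agreement (inferred above)

# Route 1: "the costs for Elsevier will even decrease by about 40 %"
baseline_via_reduction = new_cost_2024 / (1 - 0.40)   # 500,000 €

# Route 2: "about € 300,000 per year that can be saved compared to 2018"
baseline_via_savings = new_cost_2024 + 300_000        # 600,000 €

print(baseline_via_reduction, baseline_via_savings)   # 500000.0 vs 600000
```

Let’s now project the costs nationally: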

Year            pre-2018                                 2021                                     2024
Expenditures    70M–100M€, mostly reading agreements     5–10M€ max, in hybrid OA publishing?     30–40M€, P&R agreement
Estimated Elsevier revenue for Germany.

The first figure was never made public, but I have heard estimations lying between these two bounds. The second one is very maximalistic, as OpenAPC counts between 1M€ and 1.3M€ for Elsevier in Germany for the years 2020 to 2022. The third one is based on the number of expected publications and the different fees defined in the agreement. So the savings were huge during the shutdown, and Elsevier probably lost at least 300M€ before resuming negotiations. And despite losing probably around 50% of its 2018 revenue, the company preferred to sign rather than leave almost all the money on the table.
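As a back-of-the-envelope check of that floor, assume the shut-off ran from mid-2018 to the end of 2023 (about 5.5 years) and take the low end of the ranges above:

```python
# Rough lower bound on Elsevier's lost German revenue during the shut-off.
# The ~5.5-year duration and the use of the low/high ends of the estimated
# ranges are assumptions for a conservative estimate.
years_without_deal = 5.5
annual_reading_spend_low = 70e6    # € — low end of the pre-2018 estimate
annual_hybrid_oa_spend_max = 10e6  # € — maximalist hybrid OA spend kept by Elsevier

lost_low = (annual_reading_spend_low - annual_hybrid_oa_spend_max) * years_without_deal
print(f"{lost_low / 1e6:.0f}M€")   # 330M€, consistent with "at least 300M€"
```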

While, for example, French institutions have made a major commitment to using some of the resources saved for OA initiatives, notably by replenishing the National Open Science Fund, this does not seem to be the case in Germany. The national research funder DFG has recently announced the launch of a Diamond OA publishing platform… with a maximum budget of 1.5M€ per year. I’ll let you figure out what it would have been with just 30% of the money spared. So the German HER institutions won a lot, Elsevier stalled, but the dependence on big commercial publishers has not been halted, and may even have been reinforced.

  1. Harald Boersma, Continued Elsevier access in support of German science, 13th February 2017 []
  2. Schiermeier, Q., Mega, E. Scientists in Germany, Peru and Taiwan to lose access to Elsevier journals. Nature 541, 13 (2017). https://doi.org/10.1038/nature.2016.21223 []
  3. “DEAL and Elsevier negotiations: Elsevier demands unacceptable for the academic community”, 5 July 2018, German Rectors’ Conference press release, https://web.archive.org/web/20181219162556/https://www.projekt-deal.de/elsevier-news/ []
  4. Haucap, J., Moshgbar, N., & Schmal, W. B. (2021). The impact of the German ‘DEAL’ on competition in the academic publishing market. Managerial and Decision Economics, 42(8), 2027–2049. https://doi.org/10.1002/mde.3493 []
  5. Fraser, N., Hobert, A., Jahn, N., Mayr, P., & Peters, I. (2023). No deal: German researchers’ publishing and citing behaviors after Big Deal negotiations with Elsevier. Quantitative Science Studies, 4(2), 325–352. https://doi.org/10.1162/qss_a_00255 []
  6. Olsson, Lisa, et al. “Cancelling with the world’s largest scholarly publisher: lessons from the Swedish experience of having no access to Elsevier.” Insights: the UKSG Journal 33 (2020). https://doi.org/10.1629/uksg.507 []
  7. Elsevier B.V., & MPDL Services gGmbH, Max Planck Society (2023). Projekt DEAL – Elsevier Publish and Read Agreement. doi:10.17617/2.3523659 []
  8. Quentin Dufour, David Pontille, Didier Torny. Contracter à l’heure de la publication en accès ouvert. Une analyse systématique des accords transformants. [Rapport de recherche] 206 150, CNRS; Comité pour la science ouverte. 2021, pp.81. ⟨halshs-03203560⟩ []
  9. I won’t get into some society journals excluded from the agreement, either because they won’t go hybrid or because they thought they wouldn’t get paid enough by Elsevier. On the specific question of learned societies’ journals in such deals, see The Brief: https://www.ce-strategy.com/the-brief/out-of-reach/ []
  10. Didier Torny. From paywall builders to data tracking moguls or… How the big publishers have put on a new super vilain costume. Politics of technoscientific futures, EASST, Jul 2022, Madrid, Spain. ⟨hal-03885480⟩ []
  11. Thanks to Björn Brembs for underlining this absence; see his plea for German institutions not to sign the new agreement: https://bjoern.brembs.net/2023/09/no-evilsevier-deal/ []

Matilda is finally available… or how open academic search engines are a key part of open science

Matilda homepage, 6th October 2023.

There was a time, towards the end of the 20th century, when things were simple. If you just wanted to count the publications of an author, an institution or a country, you had to refer to the databases of the Institute for Scientific Information (ISI), created and directed by Eugene Garfield. The most famous of these, the Science Citation Index, was built on the idea of selecting the most relevant journals to capture the heart of science, in the already long tradition of library science. And these core journals were deemed sufficient to draw a relevant picture of the whole of scientific content. Taking the part for the whole raised many questions about the representativeness of the journals present, data and calculation errors, and biases in favour of certain disciplines, languages and countries, but as Margaret Thatcher said about her economic world: “There is no alternative”.

25 years later, commercial competition is fierce between Clarivate’s Web of Science, Elsevier’s Scopus and (almost) Springer’s Dimensions to capture the most money available from Higher Education & Research institutions. In another world, Google Scholar (GS) has woven its web, the only corporate service without advertising or direct tracking of usage. But these systems still have their drawbacks: the commercial databases are still exclusion machines, deciding what is “searchable” among the whole literature; GS services are restricted in their uses (e.g. no massive downloads) and its sources are neither described nor open.

This is the landscape in which Matilda was created, thanks to Huma-Num and an ANR grant. If you want to know more about how it was envisioned in 2019, there is an “origins” paper1. For now, let’s get straight to the tutorial. The video below is all you need to use it: no API coding, no computer skills, just an idea of what you are searching for, whether as an academic or as someone eager to find academic sources.

Open citations at the heart, open data everywhere.

Matilda is one of the outcomes of the “open citations” movement. Originally, in 2010, it was a reference data corpus, the Open Citations Corpus (Peroni et al. 2015), before these remarkable precursors2 were joined by various organizations demanding that publishers release their Crossref citation data. The I4OC collective consequently obtained the availability, under a CC0 license, of the bulk of the Crossref citation data by default. But what to do with this pile of data? A number of tools use it, including the VOSviewer developed by Leiden University. However, the hope was that other actors would take the data and build services on this new shared resource. Like the Open Citations databases3, these tools often presuppose professional users, either experts in API manipulation or people interested in very advanced bibliometric developments. Matilda took a different approach by making the simplest tool possible.

Follow an author, track the citations of a core text in your field, search for texts with a given expression in their title, download full metadata to Zotero, download a copy of the text if it is legally available, create an alert through an RSS feed that is publicly available, share it with your team project through a Zotero group: all this and more with just a few clicks. It is free and reusable, and so are the results, because the metadata has been liberated thanks to these activists and to the collective movement that followed, including publishers.

Almost real time: always get the freshest texts

Even if there is almost no literature on how academics practically search for their sources, we assume that when they know their field, they are searching for new information, that is, texts that weren’t there yesterday but are available today. That has been the promise of many information devices, from the first academic journals to ISI Current Contents, from abstract/review journals to contemporary Scopus/WoS alerts.

Beyond openness, one of the promises of Matilda is to offer you this freshness by going to the sources, applying YOUR search keys and delivering the results to you in no time. In practice, that means that around 2 days after their creation in Crossref, RePEc, arXiv or PubMed, you will get the relevant metadata in your Zotero RSS feed. On average, around 40,000 new texts appear in Matilda, and some will probably interest you: you will discover the title, read the abstract, include it in your bibliography, while others will be rejected.
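For readers who want the alerts outside Zotero, any RSS client can consume the same feed. A minimal sketch; the feed URL below is a hypothetical placeholder for the one Matilda generates when you save a search:

```python
# Minimal sketch of consuming a Matilda RSS alert programmatically.
# The feed URL is a hypothetical placeholder, not a documented endpoint.
import feedparser  # pip install feedparser

FEED_URL = "https://matilda.huma-num.fr/rss/<your-saved-search-id>"  # placeholder

feed = feedparser.parse(FEED_URL)
for entry in feed.entries:
    # Each entry carries the metadata of one newly indexed text.
    print(entry.get("title"), "->", entry.get("link"))
```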

What’s next and how you can help Matilda

The current version of Matilda is V2.0.2, and we have money to build V3 with plenty of new features, the most spectacular being full-text search: we will index every found PDF so that you can add these results to those based on metadata. We will also add boolean operators for search – currently it defaults to OR. In the long run, the code will be available – everything is open source software – and APIs will be open for direct reuse: for example, instead of an uncheckable “WoS citations”, you will find a traceable “Matilda citations”.

We are also thinking about adding new sources, such as aggregated online archives, as we wish to be as inclusive as possible, so that YOU choose what is relevant for your research, not US.

The ultimate aim is clear: offer an alternative to current WoS/Scopus users, so that their institutions stop paying millions for tools that were not made for lay researchers – the bibliometric uses of such platforms are debatable, though the growing Open Research Information movement could also push them into history. Show that we need to decolonize scholarly metadata, which was for a long time limited to 1/ journals 2/ with articles written in English 3/ from Global North scholars 4/ and especially those owned or disseminated by big publishers. Matilda also aims at providing an open alternative to Google Scholar, with open, traceable sources and enrichments, and no limitations on downloads and uses. As everybody knows, Google can decide to shut down services in a day, so there is no guarantee that GS will exist in the long run.

What can you do to help develop and sustain this open science platform? First, talk about it, create and share links, go to your institution’s head and show them that they could invest in open science rather than funding capitalistic villains. Second, use it, test it, send us some feedback, good or bad, ask for features, explain what you need and expect from such a tool. Third, your IP addresses are not traced, but we do have an aggregated picture of RSS feed usage, so even by just using it, you will help us.

  1. Didier Torny, Laurent Capelli, Lydie Danjean, Stéphane Pouyllau. Matilda: Building a bibliographic/metric tool for open citations and open science. ELPUB 2019 23rd edition of the International Conference on Electronic Publishing, Jun 2019, Marseille, France. ⟨10.4000/proceedings.elpub.2019.22⟩. ⟨hal-02141839⟩ []
  2. Disclaimer: I have been a member of the Advisory Board of Open Citations on behalf of the French Open Science Committee since 2021 []
  3. see Heibi, I., Peroni, S. & Shotton, D. Software review: COCI, the OpenCitations Index of Crossref open DOI-to-DOI citations. Scientometrics 121, 1213–1228 (2019). https://doi.org/10.1007/s11192-019-03217-6 []

The sustainability argument or… How academic journals’ economic models never really last

The starting point for this post is an article from The Scholarly Kitchen in which, once again, the sustainability of Diamond journals, here the Subscribe to Open model, is questioned. This leads the author, Rick Anderson, to define sustainability:

“It’s a concept that gets invoked in many different contexts to mean a range of different things, but in this context its meaning is both basic and simple: a publisher’s business model is sustainable if it’s able to be sustained over time. […] What determines sustainability? For an ongoing and open-ended project like publishing, the baseline determinant of sustainability is simple: recurring, reliable revenue.”

This definition is interesting, though it stands on muddy ground: how do we define “recurring, reliable revenue”? What is the timeframe for judging reliability? My post will argue that there is no such thing as a stable business model, at least not for long. Moreover, if Anderson is right to question the S2O future, the same questions should be asked of the models too lightly considered “stable”, starting with subscriptions.

Our present is not the continuation of the past: the short history of subscriptions

Over the three and a half centuries of scientific publication in journals, the economic relations between publishers of scientific outputs and their readers were far from stable. It was probably not until after the Second World War that the main relationships became those between publishers and academic libraries, on a national or international scale, not as part of a gift or exchange economy, but rather as a commodity.

As the number and budget of libraries increased and the number of published journals grew fast, a short golden age of subscriptions began for journal producers, and notably commercial ones1. But by the 1970s, as budgets stagnated, harsh competition for libraries’ money was the first signal of what was later referred to as the serials crisis. This decades-long relationship, based on the sale of subscriptions and paper issues for each journal, was then profoundly transformed by the digitalization of journals.

In the late 1990s, three major events took place in the contractual relationship between libraries and scientific publishers. Firstly, in a relatively short period of time after the inception of the World Wide Web, the largest publishers put online not only their entire contemporary catalogue, but also part of their archival material. Secondly, publishers began offering access to packages or bundles, not on a title-by-title basis, but to a long list or even all of their journals. Thirdly, to make this offer attractive, they favoured the emergence of library consortia which, by aggregating their individual needs, constituted clients interested in this new plethoric offer. The combination of these three events gave rise to a new form of standard economic agreement, the big deal2.

As a result, the subscription business model changed from an audience-centered model – libraries purchase what readers want, title by title – to a model centered on the size of the publisher – libraries buy the most extensive offerings – leading to a much stronger oligopolization through buyouts of publishers and changes of publisher by scholarly societies, very visible twenty years later.

Percentage of papers published by the five major publishers, by discipline in the Natural and Medical Sciences, 1973–2013.3

For most publishers – including self-publishing learned societies – subscription was only profitable for a short time and is not anymore. It is not sustainable, since it now implies the disappearance of their autonomy, or at least dependence on increasingly powerful players likely to act unilaterally on their revenues. And even for the largest publishers, the threat of non-renewal of Big Deals grew stronger from 2010 onwards, whether through a sudden drop in financial resources (Greece) or through the choice to no longer pay for a service that did not meet the needs of libraries (United States) or open access demands (Germany, Sweden). It is in this context that Elsevier started to brand itself as a data company, while new publishers were trying to make a new model last, based on Article Processing Charges.

The future will not be similar to the present: charging authors to the breaking point

Charging authors is not a recent business model: there have been many examples of vanity publishing, targeted towards academia or outside of it4. In the US, from the 1930s on, an alternative funding model had already thrived, as subscription revenues were considered too low. It targeted authors and their funders and was based on per-page charges, first in physics, then in other STM disciplines5. But it was with electronification that the idea of charging authors a lump sum – as opposed to a multitude of varying charges for services (colour charges, page charges, cover charges…) – emerged, soon to be known as Article Processing Charges (APC). Some new publishers entirely adopted this new model, sooner or later being bought by legacy Big Publishers, like BMC by Springer or Hindawi by Wiley. But others have quickly become global Big Publishers themselves.

Retrospective statistics of the leading academic publishers in 20216

On selected Clarivate sources7, MDPI and Frontiers are now in the top 6 in published volume, whereas, added together, they published less than a fifth of ACS, Sage or OUP’s volume a decade ago! From the point of view of these new big players, APCs are so sustainable that they create journals almost every week. For example, in 2021 MDPI launched 84 new journals and acquired only two existing titles. As Dan Brockington has shown in his comprehensive analysis of MDPI data, this growth also comes from the lowering of rejection rates:

“Now, some 45% of the MDPI journals I analysed, have rejection rates of below 40% (Table 2). Papers in these journals account for nearly 38% of revenues from publication fees (Table 3). Conversely, the journals with rejection rates of over 50% account for just over 25% of revenues. Measures of esteem, such as listing in the Web of Science, did not seem to make a difference to rejection rates. Average rejection rate for WoS listed journals was 42.7%, and for unlisted journals 41.6%.”8

The incentive for publishers to accept a manuscript in the APC model has been discussed for a decade, and its link to the growth of the vanity presses now dubbed “predatory publishers” is well established. Above and beyond what is often portrayed as a potential threat to the whole scholarly communication system, the APC business model is not sustainable from the authors’ and research organizations’ point of view. A large literature has consistently shown the rise of APC prices through time, whether for open access journals or for those relying on the hybrid model. Whether they name it “prestige prices” or “market power”, researchers describe an ever-growing number of APC articles and a rise in individual prices.9

Proponents of market regulation will argue that each author will adjust his or her willingness-to-pay to the audience and the supposed quality of the journal, but instead we see the exclusion of authors for lack of funds, or the sale of places in the byline to pay APCs. And, of course, the near-absence of success for such a business model in underfunded disciplines, like most of the HSS.

Sustainable for whom? The durability of Diamond journals

The two most visible business models for disseminating journal content are therefore not only at the mercy of a default by their funders, but also unsustainable for readers, authors and their respective institutions. The fact that they constitute today’s largest expenditure items in scholarly communication should not be taken as evidence of sustainability through the capture of recurring, reliable revenue. In research worlds subject to severe budgetary constraints and to the increasing visibility of expenditure lines, they are in fact the most threatened at their foundation.

That being said, what are the alternatives? They are well known, and have been running in some corners of the global journal market for decades without any structural sustainability problems, despite being underfunded. Their landscape has been described in a comprehensive study, showing small-scale, non-profit, community-owned archipelagoes10. Far from APC-based megajournals and publishers with huge portfolios, this ecosystem is sustained by learned societies, universities, research organizations, some research funders, but also large-scale technical infrastructures, the most obvious being PKP’s Open Journal Systems.

While it is certain that more funding and more support from different institutions are needed11, thousands of journals – and dozens of dissemination platforms – have shown their reliability as they passed the test of time.

Yet most don’t pass the “Anderson sustainability test”, as they don’t rely on “revenue” but rather on support and funding, having never been commodified. Moreover, that support comes from the exact same sources that pay, in one way or another, for the “unsustainable publishing models” described above. So they are obviously sustainable for authors and readers, but also for these supporting institutions. Though they don’t have a unified business model12 – Subscribe to Open being the latest, suited to already commodified journals – they seem to thrive, each at its own small scale, but with an aggregated population still larger than that of APC journals. After almost three decades of existence, resisting several “serials crises”, haven’t they earned the right not to be questioned about their sustainability, but rather to be considered one of the most secure ways to build a sustainable scholarly communication system, allied with institutional archives?

  1. Fyfe, Aileen. “From philanthropy to business: the economics of Royal Society journal publishing in the twentieth century.” Notes and Records (2022). Aileen Fyfe, Noah Moxham, Julie McDougall-Waters, and Camilla Mørk Røstvik , A History of Scientific Journals Publishing at the Royal Society, 1665-2015, UCL Press, 2022, chapter 14 []
  2. Frazier, Kenneth. “What’s the big deal?” The serials librarian 48.1-2 (2005): 49-59. []
  3. Larivière V, Haustein S, Mongeon P (2015) The Oligopoly of Academic Publishers in the Digital Era. PLoS ONE 10(6): e0127502. https://doi.org/10.1371/journal.pone.0127502 []
  4. see for a detailed history on books, Timothy Laquintano, The Legacy of the Vanity Press and Digital Transitions, Volume 16, Issue 1, Summer 2013, https://doi.org/10.3998/3336451.0016.104 []
  5. On the American Chemical Society example, see Noel, M. (2020). Back to disciplines: exploring the stability of publication regimes in chemistry: the case of the Journal of the American Chemical Society (1879–2010). Humanities and Social Sciences Communications, 7(1), 1-13; on the APS/AIP example, see Scheiding, T. (2009). Paying for knowledge one page at a time: The author fee in physics in twentieth-century America. Historical Studies in the Natural Sciences, 39(2), 219-247. []
  6. from “Understanding the increasing market share of the academic publisher ‘Multidisciplinary Digital Publishing Institute’ in the publication output of Central and Eastern European countries: a case study of Hungary” []
  7. That is much more restricted than Crossref, so more favourable to legacy publishers []
  8. Dan Brockington, MDPI Journals: 2015–2021, 10 November 2022, https://danbrockington.com/2022/11/10/mdpi-journals-2015-2021/ []
  9. see for example, Budzinski, O., Grebel, T., Wolling, J. et al. Drivers of article processing charges in open access. Scientometrics 124, 2185–2206 (2020). https://doi.org/10.1007/s11192-020-03578-3 []
  10. Bosman, Jeroen, Jan Erik Frantsvåg, Bianca Kramer, Pierre-Carl Langlais, and Vanessa Proudman. “The OA diamond journals study. Part 1: Findings.” (2021) 10.5281/zenodo.4558704 []
  11. see the recommendations from the aforementioned study: Becerril, Arianna, Lars Bjørnshauge, Jeroen Bosman, Jan Erik Frantsvåg, Bianca Kramer, Pierre-Carl Langlais, Pierre Mounier, Vanessa Proudman, Claire Redhead, and Didier Torny. “The OA Diamond Journals Study. Part 2: Recommendations.” (2021), https://doi.org/10.5281/zenodo.4562790 []
  12. This diversity should be studied, as we will do in the current European-funded DIAMAS project []

From paywall builders to data tracking moguls or… How the big publishers have put on a new supervillain costume.

Elsevier is a felon, that is a given. This company epitomizes all the crimes, misdemeanors and petty thefts that can be accomplished by a publisher. Its Wikipedia page is so full of affairs, scandals and raunchy stories that reading it would be enough to give a talk at an academic congress. And yet Elsevier still finds new ways to extract value from academic communities, which produces both new profits and new critiques. This post is the story of the publisher becoming a data company.1

An endless list of academic misdemeanors

It all began in 1880 with a “borrowing”, as we say in academic life, which may also be called an homage, a plagiarism or a theft, depending on the point of view. When the company was founded, it took over the logo of a famous ancestral Dutch printing family, whose name was Elzevier (yes, with a Z).

Elzevier/Elsevier logo

As a Dutch publisher, they first put out journals in the language of the country, but the need to go into exile in England in 1940, and no doubt a rather specific vision of scholarly communication, led the company to launch English-language journals in the post-war period. Along with Pergamon, Elsevier is certainly the inventor of the concept of the ‘international journal’ and created a global market for scientific writings, with customers all over the world and, what is more, a profitable market. This led to cycles of development and acquisitions which continue to this day. But as we know in the world of superheroes, “with great power comes great responsibility”.

And indeed they are responsible. The list of “problems” attributed to Elsevier can be sorted into three different groups: firstly, a propensity to act in a “sloppy and dirty” manner, for example through copyediting failures, by selling closed articles for which authors have already paid an APC, or by not acting on legitimate requests to retract articles, as in the following very recent example.

Secondly, its constant pursuit of profit leads it to bend academic rules. Above and beyond offering researchers Amazon vouchers to write reviews on products, one of the most famous examples is the publication of journals in Australia that were de facto advocacy media for Merck pharmaceutical products, through a subsidiary that is cited by Sergio Sismondo as an example of ghost management2.

Third, its concern for protecting its intellectual property leads it to numerous actions opposing open access, unlimited text and data mining, or even metadata sharing. Elsevier thus funds numerous lobbying actions, and one aimed at the US Congress led to the “Cost of Knowledge” petition in 2012. This petition called for a boycott: no writing, reviewing or editorial work for the company. It was signed by tens of thousands of academics and led to some mocking of the Elsevier logo.

Michael Eisen, CC BY 3.0 https://creativecommons.org/licenses/by/3.0, via Wikimedia Commons

To sum it up, if the kids of Bruno Latour had been STS PhD students in 2010, they would probably have authored a paper entitled “Portrait of a publisher as a wild capitalist”. But that wouldn’t have predicted what happened next.

From academic publisher to data company: a very public transition

In fact, Elsevier continued to thrive as a publisher despite the tens of thousands of petitioners. But the company has changed its core business and has significantly expanded its range of services, to the point where it no longer presents itself as a publisher. Take two exemplary acquisitions: in 2013, Elsevier purchased Mendeley, a reference management service, and for some, it was as if the Empire had bought the rebels.

https://twitter.com/tpoi/status/1543940630837022720

Elsevier had two objectives: on the one hand, to extend its information retrieval ecosystem, and on the other, to collect data on Mendeley users, potentially authors and reviewers. These same objectives were reflected in the acquisition two years later of SSRN, a preprint platform then specialising in the social sciences.

“Elsevier is now getting closer and closer to researchers with business models that don’t involve libraries,” says Joe Esposito, a publishing consultant in New York City. “The positioning is well thought out: lock up revenues to the legacy publishing business, move into areas where piracy is not much of an issue, create deeper relationships with researchers and become more and more essential to researchers even as librarians become less so.”3

This series of purchases aims to control the building blocks directly used by researchers, so that their research projects, results, research data, and the texts they read, cite, review or tweet are all linked, with Elsevier able to identify them. But as the comprehensive 2019 diagram below shows, researchers are not the only target of the “new Elsevier”.

Chen, G., Posada, A., & Chan, L. 2019. Vertical Integration in Academic Publishing : Implications for Knowledge Inequality. In Chan, L., & Mounier, P. (Eds.), Connecting the Knowledge Commons — From Projects to Sustainable Infrastructure : The 22nd International Conference on Electronic Publishing – Revised Selected Papers. Marseille : OpenEdition Press.

In fact, Elsevier’s other target market is higher education and research institutions, and even governmental bodies. The enclosure of the Elsevier ecosystem has, for example, guaranteed the company a position as a subcontractor in the construction of the first European open access monitor, a position deemed scandalous by open access activists4. And when a consortium of Dutch universities signed a transformative agreement with the publisher in 2019, it included the joint development of projects involving all kinds of data – a Faustian pact in which open science amounts to sustaining Elsevier's data infrastructure in exchange for open access papers5.

In a decade, Elsevier had become a data company, selling data to numerous clients, both academic and non-academic, and defining itself in corporate documents as “a global leader in information and analytics, (which) helps researchers and healthcare professionals advance science and improve health outcomes for the benefit of society”, while making videos on the perfect world of information it designs with a product called PURE.

Naming a new supervillain: surveillance publishing

From 2019 onwards, this transformation of Elsevier and, to a lesser degree, of the other big publishers was a wake-up call for various institutions and authors. They started to formalise the list of new dangers created by the construction of data-tracking and information-aggregation systems, some specific to the academic world and others similar to those raised by GAFAM-like companies. For example, a committee of the DFG published a briefing paper raising the alarm that such systems could6:

  1. entail a violation of academic freedom and the freedom of research and teaching;
  2. constitute a violation of the right to the protection of personal data;
  3. pose a potential threat to scientists, as the data could also become accessible to foreign governments and authoritarian regimes;
  4. constitute a breach of competition law, as new entrants barely have a chance to enter the market;
  5. favour a reduction in the value of public research investment, since data on research activity can be collected by commercial research competitors or made available to them in return for payment, in connection with industrial espionage.

These fears may seem hypothetical, but the fact that Elsevier's parent company, RELX, has signed a huge contract to supply personal data to the US Immigration and Customs Enforcement agency has given some weight to these warnings. But what data are we talking about? Two facetious colleagues used the provisions of the GDPR to ask Elsevier for their data, and documented their findings on the traces of their stay in the “Elsevier Hotel”. The file contains a number of directly personal data points (phone numbers, bank details, addresses), but above all a great deal of usage data: the opening of e-mails sent by the company, the most basic operations on Mendeley and ScienceDirect or, more amusingly or worryingly, the traces of people's consents and non-consents:

Eiko I. Fried, Robin Niels Kok, Welcome to Hotel Elsevier: you can check-out any time you like … not, 2022

A new petition, ten years after The Cost of Knowledge, calls to “Stop Tracking Science”, which actually means: stop tracking academics. In the new configuration, libraries are still a passage point between the big publishers and researchers, though no longer an exclusive one. But the data that flows through these exchanges is now considered in a different manner: “they are even attempting to persuade libraries to install trackers inside university networks: the research behavior of all of us is being recorded in real time.” While identification has long been presented as a security requirement for providing access to closed texts, it is now a source of concern, in a manner very similar to cell phone or internet tracking. To describe this phenomenon, several labels have been proposed, such as the “platformization of science”7 or “surveillance publishing”8. The big publishers are trying, through various legal actions, to present Sci-Hub and LibGen not only as intellectual property offenders but also as hackers endangering the security of research institutions – while the very same accusation is now directed at them. So, in the end, who do you think are the supervillains threatening academic communities?

  1. This post is based on a communication given at the 2022 EASST Conference in Madrid on the same date []
  2. Sismondo S (2007) Ghost Management: How Much of the Medical Literature Is Shaped Behind the Scenes by the Pharmaceutical Industry? PLoS Med 4(9): e286 []
  3. Van Noorden, R. Social-sciences preprint server snapped up by publishing giant Elsevier. Nature (2016). []
  4. see for example Jonathan Tennant, & Björn Brembs. (2018, October 26). RELX referral to EU competition authority. Zenodo []
  5. This deal has been the object of a very long post. See also SPARC analysis []
  6. Data tracking in research: aggregation and use or sale of usage data by academic publishers. A briefing paper of the Committee on Scientific Library Services and Information Systems of the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) 28 October 2021 []
  7. Kunz, Raffaela: Threats to Academic Freedom under the Guise of Open Access: The Power of Publishers, Data Tracking in Science, and the Responsibilities of Public Actors, VerfBlog, 2022/3/18 []
  8. Pooley, J., (2022) Surveillance Publishing, The Journal of Electronic Publishing 25(1) []

The fair price of an open access article or… how Nature relaunched a long-lasting conversation

If you ask the open access community what happened in October 2003, chances are they will cite the Berlin Declaration as an important moment of consolidation of the international mobilisation. At the same time, however, a large-scale attempt to charge authors for the publication of their open access research was launched. Indeed, this was when the publisher BioMed Central announced the switch of all its journals to a then little-known financial model: Article Processing Charges. Let's take the example of two journals relaying this announcement as they discuss the price of the service:

Although some authors may consider US$525 expensive, it must be remembered that The Journal of Translational Medicine does not levy additional page or colour charges on top of this fee, which can easily exceed US$525. With the article being online only, any number of colour figures and photographs can be included, at no extra cost.

There is no remuneration of any kind provided to the Editors-in-Chief, to any members of the editorial board, or to peer reviewers; all of whose work is entirely voluntary. Although some authors may consider US$525 expensive, it must be remembered that Journal of Neuroinflammation does not levy any additional page or color charges on top of this fee. Because we are an online-only journal, any number of color figures, photographs, and ‘extra’ pages can be included at no extra cost. Such color and page charges, as assessed by more traditional journals, can easily exceed our flat US$525 per-article APC. Another common expense with traditional journals is the purchase of reprints for distribution, and the cost of these reprints is also frequently greater than our APCs. The Journal of Neuroinflammation provides free, publication-quality pdf files for distribution, in lieu of reprints.

Three elements emerge from these excerpts. Firstly, their similarity indicates a copying of elements provided by BMC to justify this change in business model, the financing of which previously had to rely on any source except the authors, in particular a support programme for research institutions. Secondly, the price is related to the costs of making content and formats available free of charge to readers. Thirdly, the novelty of payment by authors is minimised in favour of an interpretation of continuity between page charges and article processing charges. Indeed, at least since the 1930s, in some disciplines, authors' contributions to publication costs – and not only to the cost of reprinted copies for personal circulation – have been documented. And a vast majority of science journals were still asking for such charges at the beginning of the 2010s1.

This continuity is debatable, but the APC system put in place by BMC, like the one adopted at the official launch of PLOS Biology around the same time, is a partial legacy of these print-era practices. As in the past, only accepted items are invoiced, at a single “catalogue” price for defined services.

TO BE CONTINUED

  1. Curb, Lisa A., and Charles I. Abramson. “An examination of author-paid charges in science journals.” Comprehensive Psychology 1 (2012): 01-17. []

Readers in the Making of Scholarly Knowledge or… how article (e)valuation has become more democratic

The wonderful book entitled Reassembling Scholarly Communications. Histories, Infrastructures, and Global Politics of Open Access, edited by Martin Paul Eve and Jonathan Gray at MIT Press is finally out!

So, to push you to read the work of its diverse and enlightening contributors, I have remixed and shortened our chapter about readers and their empowerment in contemporary peer review. In the full version, we underlined one of the most decisive effects of open access: the accelerating rise to power of ordinary readers1.

Pre-Publication Peer Review as Reading

Throughout the history of peer review, the three judging instances that gradually emerged (editors-in-chief, editorial committees, outside reviewers) were the first readers of submitted manuscripts. This may seem trivial, but the essential activity of evaluating an article – unlike other types of academic evaluation – is indeed the handling of a text. Admittedly, the peer review of an article can be considered to include many other things, such as checking that ethical rules are being followed or that data is actually being made available, but the question of taking into account the content of the article – whether in the form of a paper file or a computer file – has always been essential. The acts of reading are far from simple, whether one considers the “geographies of reading”2 (with whom, where, in what setting), what attracts readers' attention, how texts are annotated, how journals inform those practices, and what the purposes of such acts are.

Their respective importance and the way in which their readings are coordinated may be subject to local conventions at a journal, disciplinary, or historical level. They are also marked by profound divergences due to distinct issues in manuscript evaluation. The space of possibilities within which these readings are conducted is a subject of public debate that leads to the invention of labels and the stabilization of categories, and to the elaboration of procedural and moral norms. For example, on the respective anonymity of authors and referees, four labels have been coined since the 1980s:

Authors \ Reviewers | Anonymized | Identified
Anonymized | Double blind | Blind review
Identified | Single blind | Open review

Source: David Pontille and Didier Torny, “The Blind Shall See! The Question of Anonymity in Journal Peer Review,” Ada 4 (2014), https://doi.org/10.7264/N3542KVW.

These spaces of possibility currently coexist in each discipline, being attached to different scientific and moral values, pertaining to the responsibility of reviewers, objectivity of judgements, transparency of process, and equity toward authors. The different possibilities here show that Merton's “organized skepticism” and the agonistic nature of the production of scientific facts described by Latour and Woolgar long ago are, indeed, not self-evident. The contemporary moment is characterized by reflexive readings of peer-review technologies: manuscript evaluation has itself become an object of systematic scientific investigation. Authors, manuscripts, reviewers, journals, and readers have been scrupulously examined for their qualities and competencies, as well as for their “biases,” faults, or even unacceptable behavior. The diverse arrangements of manuscript evaluation are thus themselves systematically subjected to evaluation procedures.

Post-Publication Peer Review
as Ordinary Readers' Empowerment?

Peer review in the twenty-first century can also be distinguished by a growing trend: the empowerment of “ordinary” readers as new key judging instances. If editors and reviewers produce judgments, it is through reading within a very specific framework, confined to restricted interaction, essentially via written correspondence, and aimed at authorizing the dissemination of manuscripts-become-articles. Other forms of reading accompany publications and participate in their evaluation, independently of their initial validation.

Citing articles: with the popularization of bibliometric tools, citation counting has become a central element of journal and article evaluation. But it also required a transformation of formats, the identification of references, and a more fundamental shift: the act of referencing relates to a given author, whereas a citation is a new, and perhaps calculable, property of the source text, creating what Wouters called a “citation culture”. Highly disparate forms of intertextuality are thereby rendered commensurable: the measured or radical criticism of a thought or result, integration within a scientific tradition, reliance on a standardized method described elsewhere, the existence of data for a literature review or meta-study, the simple recopying of sources, or self-promotion. Citation thus points towards two complementary horizons of reading: science as a system for accumulating knowledge via a referencing operation, and research as a necessary discussion of this same knowledge through criticism and commentary.

Commenting texts: in a view of publication as explicitly dialogical or polyphonic, readers can become commenters. Traditionally, before an article was published, comments were mainly directed toward the editor-in-chief or the editorial committee. Through open review, commenters enter into a dialogue with the authors and thus open up a space for direct confrontation. Prior to the emergence of electronic spaces for discussion, this took the form of objects like “special issues” or “reports”, in which a series of articles were brought together around a given theme to feed off one another after a short presentation. Post-publication commenting was also common in two elementary forms: referring to the original article, or sending a letter to the editor. The electronic space led to many experiments in post-publication commenting: most met with no success (PLOS, Nature, PubMed Central…), until the unexpected success of anonymized comments on PubPeer.

Sharing papers: until recently, readers other than citers and commenters remained very much in the shadows. Yet library users, students in classes, and colleagues in seminars, to name just a few examples, also ascribe value to articles, for example through annotation. The existence of articles in electronic form has made their readers more visible. Persons who access an “HTML” page or who download a “PDF” file are now taken into account, whereas in the past it was only the distribution of journals and texts, mostly through libraries, that allowed one to assess potential readership. By inventorying and aggregating the audience in this way, it is possible to assign readers the capacity to evaluate articles. The creation of online academic social networks (e.g., ResearchGate, Academia.edu) has trivialized this figure of the public, not only by counting “academic users,” but also by naming them and offering contact. At the same time, online bibliographic tools (e.g., CiteULike, Mendeley, Zotero) have objectified the readers and taggers who introduce references and attached documents into their bibliographic databases. Without being citers themselves, these readers select publications by sharing lists of references, whose pertinence is signalled by the use of “tags.” These reader-taggers are also embedded in the use of hyperlinks within “generalist” social networks (e.g., Facebook, Twitter), alerting others to interesting articles or briefly commenting on their content, feeding the whole “article-level metrics” movement. Here readers, tracked by number and diversity, revalidate articles in the place of the judging instances historically qualified to do so.

Examining documents: this movement is even more significant in that these tools are applied not only to published articles but also, with the growth of preprint servers, to documents that have not been validated. This flow of electronic manuscripts feeds the enthusiasm of the most visionary who, since the 1990s, have been announcing the end of journals. On the contrary, we observed that new technologies have been built on these archives, such as “overlay journals,” in which available manuscripts are later validated by reading peers in various ways. With a view to dissemination, advocates of readers as a judging instance tend to downplay the importance of prior validation. While the validation process sorts manuscripts in a binary fashion (accepted or rejected), such advocates contend that varied forms of dissemination instead encourage permanent discussion and argument along a text's entire trajectory. In this perspective, articles remain “alive” after publication and are therefore always subject not only to various reader appropriations, but also to public evaluations, which can reverse their initial validation through the flagging of articles under official journal policies.

The Academic Closet
vs. the Readers' Bazaar

Driven by a constant process of specialization, the extension of judging instances to readers may appear as a reallocation of expertise, empowering a growing number of people in the name of distributed knowledge. In an ongoing context of revelations of massive scientific fraud, which often implicates editorial processes and journals themselves, the dereliction of pre-publication judging instances has transformed the mass of readers into a vital resource for unearthing error and fraud. As in other domains where public expertise used to be held exclusively by a few professionals, crowdsourcing has become a collective gatekeeper for science publishing. Thus peerdom shall be reshaped, as lay readers now have full access to a large part of the scientific literature and have become valued audiences as quantified end-users of published articles.

If open science has become a motto, it encompasses two different visions of journal peer review. The first one, which includes open identities, takes place within the academic closet, where the dissemination of manuscripts is made possible by small discourse collectives which shape consensual facts. This vision is supported by the validation processes designed by Robert Boyle during the emergence of modern scientific practices. By contrast, in a Hobbesian fashion, the second one urges openness in multiple ways, building an academic democracy in which each reading may literally be accounted for. The disentanglement of peer evaluation goes through the ability given to readers to comment on published articles, to produce social media metrics through the sharing of documents, and to observe the whole evaluation process of each manuscript. In this vision, scholarly communication relies not only on crowdsourced peer review but on a plurality of instances that generate a continuous process of judgment. The first vision has been at the heart of the scientific article as a genre, and a key component of the scientific journal as the most important channel for scholarly communication. Whether journals remain central in the second world has yet to be determined.

  1. “We” means David Pontille and myself. You can read the full chapter here. Of course, as readers, you are welcome to cite, comment, share & examine this chapter []
  2. Livingstone, David N. “Science, text and space: thoughts on the geography of reading.” Transactions of the institute of British geographers 30.4 (2005): 391-401. []

The institutionalization of retraction… or how to reconsider the status of truth of published papers

If you wish to buy this mug (no conflict of interest, I get no money if you click).

Retraction Watch has celebrated its 10th anniversary, and its creators have grown a small blog into a reputable entity: funded by numerous donors, a source of academic publications, run by the Center for Scientific Integrity, and the manager of a database acknowledged for its quality. With the COVID-19 epidemic, the retraction of scientific articles (and even preprints) has become a mainstream media object, fully public beyond the academic communities directly concerned.

The institutionalization of the website mirrors that of retractions themselves, which have become partly normalized within the publishing process as a key part of post-publication peer review. In this post, written for Peer Review Week 2020, whose theme is “Trust in peer review”, we will briefly look at journal policies and how they change the actual trust given to published articles1.

Flagging published articles.
Don't trust what you read

“Certified”, “peer-validated”, “peer-reviewed”: these notions point to different practices but share the same objective – to assert that the text you are reading is not the simple product of the authors' reflections and their exploration of a phenomenon, theories and observations, but the outcome of a more or less complex process of evaluation of the manuscript by others, not recognised as co-authors but sufficiently knowledgeable about the subject, the methods and the literature to indicate to you that this content is valid.

Then, of course, scandals and other fraud cases multiplied, science stars falling one after another, but you could always believe that these were exceptions, special cases, and that almost all articles contained true and proven statements… at least until 2009. That year, the COPE organisation published its first standards2 on retracted articles, showing that it was not only normal but expected that journals would plan to remove from the scientific canon articles they had previously published. To be more precise, it was a matter of flagging articles differently according to the situation:

Journal editors should consider issuing an expression of concern if:…
Journal editors should consider issuing a correction if:…
Journal editors should consider retracting a publication if:…

In this system, an “expression of concern” casts doubt on an article and warns readers that its content raises some issues. In most cases, it describes information that has been passed to the journal and has led it to alert its readers to an ongoing investigation, but it does not directly rule on the validity of the work.
When it comes to a “correction”, on the contrary, it is always stated that the core validity of the original article remains, with some parts of its content lightly or extensively modified. In some cases, the transformations have been carried to such an extent (e.g. every figure has been changed) that some actors have ironically coined the term “mega-correction“ to characterize them. Contrary to an expression of concern, the authors of the article are fully aware of these modifications and, even if they have not written it, necessarily validate it before the publication of the so-called (mega)correction. If they don't, journals sometimes publish editorial notes instead of corrections.
Finally, a “retraction” aims to inform the readership that the article's validity and/or reliability and/or ethical background and/or authorship no longer stands. Far from being an erasure, it is conceived as the final step in the publishing record of the original article, as the notice of retraction “should be linked to the retracted article”. A retraction is either conducted in close collaboration with the authors, or against them upon the request of someone else who is explicitly named (e.g. a journal editor-in-chief, a colleague, a funding body…).
Ten years later, COPE produced a second version3 of its guidelines, in which the grounds for retraction were extended, for instance to the use of prohibited material or copyright infringement. Two motives are of particular interest:

  • It has been published solely on the basis of a compromised or manipulated peer review process
  • The author(s) failed to disclose a major competing interest (a.k.a. conflict of interest) that, in the view of the editor, would have unduly affected interpretations of the work or recommendations by editors and peer reviewers.

It is no longer only the conditions of production of articles or their content that are targeted, but the very processes of evaluation, which can be hacked or simply distorted if the relationship of the authors to their object is not revealed. Not only can you not trust the content of the paper, you can no longer trust the process by which journals certify this content. You can only trust them when they certify that they have failed… and these new motives were quickly put to the test.

An epidemic of retractions?
COVID-19 as a public discussion of papers' status

A month after the publication of these guidelines, the COVID-19 epidemic began, with open science being adopted as borders closed. We have already dealt with the articles on the HCQ treatment and the ensuing Lancetgate, an ultra-fast but complex case of retraction, which moreover recently led the commercial journal to change its peer review process. The editors of the Lancet group conclude their op-ed “Learning from a retraction” with unintended irony: “As trusted sources of information, the Lancet journals are committed to ensuring that our editorial processes will continue to be as robust as possible.” Who needs to learn from a failure if robustness has always been there?

This highly visible example points to another phase in the institutionalization of the retraction object: its public discussion beyond academic circles. The existence of the Retraction Watch database has been acknowledged, and the commonness of retraction has become a public concern, as in this Canadian article:

Similarly, a 2019 Leger poll for the Ontario Science Centre found 29 per cent of respondents said that because scientific theories are fluid, they can't be trusted. What's more important than the erosion in trust, says Caulfield, “is a polarization where people are gravitating toward conspiracy theories or messaging (including misinformation) that is trying to increase distrust because those messages either appeal to their ideological leanings or preconceived notions. “My fear is if people don't trust the good science, don't trust science from these respected journals, it's going to be increasingly difficult to fight misinformation because people aren't going to trust the correction.”

Simultaneously, the same database led to discussions and even papers in certain scientific communities. Some authors calculated retraction rates for different topics in order to assert that COVID-19 was leading to two epidemics: one in human bodies and another one in retractions.

From: Nicole Shu Ling Yeo-Teh & Bor Luen Tang (2020), “An alarming retraction rate for scientific publications on Coronavirus Disease 2019 (COVID-19)”, Accountability in Research, DOI: 10.1080/08989621.2020.1782203

The founders and employees of Retraction Watch themselves replied in the same journal. Apart from technical remarks about the limitations of the corpus and the inclusion of preprints, these respondents' main explanation is the speed with which the journals intervened, where it usually takes years, not days or weeks, to produce a retraction.

The institutionalization of retractions, combined with the focus and urgency of the COVID-19 epidemic, therefore leads to seemingly virtuous behaviour: journals no longer drag their feet in admitting problems and even communicate widely about retractions, no longer ashamed but proud of their professionalism, as The Lancet group journals did. At the risk of giving articles, and more generally scientific discourse, a perfume of permanent reversibility, far from the idea of the incremental self-correction of science.

Yesterday’s truth is today’s ignorance.
Living in a post-truth academic world

Far from the urgency of the COVID-19 epidemic, what happens to flagged papers over time? Beyond knee-jerk reactions, corrections can later themselves be corrected, retractions can be “unretracted“, an expression of concern can itself be retracted after 15 years, and some have proposed that “good faith” retractions could be combined with the publication of “replacement” papers4, while the others would be permanent. Besides, there is life after death for scientific publications: retracted papers are still cited, and most of their citations take no notice of their “zombie” status5.

Instead of incorrectly equating the prevalence of retractions with that of misconduct, some consider the proliferation of flagged articles a positive trend6. In this vision, the very concrete effects of post-publication peer review reinforce scientific facts already built through peer review, publication and citation. Symmetrically, as every published article is potentially correctable or retractable, any scientific information rhymes with uncertainty. The visibility given to these flags and policies undermines the very basic components of the economy of science: how long can we collectively trust peer review and consider that peer-reviewed knowledge should be the anchor with which to face a “post-truth” world?

  1. This post is partly adapted from Pontille, David, and Didier Torny. “Beyond Fact Checking: Reconsidering the Status of Truth of Published Articles.“, EASST Review, 36, 1, 2017 []
  2. Wager, E., Barbour, V., Yentis, S., & Kleinert, S., on behalf of COPE Council (2010). Retractions: guidance from the Committee on Publication Ethics (COPE) []
  3. https://publicationethics.org/files/retraction-guidelines.pdf []
  4. Like this strange story about a paper on sexual practices during the pandemic []
  5. Bar-Ilan, Judit, and Gali Halevi. “Post retraction citations in context: a case study.” Scientometrics 113.1 (2017): 547-565. []
  6. Fanelli, Daniele. “Why growing retractions are (mostly) a good sign.” PLoS Med 10.12 (2013): e1001563. []

The absurd race for university rankings… or how publications are transformed into bad data

Lockdown or not, COVID-19 first wave in progress or over, universities open, teleworking or closed, it will still be out when August 15 comes. The ARWU Shanghai Ranking, like its cousins the THE Rankings, the QS World University Rankings and others of the same genre, has its season, its inflexible communication, its teasers, its sports-style announcements of the winners of the year and of emerging stars.

The recurrent criticisms levelled at them, the evisceration of their methods by scientometricians and rankings specialists1, will not change anything in their imperturbable march. So why write about it? Not to sum up their four-decade history2, but because publications play a minor yet not negligible role in them, and their successive transformations into bad data make an interesting case study. Let's change this midsummer's nightmare into an ironic and fun bedtime story for academics, mostly thanks to some incredible French stories!

The tale of a French “emerging university”.
How to become ranked

The French higher education and research (HER) system is incredibly ill-suited to these rankings. Indeed, not only do we have universities on one side and powerful research organisations on the other, but both share “joint research units” bringing together academics working for 2, 3, 4… up to 8 different employers. As a result, the bylines of scholarly publications are very long, contain multiple institutions for each author, and are subject to wide variation for a given lab or department.
Moreover, from 2007 onwards, laws have defined the framework for “new universities”, regrouping institutions on a rather geographical basis. We will follow here the example of PSL3 (“Paris Sciences & Lettres”), whose name gives a foretaste of its diversity, with 11 establishments and 3 research organisations. Although there is only one university – Paris Dauphine – among all these institutions, some were already taken into account by rankers, such as the Ecole Normale Supérieure, very famous for its mathematics department. So how can they ensure that “PSL” becomes the brand name and that all academic outputs are counted under its name?

The only solution is to change the signature rules, homogenize them, and then measure their practical implementation by reluctant researchers. This is all the more difficult as the grouping of institutions did not make them disappear: each one retains its staff, budget, premises and laboratories. Thus, it took years of negotiation for an agreement to be signed by all the institutions; three years after the model was defined in 2015, about 70% of publications were signed in accordance with it, a “simplification” of the byline having produced the following model of affiliation:

Institution Name, PSL University, [institute or departement], Research Organization, [joint research unit number], University co-chair, Laboratory, [Team], [Address], Postal Code, Town, France
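To see what this template means in practice, here is a purely hypothetical signature following it (the unit number, laboratory and address are invented for illustration, not quoted from any actual PSL byline):

Ecole Normale Supérieure, PSL University, Département de Mathématiques, CNRS, UMR 0000, Laboratoire X, 75005, Paris, France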

But this is not over, as they then have to convince rankers and their underlying data sources (mostly Scopus from Elsevier and WoS from Clarivate) to pick the second item, “PSL University”, from this long list of possible affiliations in the byline. For that, they used the knowledge and work of a “ranking optimization policy officer”, Daniel Egret, a former astrophysicist. This is no surprise: we have known for more than a decade that HER institutions react to rankings in different ways, trying to optimize their place if they think the goal is worth it4. Finally, like any other university, they cherry-pick among all relevant rankings and boast about their amazing results on their website:

It is somewhat ironic that PSL should congratulate itself on its first place among “new universities” when most of its institutions are centuries old, but PSL is just playing the game by pushing to the limit the “optimization” linked to changes in the signatures of its affiliated researchers. However, as in fiscal matters, the line between “optimization” and “fraud” is tenuous, and the management of the Web of Science's “Highly Cited Researchers” shows us two examples of this.

It’s the authorship, stupid!
From private gaming to alternative facts

On this blog, we have already encountered this Web of Science tool in the case of KC Chou, a serial peer review hacker who finally got caught after he became one of these HCRs. At that time, like many others, he had acquired a secondary affiliation: as Yves Gingras noted six years ago, these “secondary institutions” tend to be concentrated in a few countries, notably Saudi Arabia. Through the construction of a combination of indices, 20% of the Shanghai ranking score derives directly from the number of HCR researchers at the university in question. In other words, it is no longer a question of directing publications one by one to a ranker's computing centre, as PSL did, but of reassigning the most prolific research producers to a given university. In exchange for a lump sum, the researchers in question “sell” their authorship, possibly on the articles themselves, but especially in their response to the Web of Science's query about their affiliations. When HCR data became available, scientometricians analyzed it and concluded that secondary affiliations should be left out to avoid massive ranking manipulation5.

The gaming had become too visible, forcing the ranker to change its methodology: it stated that “only the primary affiliations of new Highly Cited Researchers are considered in the calculation of an institution's HiCi score for the new list”. While this new method limits the direct interest of contracts between authors and institutions, it raises specific problems for French universities. Indeed, because of the multiplicity of affiliations described above, many authors employed by research organizations indicate to the Thomson Reuters/Clarivate/Web of Science group that their main affiliation is the CNRS, INSERM or INRA, which are not ranked. As a result, the research-intensive universities, united under the umbrella of the CURIF, lobbied extensively to get these full-time researchers to indicate that University X is their first affiliation rather than their secondary one. After a long struggle, they obtained satisfaction in 2019, as the Higher Education and Research Minister herself, Frédérique Vidal, signed a letter asking HCR researchers to pick their associated university rather than their own organization. This move has only one purpose, as detailed by Daniel Egret in the French economic press:

“The astrophysicist predicted a gain of 84 places for the University of Lorraine, 57 places for Toulouse-III and 26 places for Montpellier. The most “spectacular” effect of the new affiliation rules will mainly concern universities that are between the 200th and 300th place in Shanghai, according to Daniel Egret, because, at this level, “the scores are very close between universities, which can make them gain several dozen places”. The effect will be less visible for universities in the top 100, where the scores are “quite scattered”. Paris-Saclay would gain 3 places, and Sorbonne University, 5 places.” (Les Echos, 28th February 2019)

There are no hidden practices, secret arrangements or small manipulations here, as in the data reported on former students' salaries or employment levels in the case of US management schools, even when it is UC Berkeley. It is the open admission of fudging the data in order to skew rankings deemed so unfavourable to French universities, and thus of producing alternative facts. Yes, these publications exist, they have indeed been authored by Mr. X and Ms. Y; they just happen to work for a different organization – who cares?

Who are the users of these rankings?
Sold and actual markets

This example shows us how ambiguous university rankings are as a tool, not only because the measures they aggregate may be biased, false, or even falsified, but in their very objectives, whether on the side of their creators or of their multiple users. Indeed, it is common knowledge that the purpose of the mergers of French institutions was to move them up in the rankings, particularly that of Shanghai, even if the calculable effects were far from certain6. The reassignment of authorship is only the latest adjustment, after signature changes, to obtain good rankings through publications. Other strategies, such as lobbying voters in reputation rankings, follow the same logic. But why, how and to whom is it useful to be “top ranked”?

In theory, and in the rankers' public display, rankings are supposed to inform students' choice of a higher education institution. Supported by an audit society vision and the idea that something like a global market for universities actually exists, rankers are supposed to be third-party certified information providers. This leads to two questions: on the one hand, is it useful information for students? On the other hand, to which other stakeholders could it be useful? The first question is not easy to answer, as Ellen Hazelkorn has recalled: the existing literature shows no clear sign of massive use by students to choose their institution, except in very specific cases like some US schools7. I would add the absence of testimonies and anecdotal evidence in rankers' ads and communication: you never see videos of Jim, Liu or Penelope explaining how they picked their wonderful university by examining the rankings. And when rankers do happen to make such a video, like THE below, the sound is so unprofessional that it is embarrassing and, of course, the students barely mention rankings as a factor in their decision. In fact, rankers probably don't care that much about students; their target is elsewhere.

If students are not the focus of the rankers’ attention, who are the rankings for? At least three types of users and uses can be listed:

  1. content that is easy to publish and comment on by the media
  2. direct objectives for the universities themselves and policymakers
  3. sources of revenue for the rankers' own organizations

Without going into detail on sites specialising in this form, such as Buzzfeed, Topito or WatchMojo, the general media have, since the beginning of the 21st century, integrated the dissemination of rankings produced by third-party “rating agencies” on objects as diverse as holiday resorts, hospitals, public personalities or television series. So why not universities? More than complex information, the description of and commentary on a ranking are objects that are easy to produce and can be appropriated by large audiences. Conversely, the lack of popularisation of the modular European U-Multirank shows that a classification must remain simple and “objective” in order to be widely disseminated.

We have already mentioned the study of the effects of rankings on the ranked organizations themselves. In addition to the question of optimisation and fraud, changes have been observed in the recruitment and evaluation of academics, in the self-presentation of universities, in the construction of coordinated regional, national and supra-national policies on the basis of indicators, etc.8. Whether these transformations are mere window dressing or profoundly affect universities is debated in many articles. As far as publications are concerned, China has proved a particularly fertile ground for observing the effects of rankings9.

Finally, we should conclude on something that is both obvious and rarely discussed: the first stakeholders interested in rankings are the rankers themselves. They not only organize conversations around their (free) productions and sell themselves as certified audit firms, but also pave the way for service markets. Indeed, rankers offer universities services such as detailed rankings, training and help to get better ranked, communication or recruitment services. Global rankings are less a market for students or universities than one for service providers. You can mine for gold, or you can sell pickaxes.

  1. as an example, see Billaut, Jean-Charles, Denis Bouyssou, and Philippe Vincke. “Should you believe in the Shanghai ranking? An MCDM view.” Scientometrics 84.1 (2010): 237-263. []
  2. A short version can be read here, Kehm, Barbara M. “Global University rankings–impacts and applications.” Gaming the Metrics (2020): 93. []
  3. Disclaimer: PSL is my “official affiliation” for publications []
  4. see this seminal article, Espeland, Wendy Nelson, and Michael Sauder. “Rankings and reactivity: How public measures recreate social worlds.” American journal of sociology 113.1 (2007): 1-40. []
  5. Bornmann, Lutz, and Johann Bauer. “Which of the world’s institutions employ the most highly cited researchers? An analysis of the data from highlycited.com.” Journal of the Association for Information Science and Technology 66.10 (2015): 2146-2148. []
  6. see Docampo, Domingo, Daniel Egret, and Lawrence Cram. “The effect of university mergers on the Shanghai ranking.” Scientometrics 104.1 (2015): 175-191. []
  7. see Hazelkorn, Ellen. “The impact of league tables and ranking systems on higher education decision making.” Higher education management and policy 19.2 (2007): 1-24. []
  8. See for example Stack, Michelle. Global university rankings and the mediatization of higher education. Springer, 2016. []
  9. Xu, Xin. “Performing under ‘the baton of administrative power’? Chinese academics’ responses to incentives for international publications.” Research Evaluation 29.1 (2020): 87-99. []

“You pay less, I earn more”… or how UC and Springer Nature made a seemingly win-win agreement

Win Win 306/365
CC-BY-ND Dennis Skley

And yet another agreement! While it was celebrated across the ocean as “the largest OA deal ever signed in the US” and a “milestone” for OA, we Europeans are now used to these “groundbreaking” contract announcements every other week – so much so that I have already written one post in March on the German Springer/DEAL agreement and another in May on the Faustian Elsevier/Dutch consortium deal. So all things come in threes, and for good reason, as the Californians give us some food for thought on the financial side of the agreement.

First of all, it should be noted that the contract between Springer Nature (SN) and the University of California (UC) has not yet been written: only the Memorandum of Understanding (MoU) was made public this week1. This publication reflects a clear commitment on the part of the universities to make the negotiation processes, and the principles governing the choice between subscription, support or no deal, transparent to local academic communities, but also, more broadly, to all stakeholders interested in these issues.

As we are almost in the middle of the year, the fact that the agreement covers the years 2020 to 2023 has a first important consequence: all the mechanisms necessary for identifying authors, for the various payments and for monitoring will probably not be in place before the end of the year (SN has committed to this by 1 January 2021). In practice, UC will pay in 2020 an undisclosed amount named the “UC 2020 spend” for a Read & Publish deal in which the Publish part will be free of charge. It is only over the next three years that the mechanisms whose combined originality is at the heart of this post will appear.

The Multi-payer Model.
Getting authors and funders involved

One of the original features of this contract with Springer is the adoption of a model first tried in the UC/PLOS agreement: the splitting of an APC into two distinct blocks, the first 1,000 dollars, which is systematically paid by the university, and the remainder, which is paid by the authors if they have the means to do so. This mechanism smells like a device invented by economists, and indeed it is one of them, a professor at UC Berkeley, who describes its purpose in The Scientist:

“In the US, there already were multiple funding sources—libraries paid for subscriptions, and when authors wanted to publish open access, they paid a surcharge on top of that out of their funds,” says MacKie-Mason. “The key thing here is that we’re integrating those into a single contract. That creates cost control for the institutions and the researchers [during the transition to open access], which is critical because the cost of scholarly publishing has been exploding.”

So the solution to the “new serials crisis” would be to involve authors, as UC people have repeatedly stated2 – but aren't they already involved through classical “one-shot” APCs? The idea of combining APCs with institutional support in a single contract is here pushed to its limit, as we will see. In some “transformative agreements”, there is no way for a third party to understand who ultimately pays what and from which source, especially in consortia settings. Here it is quite the opposite: throughout the MoU, a clear separation is made between two sources (see the sketch after this list):

  1. The UC – whether through the California Digital Library or UC itself – covers a $750,000 reading fee, $1,000 of each APC and, as we will detail, more if authors can't pay. These sums are counted separately as the “UC Fully OA Spend”, the “UC Hybrid Spend” and, of course, the reading fee.
  2. The authors pay the “APC Remainder”, whoever the original funder is; these sums play a very limited role in the contract and are not aggregated under specific names.
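To make the split concrete, here is a minimal sketch in Python of how a single APC would be divided under this model, assuming (as the MoU suggests) that UC covers the full fee when the corresponding author has no research funds; the function name and sample price are mine, not the contract's.

```python
def split_apc(apc_price: float, author_can_pay: bool, uc_base: float = 1000.0) -> dict:
    """Divide one APC between UC and the corresponding author.

    Illustrative reading of the UC/SN multi-payer model: UC always
    pays the first $1,000; the author pays the remainder if they have
    research funds, otherwise UC covers the whole fee.
    """
    if author_can_pay:
        return {"uc": min(uc_base, apc_price),
                "author": max(apc_price - uc_base, 0.0)}
    # No research funds available: UC takes over the APC Remainder too.
    return {"uc": apc_price, "author": 0.0}

# With the $3,208 average hybrid APC quoted later in the post:
print(split_apc(3208.0, author_can_pay=True))   # {'uc': 1000.0, 'author': 2208.0}
print(split_apc(3208.0, author_can_pay=False))  # {'uc': 3208.0, 'author': 0.0}
```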

So the splitting is made not only at the level of each article but for the whole contract; yet the “cost control” supported by MacKie-Mason in fact applies only to the UC side: authors can spend whatever they wish on APCs and still benefit from the UC contribution. Since authors have to pay, they also have the possibility to opt out of OA in hybrid journals, which is the default option. Consequently, the deal does not guarantee that all articles by UC corresponding authors will be OA, only those whose authors wish so and, to some extent, are ready to pay, are favourable to hybrid journals, or are APC gold open access supporters. The division and the authors' choice are highly visible in one exception in the contract: if, despite very short deadlines, SN were able to implement the entire workflow before the end of 2020, then it could start invoicing APCs. Under no circumstances would UC have anything to pay, but authors could be solicited:

Should Springer Nature implement the Multi-payer Model before January 1, 2021, Springer Nature may begin collecting the APC Remainder under the terms of the model […]. If the corresponding author does not have research funds available to cover the APC Remainder, then Springer Nature shall not collect an APC for those articles. No UC Fully OA or Hybrid Spend payments will be charged during this time (article 3.8.2).

It is hard to imagine a corresponding author who can get a free APC deciding to pay, unless their grant is nearing completion and they cannot spend it otherwise. But this provision does indeed support the idea of two decoupled payers, as the rules applying to them may differ: the first (UC) does not pay in 2020 but is obliged to contribute afterwards, while the second remains in a logic of choice throughout the contract. But what exactly are the amounts to be paid?

Price, Volume, Participation:
an equation to determine a Hybrid bill

The price calculation formulas are not yet complete, since the agreement is not signed, but the foreseeable variations are known for the whole duration of the contract. For full OA journals, there will be a base price in 2020, with a maximum increase of 3.5% per year. This base price is certainly not the catalogue price, since it is specified that “If at any time during the agreement the then-current list price APC is lower than the APC to be charged under the agreement, the current, lower APC will be charged instead” (art. 3.3). The issue of prices and volumes is most complex when it comes to hybrid APCs. First of all, unit pricing is almost constant, with the same prices in 2020, 2021 and 2022, and a maximum increase of 2% in 2023. But while the paid volume published in full OA appears unlimited, the paid volume published in hybrid journals is very constrained.

First, the number of articles published in hybrid journals by UC corresponding authors in 2019 and in 2020 is calculated, and the smaller of the two values becomes the Base article number. The minimum volume of articles is then simply defined as 85% of this number, over time. The maximum number, on the other hand, depends on two variables: first, an “inflation” of the authorized volume of 5% per year; second, a calculation that depends on the effective participation of authors in the publication scheme. Indeed, the parties expect that between 30% and 40% of the authors of articles will choose to publish in hybrid OA rather than revert to a paywalled publication (orange curve). If the program is successful and more than 60% of the authors adhere, the red curve defines the maximum number of articles; symmetrically, in case of failure – less than 30% – the yellow curve defines this maximum.

In a fashion close to the DEAL agreement, Springer defines a volume control on hybrid publishing, which can allow up to a third more articles than the current hybrid APC volume. But the consequences of exceeding this limit differ from the German counterpart: above the maximum, UC no longer pays its $1,000, and authors – if they so choose – must pay the APC Remainder. At the other end, if the minimum is not reached, UC shall pay “the average hybrid APC for UC corresponding authors from the previous year for the number of articles necessary to bring the total to the minimum. In 2021, the average hybrid APC from 2019 ($3208) shall be used.” So Springer Nature is sure to get (almost) all its money, and UC has a control mechanism which prevents a steep rise in its hybrid spend through volume control.
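My reading of this corridor mechanism can be summarised in a short sketch: the Base article number and the 85%/5% figures come from the MoU as quoted above, while the exact shape of the participation curves is not public, so the tier multipliers below are placeholders of my own.

```python
def hybrid_corridor(n_2019: int, n_2020: int, years_elapsed: int,
                    participation: float) -> tuple[float, float]:
    """Approximate the min/max hybrid article volumes in the UC/SN MoU.

    Floor: 85% of the Base article number (the smaller of the 2019 and
    2020 hybrid counts). Ceiling: the base inflated by 5% per year, then
    adjusted by the author participation tier (the yellow, orange and
    red curves of the MoU figure); the 0.9/1.0/1.2 multipliers are
    invented here for illustration only.
    """
    base = min(n_2019, n_2020)
    floor = 0.85 * base
    ceiling = base * 1.05 ** years_elapsed
    if participation < 0.30:        # failure tier: yellow curve
        ceiling *= 0.9
    elif participation > 0.60:      # success tier: red curve
        ceiling *= 1.2
    return floor, ceiling           # middle tier: orange curve, no adjustment
```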

Hard capping the total costs.
Will UC pay less in the end?

Until now, it could seem that we were analysing yet another “cost-neutral” agreement that could in practice become a high-rise contract: individual APC price inflation, unlimited payment for full OA articles and a controlled maximum rise in hybrid OA would all contribute to a larger bill for UC. Then comes the most original point of the UC/SN contract: a hard cap on the sum of these fluctuating bills. Some agreements, typically the JISC ones, include a price control that says “we will pay this, period”; of course, the trade-off is most often a defined, limited volume. Here, as we read in article 3.6:

In each year of the contract, the Total UC Spend shall be subject to a fee control mechanism, as set out below. All fee control mechanisms are computed in relation to the license fees paid by UC for Springer journals, Adis Journals, Palgrave journals, and academic journals on nature.com in 2020 (“UC 2020 Spend”).

So the starting “subscription” – i.e. Read & Publish – price caps the total cost of the contract, once again in a very precise and, shall I write, twisted way. Starting from the “UC 2020 Spend”, the 2021 total cannot exceed 95% of that sum: if it does, UC first recovers part of the reading fee and, if that is not enough, gets a refund from SN. So the maximum is clear: -5% compared to the starting year. But in 2022 and 2023 the total cannot exceed 98% of that sum, and if it does, UC only gets the reading fee back and nothing else. In other words, there is in fact no fixed maximum payment, and certainly no guarantee that UC will pay less in 2022 and 2023 than in 2020 – and, as we don't know what the various bills amounted to, even less so compared to 20193. The UC side is nevertheless very confident about the outcome, as the associate executive director of the California Digital Library, Ivy Anderson, stated: “The new agreement is expected to save the system money overall, but the exact cost will depend on the number of articles UC researchers publish”.
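Put as arithmetic, and under my reading of article 3.6, the cap works as sketched below; the function is illustrative and assumes the $750,000 reading fee quoted earlier is the refundable part in 2022-2023.

```python
def effective_uc_spend(computed: float, year: int, uc_2020_spend: float,
                       reading_fee: float = 750_000.0) -> float:
    """What UC actually pays once the fee control mechanism applies.

    2021: the cap is 95% of the 2020 spend and is absolute, since any
    excess is refunded (reading fee first, then SN money).
    2022-2023: the cap is 98%, but only the reading fee can come back,
    so any excess beyond the reading fee stays on UC's bill.
    """
    if year == 2021:
        return min(computed, 0.95 * uc_2020_spend)
    if year in (2022, 2023):
        cap = 0.98 * uc_2020_spend
        if computed <= cap:
            return computed
        refund = min(computed - cap, reading_fee)  # refund capped at the fee
        return computed - refund
    return computed  # 2020: the undisclosed "UC 2020 spend" applies
```

With these rules, a hypothetical 2023 bill of 1.10 times the 2020 spend on a $10M baseline would be reduced only by the $750,000 reading fee, landing well above the 98% cap – which is why one cannot guarantee that UC will pay less.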

Whatever the final outcome – and one can assume, given the complexity of the provisions, that the UC side has run many simulations of its final bill – there are three lessons to be learned from this MoU. First, in the absence of price transparency, it is difficult for outsiders to determine whether an agreement is really financially interesting or whether it mechanically leads, as with subscription formulas, to higher prices paid by higher education institutions. Second, this agreement builds a link between the payments of authors and those of the university: it thereby allows the direct inclusion of research funders, while ensuring the traceability and monitoring of flows for each of the parties. It also contains incentives for the behaviour of authors, who benefit from using the UC workflow to partially or totally reduce their own payment. But it is the ability to capture money from funders, third parties to the contract, that is striking – with Coalition S members certainly in mind.

Consequently, and third, the agreement de facto guarantees Springer's revenues by encouraging new spending in the form of APCs and by subsidizing them. Making new provisions to turn the Nature journals into hybrids goes in the same direction. In a similar way to “Pure Publish” agreements that come with a discount on APCs, the UC agreement is transformative in that it explicitly changes universities from fund providers into fund collectors for publishers, with the hope of a diminishing or stable bill in exchange for that service.

  1. We saw on the Dutch case that there could be quite significant differences between an MoU and the actual contract []
  2. See this piece on Impact of Social Sciences LSE Blog []
  3. I previously wrongly tweeted that they would pay less, as I thought the reference was UC 2019 spending []

Living in a post-Ingelfinger world or… The HCQ-COVID-19 publication show

Disclaimer: this post does not address the merits of the treatments proposed by the IHU team nor their risks, and even less the question of whether Prof. Raoult is a genius, a madman or a top scientist who got lost along the way.

It all started with a video, posted on February 25th, 2020, then entitled “Covid-19: endgame”, and uploaded to YouTube by IHU Méditerranée-Infection. In that video clip of less than two minutes, extracted from the end of a seminar, Didier Raoult states that COVID-19 is “probably the easiest respiratory infection to treat” and that chloroquine (CQ) is effective and already “recommended for all clinically positive cases” in China. It wasn’t the first time this infectious disease star had recommended CQ and its cousin molecule, hydroxychloroquine (HCQ), to fight viral infections. Indeed, as early as 2007, he presented these drugs as “an interesting weapon to face present and future infectious diseases worldwide” in the International Journal of Antimicrobial Agents (IJAA). Framed as a recycling of these antimalarial drugs, the article constituted a literature review, mainly of in vitro studies, and was part of the scientific and medical strategy of the IHU: the repositioning of old molecules, free of rights, towards new uses. And this possibility of reuse was taken up in a letter sent on February 11th, 2020 to the same journal (IJAA), accepted the same day and published on February 15th.

The series of IJAA publications continued. The day after the YouTube video, a new article was submitted, specifically dedicated to the use of CQ as a treatment for the COVID-19 epidemic. Accepted the next day, February 27th, and published a week later, it repeated the efficacy claims made by Chinese teams and the resulting clinical recommendation. This assertion is based in particular on one of the strangest references I have ever encountered. Indeed, it is a letter of exactly ten lines published in BioScience Trends, whose body is copied below:

The coronavirus disease 2019 (COVID-19) virus is spreading rapidly, and scientists are endeavoring to discover drugs for its efficacious treatment in China. Chloroquine phosphate, an old drug for treatment of malaria, is shown to have apparent efficacy and acceptable safety against COVID-19 associated pneumonia in multicenter clinical trials conducted in China. The drug is recommended to be included in the next version of the Guidelines for the Prevention, Diagnosis, and Treatment of Pneumonia Caused by COVID-19 issued by the National Health Commission of the People’s Republic of China for treatment of COVID-19 infection in larger populations in the future.

Defined as an “abstract” on the journal site, but without any other body of text, this “article” doesn’t seem to be fully supported by the 7 references listed. It relies mainly on an in vitro study from early February, already widely cited, which indicates that CQ could be effective. In fact, it wasn’t until February 29 that the results of a CQ clinical study were submitted to a Chinese journal, before being published on March 6. But let’s go back to the IHU timeline.

Ten days later, a second video was posted on YouTube, presenting the results of an observational study made in Marseille and showing the effects of HCQ alone and in combination with an antibiotic, azithromycin (AZ). So there was a slight shift: going from CQ to HCQ and adding an antibiotic. The main result is only the absence of virus in the nose and throat, so these are not clinical results, but Didier Raoult drew on them to tell his audience their consequences for the clinical institution he manages:

The fact that you no longer have the virus changes the prognosis. Actually, that’s what infectious diseases are all about. If you don’t have the germ anymore, you’re saved… You have a right to be tested here, and if you’re tested, you have a right to be treated here. That is what we will do.

So basically, for him, results were so good that you HAD to treat people when they tested positive. No more trials or research needed: the time for clinical medicine had come, hoping other places would follow his lead. Slides were available on the same webpage, but there was no link to an existing paper; though the same day, and not mentioned in the video, a preprint was submitted to medRxiv. Simultaneously, as is often the case with biomedical preprints, it was submitted to a journal… the ever-welcoming IJAA, which accepted it, as usual, one day later and published it on March 20th. Before we come to the extraordinary fate of this paper, let us go back to the title of this post and its interest at this point.

From preprints to preprints:
the life and death of the Ingelfinger rule

We can observe from the two examples above a pattern of scientific communication: the IHU first posts videos, then produces preprints and finally publishes articles in academic journals – here the IJAA. This is very unusual, at least in contemporary times, but happened in various ways throughout centuries of scholarly communication. The idea that you first had to communicate with your peers through a journal before reaching “the public” has been neither constant nor dominant in all disciplines. In our era, it was pushed at a key moment in the mid-1960s. Back then, a first wave of preprints was being supported by the NIH and was gaining momentum in some biomedical communities through Information Exchange Groups (IEGs), which would circulate printed copies of unpublished manuscripts by air mail1. Nature started a campaign against the “preprint galore”, and the editors-in-chief of a few European and US biology and biochemistry journals met in Vienna in 1966 to get rid of them by stating that: “The journals listed below will not consider manuscripts for publication if preprints, of essentially identical content, are to be distributed, in substantial numbers, by an agency independent of the author or of the publisher of the journal.”2

That led to the termination of the IEG experiment by the NIH in 1967. Two years later, the editor-in-chief of the New England Journal of Medicine (NEJM), Franz J. Ingelfinger, coined the rule of acceptance of a paper, based on his interpretation of “sole contribution”, de facto forbidding even “circulation-controlled journals” to print something ahead of the NEJM3. In the same sentence, he remarkably included “news media”: he therefore aimed not only at the exclusive circulation of the article within scientific communities, but also at prohibiting the dissemination of its content to journalists and other medical news enthusiasts. In the early 1970s, his work to promote this exclusivity had a double effect: the practice was given the name Ingelfinger Rule, and many high-profile journals adopted it explicitly. While at the beginning of the 21st century the Ingelfinger Rule was often interpreted as a means to fight against the duplication of papers, its aims were more about controlling the circulation of knowledge in order to protect the newsworthiness of “general medical journals”4 and to organize communication about medical academic papers in a specific way, favorable to a limited number of journals.

Indeed, as Vincent Kiernan beautifully described in his 1997 article5, the Ingelfinger Rule had become prevalent in Anglo-American journals. It was in particular the efforts of the International Committee of Medical Journal Editors (ICMJE) that built it into a “publishing standard”, the effect of which was that these journals and their editors-in-chief simultaneously operated a double control:

  1. control on the authors by requiring them not to reveal the content of their articles, and even less so share the figures and other synthetic representations of results.
  2. control on journalists by providing them with preprint copies of articles in advance, while imposing an embargo on them until actual publication by the journal.

As a result, the general press advertises (free of charge) the content of the journals – it is not an article by Dr. X & Y, but an article from the NEJM or The Lancet – and organizes the dissemination of “medical discoveries”, strengthening the influence of these journals within academic communities, among press professionals and with the general public. To conclude his paper, Kiernan questions the durability of such practices in the Internet era and points out the effect of arXiv preprints, citing the efforts of the ICMJE to extend the Ingelfinger Rule to e-prints, with the argument of the direct consequences of biased or false medical knowledge for the public.

The biomedical field resisted preprints for 15 more years and the Ingelfinger Rule largely stood6, even if it was adapted to emergency contexts, such as the AIDS epidemic. But Kiernan’s forecast came true, notably with the creation of bioRxiv in 2013 and the subsequent success of preprints in biology and biomedicine, to the point where preprints became quasi-articles. Consequently, the Ingelfinger Rule was dropped by numerous journals and publishers, even if the NEJM itself keeps a case-by-case policy.

Prof. Raoult and his videos, possibly including slides with the figures so dear to the NEJM, thus live in a post-Ingelfinger world, in which academics can directly manage their own communication, not only in terms of content, but also in terms of comments, criticism, reporting or response. Indeed, we will see that it is not only primary communication that is modified by the abandonment of this rule, but the entire organization of the journal’s centrality in the whole chain of scientific communication.

Chaos and creation around one paper

Let us go back to this first publication by Raoult’s team on the effects of HCQ on viral carriage, published in the IJAA on March 20, 2020. At the time of writing this post, the article has received 1,124 citations according to Google Scholar, but also thousands of tweets, blog posts and other references in press articles according to PlumX, a company owned by Elsevier, itself the IJAA’s publisher. The early circulation of the article was not based on an IJAA press release, but on Raoult’s own video and those of his various networks. As Wired recounts, with the help of a lawyer, a retired doctor, a shared Google Doc and an interview on Fox News – a heterogeneous assemblage à la Bruno Latour – the study published in the IJAA won a quote in a tweet from the President of the United States the day after its publication:

That Trump endorsement of course had enormous consequences on the HCQ market, the launching of clinical trials, HCQ self-medication practices and the scope of the public discussion on the efficacy and dangers of such a treatment. We won’t directly treat these important questions here, but keep following the exotic trajectory of the publication itself. Simultaneously with the Trump tweet, a PubPeer thread was launched on the famous post-publication comment platform, but contrary to the Voinnet affair7, most of the first commentators signed their critiques. Among other topics, the communication trajectory of the paper helped the critique: for example, Leonid Schneider noticed the discrepancies between the figures attached to the video and those drawn in the published paper.

Above and beyond PubPeer, three reviews were quickly published, questioning many aspects of the IJAA paper. The first one was a Twitter thread by a master’s student on March 22nd; the second an 18-page Zenodo paper by three British/Irish statisticians on March 23rd; the third a blog post by Elisabeth Bik, a very famous Dutch microbiologist and scientific misconduct specialist, on March 24th. So only four days after publication – still four times the actual IJAA reviewing delay – the paper was being trounced online. Among the many points, let us note that the publishing history was questioned, some noticing the differences between the first “preprint” on the IHU website and the final paper, others underlining the lack of changes, a hint for them of how tenuous the peer review process had been, the 24-hour delay being surprising to every commentator. The fact that one of the authors was also the editor-in-chief of the IJAA was underlined, as well as the “vanishing” of 6 patients (among 26 treated with the combined drugs), which could completely change the statistical value of the results.

While Prof. Raoult was fighting for HCQ to be authorized for general physicians in France, the online discussion kept going until the learned society behind the journal, the International Society of Antimicrobial Chemotherapy (ISAC), made a troubling press release on April 3rd:

“ISAC shares the concerns regarding the above article published recently in the International Journal of Antimicrobial Agents (IJAA). The ISAC Board believes the article does not meet the Society’s expected standard, especially relating to the lack of better explanations of the inclusion criteria and the triage of patients to ensure patient safety. Despite some suggestions online as to the reliability of the article’s peer review process, the process did adhere to the industry’s peer review rules. Given his role as Editor in Chief of this journal, Jean-Marc Rolain had no involvement in the peer review of the manuscript and has no access to information regarding its peer review. Full responsibility for the manuscript’s peer review process was delegated to an Associate Editor. Although ISAC recognises it is important to help the scientific community by publishing new data fast, this cannot be at the cost of reducing scientific scrutiny and best practices. Both Editors in Chief of our journals (IJAA and Journal of Global Antimicrobial Resistance) are in full agreement.”

So the paper has a lot of problems, but stuck to the peer review rules. This cryptic PR became even more troubling a week later, as it was “replaced” by a joint ISAC and Elsevier press release. In fact, the journal is not owned by the learned society but by the publisher, the IJAA only being an “official society journal”. This second PR is streamlined compared to the first one: the “not meeting standard” sentence has disappeared, and a post-publication peer review audit is announced. Through this example, we can measure how different the situation is from what prevailed under the Ingelfinger Rule. But it is with another paper by Raoult’s team that science communication came back to its 17th-century roots.

From presidential visit to media frenzy:
the marginalization of journals in scholarly communication

After a follow-up study published at the end of March, which made fewer headlines, and as some HCQ trials on diverse patient groups were starting to be published, it is with another observational study that Prof. Raoult showed the world how he really manages scholarly communication. On April 9th, the French president, Emmanuel Macron, unexpectedly visited the IHU Méditerranée and met with Prof. Raoult, who presented him the results of his ongoing study. There was no press, but members of the IHU recorded Macron’s arrival and posted the video, making it available to all French media.

Here we need to go back to the origins of scientific communication, even before journals were born, when the quality of witnesses – meaning mostly royalty kinship – was an important element of the credit given to the narrative of an experiment or an observation8. In our times, it became a two-way flow of credit: Macron was showing his will to base public health on evidence, all the more so when provided by a star scientist, while Raoult was legitimizing his position in the French public health landscape, where critics of his methods and results were numerous.

The next day, Raoult made his first results public, not in the form of a preprint or slides with an associated video, but as a simple tweet with the abstract and a summary table.

This tweet was of course massively picked up and commented on, and it aroused strong media interest, all the more so as the results reinforced those of the previous study by moving from a purely biological effect to a clinical one: “The HCQ-AZ combination, when started immediately after diagnosis, is a safe and efficient treatment for COVID-19, with a mortality rate of 0.5%, in elderly patients. It avoids worsening and clears virus persistence and contagiosity in most cases.” Four days later, Prof. Raoult was invited on the show of Dr. Oz, a famous US TV host harshly criticized for his often unproven medical advice.

https://www.youtube.com/watch?v=uy1cPT1ztko

On the day of the interview, there was no preprint and the paper had not even been submitted to a journal. Yet Prof. Raoult presented his results as facts. It was only on the 20th that the manuscript was sent to Travel Medicine and Infectious Disease9, with 10 days of peer review and publication on May 5th. Tens of thousands of Facebook comments and tweets followed according to PlumX10, though the media endorsed the results as much as they reported the methodological limits of the study – mostly the absence of a control group.

This study is undoubtedly a borderline case in the marginalization of journals, with communication aimed primarily at peers being out of step with announcements to political leaders and media outlets. Nevertheless, the massive availability of preprints, abstracts and other materials on topics such as the effectiveness of masks or tests, the persistence of coronavirus on this or that surface, or cases of cure, has led to significant media coverage. From the point of view of the public authorities and the general public, it could have strengthened the authority of academic journals, once again in a position to assert their necessity as an obligatory passage point for public dissemination. But this return to grace assumed that journal peer review is an effective barrier against “bad science”, a hypothesis dismissed by thirty years of studies and literature.

Prestige journals in epidemic times:
an economy of reputation crumbling down?

Indeed, prestige journals are bad for methodology: they don’t follow their own standards on reporting clinical trials, nor, more generally, disciplinary standards. Yet they remain prized places to publish, even during the pandemic, when preprints are so trendy because of the urgency of sharing results and knowledge. And some HCQ papers were quietly published in such journals, until one observational study seemed to close the debate on this treatment’s efficacy and risks.

For this study, there was no advance communication, no preprint, but a straight article published in The Lancet by 4 authors. Oh, yes, there is a little gem still there on Twitter: two days before online publication, the “first author” answered a tweet by Richard Horton, editor-in-chief of The Lancet:

https://twitter.com/MRMehraMD/status/1263034198870429696

The reaffirmation of their confidence in the journal peer review system, even in times of health emergency, is comforting. And their trust is shared by the highest health authorities. On May 22nd, the study was published, asserting on the basis of a gigantic aggregation of patient databases from almost all over the world that HCQ is not only inefficient, but also very dangerous for COVID-19 patients. This announcement came at a time when many ongoing trials included HCQ treatment arms. As a result, the WHO decided the next day to evaluate the continuation of its Solidarity trial and announced its position on May 25th:

“Having met on 23 May 2020, the Executive Group of the Solidarity Trial decided to implement a temporary pause of the hydroxychloroquine arm of the trial, because of concerns raised about the safety of the drug. This decision was taken as a precaution while the safety data were reviewed by the Data Safety and Monitoring Committee of the Solidarity Trial. “

Nevertheless, in a manner similar to Prof. Raoult’s article, statisticians then looked at the content of the article and the data it provides, and began to point out obvious errors. But for some it was more a police investigation than data re-analysis: how can there be only 4 authors (and no acknowledgements) for such a study? Why are the hospitals involved not mentioned? What is this mysterious enterprise – Surgisphere – unknown until recently, which provides this data? What is the career of its manager and co-author of the paper? Leaving aside questions about the company, 6 days after publication they ended up writing an open letter to the authors and the journal, signed by 201 colleagues and endorsed by James Watson11. They mainly pointed out the necessity of opening the data, all the more so considering the extraordinary results, and described obvious errors, questioning the quality of the database and the way the data was gathered (including its ethics).

The Lancet and the authors were very prompt in responding to these criticisms: on May 30, a correction was published, covering very minor aspects: “the numbers of participants from Asia and Australia should have been 8101 (8·4%) and 63 (0·1%), respectively. One hospital self-designated as belonging to the Australasia continental designation should have been assigned to the Asian continental designation.” Of course, the conclusion was a classic of such corrections: “There have been no changes to the findings of the paper.” But critics kept pushing on the problems, whether HCQ supporters (Prof. Raoult himself claiming “fake data” or “manipulated data” on Twitter) or clinicians trying to reconcile the paper’s data with their own. So, only 3 days after the correction, The Lancet put an expression of concern on the paper:

“Although an independent audit of the provenance and validity of the data has been commissioned by the authors not affiliated with Surgisphere and is ongoing, with results expected very shortly, we are issuing an Expression of Concern to alert readers to the fact that serious scientific questions have been brought to our attention”.

The paper was still salvageable, thanks to the impending independent audit. Alas, two days later, the three authors not affiliated with Surgisphere threw in the towel, stating they had never seen the data, and demanded the retraction of the article. The Lancet officialized it, provoking expressions of outrage, questions about the seriousness of the journal and… the reactivation of the suspended trials. Thus, in less than a week, the worldwide study published in what many consider to be “one of the best medical journals in the world” was awarded the 3 labels commonly used in post-publication peer review (Correction, Expression of Concern, Retraction12), nullifying the evidence claimed on May 22nd. But the Surgisphere story goes beyond that article: another paper, published by the NEJM on the “same kind of data”, was retracted the same day. Moreover, there are at least two regions, South America and Africa, which have suffered and will suffer from public health policies developed on the basis of preprints and data published by Surgisphere. While #LancetGate was trending on Twitter, in-depth inquiries were being made into Surgisphere and the fourth author of the study who, ironically, had coauthored a paper entitled “Combating Fraud in Medical Research” in 2013!

Science at its best:
boring, negative results

To conclude this story on scholarly communication, we have to add that most HCQ articles have not been given the same media treatment and have not been communicated in fancy ways by their authors: a preprint on bioRxiv or medRxiv, then an article, often with unspectacular results and with limitations due to the number of patients, their previous health conditions, incomparability between groups, etc. One day before the retractions, the same NEJM published the first randomized controlled trial on post-exposure use of HCQ, very close to the “Raoult treatment”, though without AZ. Here is part of the published abstract:
“Side effects were more common with hydroxychloroquine than with placebo (40.1% vs. 16.8%), but no serious adverse reactions were reported. After high-risk or moderate-risk exposure to Covid-19, hydroxychloroquine did not prevent illness compatible with Covid-19 or confirmed infection when used as postexposure prophylaxis within 4 days after exposure.”

What do we get from this abstract? That the article is a typical example of those “negative results” that usually fail to be published, leading to significant biases in the evaluation of treatments in clinical trials through “publication bias”13. And yet, not because of its own interest, originality or breakthrough knowledge, but because of its relevance to public health in an epidemic situation, this trial was published by the other “world’s best medical journal”.

While predictions of “really bad science to come” have rung true for most commentators, supported by a high number of retractions, the COVID-19 academic publication landscape has also shown a massive uptake of preprints, public education on scientific controversies, conflicts of interest and statistical analysis, and furthermore… yes, the publication of null results in prestige journals. Whether you think this is a total mess and you preferred the Ingelfinger Rule depends on the way you conceive of academic research and scholarly communication. Back then, preprints were non-existent in biology and social networks had yet to be invented, but The Lancet published the Wakefield paper on the link between the MMR vaccine and autism. Was it a better time?

  1. See Cobb, Matthew. 2017. “The prehistory of biology preprints: A forgotten experiment from the 1960s.” PLoS Biology 15.11 []
  2. Thorpe, W. V. (1967). International Statement on Information Exchange Groups. Science, 155(3767), 1195-1196. []
  3. Ingelfinger, Franz. “Definition of ‘sole contribution’.” N Engl J Med 281 (1969): 676-677. []
  4. Ingelfinger, F. J. (1977). The general medical journal: for readers or repositories?. New England Journal of Medicine, 296(22), 1258-1264. []
  5. Kiernan, V. (1997). Ingelfinger, embargoes, and other controls on the dissemination of science news. Science Communication, 18(4), 297-319. []
  6. See as an example this defense of the rule by Nature in 2010, five years after having written they were ok with preprint servers []
  7. See Torny, Didier. “Pubpeer: vigilante science, journal club or alarm raiser? The controversies over anonymity in post-publication peer review.” 2018, and Guaspare, Catherine, and Emmanuel Didier. “The Voinnet Affair: Testing the Norms of Scientific Image Management.” Gaming the Metrics: Misconduct and Manipulation in Academic Research (2020): 157. []
  8. See the classic book Shapin, S., & Schaffer, S. (1985). Leviathan and the air-pump: Hobbes, Boyle, and the experimental life (Vol. 109). Princeton University Press []
  9. A journal in which one of the authors is an associate editor, as Raoult’s critics have underlined []
  10. The story is quite different within the academic world, with “only” 21 citations so far, far less than the March study. In fact, many observational studies and trials were competing with this study []
  11. EDIT June 9th: James Watson gave a fantastic interview on an Australian radio station where he goes into detail about how he started and ran this 5-day inquiry; hear it there []
  12. On the standardization of journal policies, see Pontille, D., & Torny, D. (2017). Beyond Fact Checking: Reconsidering the Status of Truth of Published Articles. []
  13. There is a huge literature on this topic in the last 30 years, see as an example this The Lancet article, Easterbrook, P. J., Gopalan, R., Berlin, J. A., & Matthews, D. R. (1991). Publication bias in clinical research. The Lancet, 337(8746), 867-872. []

Faustus’ pact with Lucifer or… How Open Science turns into sustaining Elsevier’s data infrastructure in exchange for open access papers


“On these conditions following:
First, that Faustus may be a spirit in form and substance.
Secondly, that Mephistophilis shall be his servant and at his command.
Thirdly, that Mephistophilis shall do for him, and bring him whatsoever.
Fourthly, that he shall be in his chamber or house invisible.
Lastly, that he shall appear to the said John Faustus at all times,
in what form or shape soever he please.

I, John Faustus of Wittenberg, Doctor, by these presents do give both body and soul to Lucifer, Prince of the East, and his minister Mephistophilis, and furthermore grant unto them, that twenty-four years being expired the articles above written inviolate, full power to fetch or carry the said John Faustus body and soul, flesh, blood, or goods,
into their habitation, wheresoever. By me, John Faustus.

Faustus, CC BY Bart Everson

The legend of Faust has known many versions, but that of Christopher Marlowe, quoted above, is no exception to the common rule: it is the absolute thirst for knowledge that drives the scientist to conclude this pact, while the evil or deceptive nature of Lucifer does not play a major part in its making1. So calling on this reference for the signing of an agreement between scholarly institutions, by definition producers of knowledge, and a publishing house, however powerful it may be and normally only responsible for disseminating that knowledge, may seem counter-intuitive. Yet, as we shall see, it is the one that is required, as the relationship between the two parties may potentially be inverted. With this new agreement, Elsevier will try to become the knowledge-producing entity, the one that gives these institutions and their authors the information they think they absolutely need.

From subscription to a Read & Publish pilot
to a full Publish & Read agreement

The relationship between the Dutch universities, represented here by SURFmarket B.V., and the publisher Elsevier is very old: it mainly consisted of the supply of journals, first as paper subscriptions, then through electronic access from the end of the 20th century until 2015. When a new contract was signed in March 2016, it contained not only subscription services but also provisions for the open access publication of a limited number of articles, originally 3,600 over 3 years. This agreement was not as successful as expected: for example, 1,300 articles had not been “consumed” by the end of this first agreement. Nevertheless, from amendment to amendment (7 in total), the contract was extended in terms of the journals concerned (Cell Press) and temporally, until 20 April 2020.

In contemporary classifications, this agreement could therefore be considered a Read & Publish, with a subscription fee and open access publications produced at no additional payment. The first parts of the new contract show a reversal of this logic by displaying a unified cost for all the services provided by Elsevier: reading is no longer separated from publication in the pricing, even though the provisions of the former are much more complex and pages-long than those of the latter.

Indeed, as is often the case in subscription contracts, numerous provisions govern the rights to access and read content, but also the duties of the publisher in terms of document supply and the scope of services. But, as we saw in the case of the Springer/DEAL agreement, the provisions on publication services can be relatively complex. This is not the case here: no financial exchange linked to each publication, no limit on the number of articles, no separation between publication in hybrid and full open access journals; only two pages define the conditions of publication. Beyond the description of the workflow, one article should be highlighted:

Both parties are committed to reach 100% Open Access during the term of this Agreement, In line with this joint ambition, Elsevier offers Corresponding Authors the possibility to publish Gold Open Access in the widest possible range of Elsevier journals under the Terms of this Schedule 4. As per the effective date of this Agreement 95% of the journal articles by the Corresponding Authors are eligible to be published Open Access. For the remainder of the journal articles, Elsevier will continue to strive for sustainable immediate open access options across its journal portfolio to support the 100% Open Access goal.

As with a large number of technologies, lack of success is not necessarily an obstacle. Whereas, in spite of more than four years of possible publication under the previous agreement, only a fraction of Dutch authors had chosen this route, Dutch universities this time aim for 100% open access, and Elsevier promises them that almost all the journals it distributes will serve this end, while at the same time authorizing authors not to choose Open Access (p. 45), which pushes the objective of 100% OA for corresponding-author papers further away.

The whole scheme is close to the one signed by Elsevier and Bibsam, the Swedish consortium, after they had spent almost 2 years with no deal. But in a recently published article2, the Swedes claim they are actually paying less than before in total costs, while signing an agreement under which Swedish authors are all but mandated to choose OA publication.

More services means more costs

On this OA publication part, the Dutch contract is therefore not just a continuation of the previous one, since new journals are involved and technical provisions are made to publish “by default” in open access under a CC-BY license. Moreover, the volume of publishable articles – even if it was previously never fully consumed – is now unlimited. This expansion of the service is accompanied by a sharp increase in costs. If we take the amounts listed in the various amendments to the 2016-2020 contract and add the new amounts, we obtain the following graph, quite different from the Swedish one3:

Over a “long period” (9 years), we therefore observe a 40% increase in costs, meaning inflation of more than 4.3% every year (a quick check of this rate is sketched after the quotes below). Far from the assertion of “cost neutrality” in the OA2020 text of 2015 and the initial hypotheses of Coalition S, the merely potential transformation of all Dutch publications into open access articles is therefore extremely costly in this case, and it renews the observations of the serials crisis already made by SPARC 25 years ago. Even if the amount paid is constant between 2021 and 2024, there is no guarantee that it will not rise sharply again after the end of the current contract. Unsurprisingly, financial information was completely absent from the press release, with Dutch institutions touting the new agreement’s objectives as if they were already realised:

NWO President Stan Gielen said: “Enabling Open Access to research results has been a core mission for NWO since 2003. This agreement is a giant step in our collective ambition to provide 100 percent Open Access for all publicly funded research in the Netherlands.”
NFU / CEO of Amsterdam UMC Hans Romijn, said: “This is definitely a game changing agreement in open access publishing in medicine from both national and international perspectives, considering the large impact and the volume of Elsevier journals. This will certainly contribute considerably to the advancement of research, and, most importantly, better treatments for our patients.”
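As flagged above, the 4.3% figure can be verified with a quick compound-growth computation, assuming the 40% total increase is spread over the 8 yearly price steps of the 9-year span:

```python
# Implied annual inflation rate: solve (1 + r) ** 8 = 1.40 for r
annual_rate = 1.40 ** (1 / 8) - 1
print(f"{annual_rate:.1%}")  # -> 4.3%
```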

The same assertions have been made over the last 10 years about the agreements signed by different consortia, highlighting the open access part of such deals. They are however very different from the “revolutionary idea” proposed by Elsevier in Autumn 2019 about data. In fact, it was so revolutionary that it leaked out:

https://twitter.com/sarahderijcke/status/1190610725250764800

As Sarah de Rijcke, a distinguished science and technology studies scholar, underlines, Elsevier then tried to directly exchange open publications for data, continuing the Big Publishers’ strategy of investing in scholarly infrastructures in order to maintain their profits while adopting open access for publications4. That led to a public discussion of the ongoing negotiations and a VSNU communication denying that metadata and research data were being “sold” to Elsevier. In December 2019, a press release reaffirmed that data remained the property of universities and that some principles had been adopted to avoid vendor lock-in. Let us now see how this was dealt with in the final agreement.

Elsevier as a data company
and how you will be willing to pay for it

Apart from the introduction pages, one has to reach page 102 for the data and the “Open Science Services for Research Intelligence and Scholarly Communication” that are part of the agreement. The first two pages of this section describe the collaborative principles quoted in the December 2019 press release, which look very consensual:

  1. interoperability and vendor neutrality
  2. transparency, inclusion and collaboration
  3. access to research data and metadata
  4. data portability

If we add to this the common governance structure specified in the last pages and the fact that each party retains its data at the end of the agreement, this part of the agreement can be considered a true joint collaboration. Nevertheless, Mephistopheles drapes himself in the details, and a full reading of the articles on page 104 underlines how much Elsevier now considers itself a data company. Firstly, by default, everything belongs to Elsevier, except what is directly “provided” by the institutions. Secondly, under no circumstances can intellectual property resulting from the development of services be shared. Thirdly, if common intellectual property were to be created, a new agreement would be needed, in which Elsevier would have ownership and the institutions a free but non-exclusive right of use. Fourthly, all existing openly licensed data provided by the institutions is directly reusable by Elsevier. Fifthly, even in the absence of such data, Elsevier may develop equivalent or similar services with other partners. Finally, sixthly, if sensitive data or data belonging to third parties were to be included in the services, the responsibility would of course lie only with the signatory institutions.

The contrast is therefore striking: on the one hand, Elsevier is (finally) ready to release the publications of all its journals under Publish & Read agreements in return for a fee; on the other hand, the publisher locks up all the data and does not wish to share it under any circumstances, underlining how much data is now considered the real valuable object of the academic world5.

But what pilot services are implemented in the agreement? For the time being, and contrary to the subscription and open access publication services, none are specified. Only examples are given, in a table on page 103, reproduced in the FAQ and below:

  1. Aggregation and deduplication service based on CRIS systems: improves findability and visibility of NL research outputs by aggregating and deduplicating separate CRIS systems into a Pure Community module available to all institutions, which can serve as a building block for a NL open knowledge base.
  2. NL Research data: link research data from member institutes’ affiliated researchers in subject- or domain-specific repositories into a Dutch knowledge base.
  3. Funding information: link NL research outputs to grants and funders (EC, ERC, NWO, RVO, ZonMw), to allow for improved tracking/assessment of the impact of funded research.
  4. Health Data Management: link NL health “data silos” in a secure HDM platform.
  5. OA compliance as a service: a proposed service to better use the knowledge base for OA publication reminders, meeting funder requirements, collecting assets and reporting.
  6. Fair recognition and reward: a proposed service to integrate a wider array of metrics and success stories for a better, wider recognition of academics, including teaching, society outreach, management, etc.

This list contains extremely different objects: some look like pure IT services that could be provided by companies operating outside the academic world, building shared data infrastructures. Others are based on the crossing and enrichment of very specific academic data, and are therefore likely to feed the Elsevier databases even more, for example to build its own Open Science Monitor for diverse institutions. Finally, the last item on the list is quite staggering, since it is nothing more or less than the project of delegating to Elsevier a service for the individual evaluation of researchers, including of course open science dimensions.

Whether these pilots come true or not, this last part of the agreement underlines the extent to which it embodies a dystopian vision of Open Science, portrayed by Philip Mirowski as an extension of platform capitalism6. It strengthens Elsevier’s position as owner of scholarly infrastructure, provides the company with potential models for new services and organizes digital labor to enrich the data it already owns, all while institutions continue to pay huge sums for access to its publications in exchange for the “liberation” of a few thousand open access articles, which will of course drive web traffic to its servers. Maybe the new services will never see the light of day and this agreement will just be another Publish & Read. But if not, Faustus will not only have increased his dependence on the publisher, but will have empowered it to the point where it becomes the real information provider in their relationship, publications being reduced to “raw data”.


  1. this post was cowritten by Quentin Dufour []
  2. Olsson, L., Lindelöw, C. H., Österlund, L., & Jakobsson, F. (2020). Cancelling with the world’s largest scholarly publisher: lessons from the Swedish experience of having no access to Elsevier. Insights, 33(1), 13. DOI: http://doi.org/10.1629/uksg.507 []
  3. EDIT: part of the rise could also be attributed to the inclusion of new Dutch institutions in the agreement []
  4. See this wonderful conference paper: Posada, Alejandro, and George Chen. “Inequality in knowledge production: The integration of academic infrastructure by big publishers.” 2018 []
  5. On a side note: it remains unclear whether article metadata will be released under a CC0 license in Crossref, continuing or not the anti-open citations Elsevier policy []
  6. Mirowski, Philip. “The future(s) of open science.” Social Studies of Science 48.2 (2018): 171-203. []

Making a transformative deal with DEAL or… How 51 pages of contract are needed to replace subscriptions

This post should not have come into existence. In fact, for a long time, “contracts” and “agreements” between publishers and higher education and research consortia have not only been proprietary texts, but have been filled with confidentiality clauses preventing their disclosure. This culture of secrecy is still there, as the agreement between Springer and DEAL states on its 45th page1.

Disclosure of agreement
It is Publisher’s position that the terms of this Agreement are proprietary, however the Parties have agreed in this case that the Agreement is placed under a Creative Commons CC-BY-ND 4.0 license and may be made public under this license.

Indeed, the pursuit of transparency accompanying the open access movement has led in recent years to the disclosure of these contracts, highlighting the very large financial sums involved in accessing the scientific literature2. But beyond the figures, the nature of the contracts and their concrete provisions are little discussed outside of limited circles, notably in library & information science3.

The purpose of this post is therefore to propose a first analysis of the structure of this agreement, before focusing on its financial part, the most original one, which is supposed to drive the transition to open access. But first we need to describe the two partners to the agreement. On the publisher side, we have Springer, or rather Springer Nature Customer Service Center GmbH: in practice, an entity that covers not only Springer and Nature publications, but also BioMed Central and Palgrave Macmillan, i.e. more than 2,800 journals. On the customer side, it’s a bit more complicated: the negotiator is an intermediary, MPDL Services GmbH, which acts on behalf of Projekt DEAL, a consortium initiated by the Alliance of German Science Organizations to negotiate nationwide transformative “publish and read” agreements with the largest commercial publishers of scholarly journals. The consortium structure therefore complicates the terms of the agreement, with Eligible Institutions that can become Members with associated rights and duties.

Before entering into the agreement, it is important to add how much the writing itself shows intensive interpretative work on its terms. As in any contract, the key terms are of course defined: “eligible articles”, “publishing services” or “open access license”, among many others. But one also finds in the agreement no less than 18 occurrences of “For the avoidance of doubt” and 48 of “For clarity”: redundancies aimed at limiting the ambivalence of written propositions and injunctions, and hints of the carefulness of both parties to limit the risks generated by the agreement.

From a simple preamble
to a complex folded agreement

At first, things seem really simple, as the preamble states the common aim of the two organizations. In fact, they share the goal of a rise in Open Access publications in the BOAI sense, with its known advantages, and they underline the scope of this agreement compared to previous ones.

The parties enter this contract with the goal to enable open access publishing of articles from German-funded researchers in Springer Nature journals, to make these articles available to the public worldwide, and to provide access for German-funded researchers to most of Springer Nature content. At time of signing, the contract becomes the world’s largest transformative open access agreement, making it possible for over 13,000 articles annually from German-funded researchers to be made immediately available Open Access for use and reuse from the moment of publication, bringing the benefits of maximum visibility, increased usage and citations, and greater and broader impact to researchers across Germany.

Yet, the summary of the agreement depicts a complex set of successive services, which highlights the concrete constraints of a “Publish and Read” agreement for such a large consortium. The actual starting date of the agreement is far away, since the institutions in practice have several months to adhere to the terms of the contract and to put in place the necessary infrastructures to carry it out. It is only from August 2020 that centralized funding for open access publishing will really kick in. However, researchers from affiliated institutions can already access Springer content from now on. This paradox is resolved if one considers that the R&P agreement is in fact one contract overlaying four contracts between the parties, named as follows:

  1. Fully Open Access Publishing
  2. Hybrid Publishing
  3. DEAL Journal Archives
  4. Reading Access

Let’s start by looking at the last two, which are the simplest in financial terms. Reading Access (p. 31-41) defines the conditions of access to Springer’s content and provides for cases in which this service is discontinued (in particular non-payment in connection with the other components), but does not itself contain any financial elements. Reading is therefore provided free of charge for researchers at the member institutions of the DEAL project, as this deal is really a “Publish & Read”. The “DEAL Journal Archives” (p. 27-30) is charged, but for a fixed sum of €3.75 million. It allows the “upgrading” of all the institutions on the journal legacy, a little over 3 million articles, and the constitution of a “dark archive” that can be used during and after the contract.

Still, there are some interesting articles in these parts, for example the fact that DEAL can tell Springer to cease reading access for Member institutions if these institutions fail to pay the DEAL operating entity (p. 32). We can also read that the English-language agreement is the one that prevails (p. 40); considering that both parties are German and that German law applies, in Heidelberg, in case of disagreement, this is quite intriguing. Finally, at the opposite of the philosophy of Open Access, there are very strong limitations on the uses of the archive and of current content: access, download and very strict usage in academic courses. In particular, text and data mining for a given Member institution is only authorized after an addendum is signed (p. 34). It is therefore clear that already closed content remains paywalled and that the transformative will applies only to future publications.

Controlled Gambling
on future open access publishing

But how can this transformative aspect be translated into a contract? As we shall see, there is a form of gambling – within certain limits – carried out by both parties in the two contracts at the heart of the scheme: the Fully Open Access Publishing (p. 7-14) and the Hybrid Publishing (p. 15-26). The first has become quite standard, and very close to the contract signed by DEAL with Wiley at the beginning of 2019: a centralized payment system with corresponding author recognition and verification, sharing of metadata and financial reporting, all in exchange for some deduction on the price of APCs (p. 14).

For the purposes of calculation of the APC Rates, the list price increases for any Article Processing Charges under these Product Terms will not exceed 3.5% per journal title per year (“Cap”); increases will be calculated based on the 2020 list price.
For BMC and certain other Springer titles which are included in the Open Access Journals, Publisher will apply in addition to the Cap a 20% discount, the journals being eligible for such discount will be identified accordingly in the DEAL Journal List.

Price control is therefore very limited: although the reduction on the “public price” is not negligible, it can quickly be offset by the foreseeable inflation of the full OA APC costs charged by Springer. On the one hand, a price rise at the 3.5% limit is almost certain, given the “natural” evolution of APC prices; on the other hand, the current APC price insensitivity leads us to predict that the number of articles published in full OA journals will increase4. But this is precisely Springer’s gamble in signing this type of deal: quickly making up in quantity of articles for a limited reduction in unit price. And this gamble is all the bigger here, given that its other source of income, under the Hybrid Publishing agreement, may fall in 2021, 2022 or 2023.
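As a minimal sketch of the cap clause quoted above (assuming the 3.5% cap compounds yearly from the 2020 list price, and that the 20% discount applies on top of it for the eligible BMC and Springer titles), the maximum chargeable APC could be computed as follows; the function name and figures are illustrative:

```python
def max_apc(list_price_2020: float, year: int, discounted: bool = False) -> float:
    """Upper bound on an APC under the cap clause (illustrative sketch)."""
    capped = list_price_2020 * 1.035 ** (year - 2020)  # 3.5% yearly cap on list price
    return capped * 0.8 if discounted else capped      # 20% discount for eligible titles

# e.g. a 2,000 EUR APC in 2020 could rise to at most ~2,142 EUR by 2022,
# or ~1,714 EUR for a title eligible for the 20% discount
print(round(max_apc(2000, 2022)), round(max_apc(2000, 2022, discounted=True)))
```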

That is the biggest surprise of this Springer-DEAL agreement. Reading the announcement of the agreement on January 9, 2020, one would have thought that this part of the deal would once again be a copy of the Wiley agreement. Indeed, the fee5 of €2,750 for any research article in a hybrid journal published by Springer, signed without limit with Wiley, was communicated6. However, a very different expenditure scheme was accepted by both parties (p. 25), represented in the following image.

For the year 2020, the amount is based on a “Reference Value” (RF), the product of the estimated number of articles to be published by €2,750, that is €26,125,0007. The RF does not move during the contract and thus looks very much like a “subscription price” from Springer’s point of view. Nevertheless, there is a complex real price paid, which only partly takes into account the actual number of articles published. In 2020, the minimum invoice is the RF; if more articles are published, the price can go up to 5% more. In 2021, the minimum is 95% of the RF and the maximum 10% above it; in 2022, 85% and up to 20%; and finally, at DEAL’s option, in 2023, 75% and up to 30%.
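Here is a minimal sketch of our reading of this corridor (p. 25). The estimate of 9,500 articles is derived from €26,125,000 / €2,750; the clamping of the calculated fee to a yearly floor and ceiling is our interpretation of the scheme, not contract text:

```python
PAR_FEE = 2_750                 # EUR per research article
RF = 9_500 * PAR_FEE            # Reference Value: 26,125,000 EUR

CORRIDORS = {                   # year: (floor, ceiling) as fractions of the RF
    2020: (1.00, 1.05),
    2021: (0.95, 1.10),
    2022: (0.85, 1.20),
    2023: (0.75, 1.30),         # only if DEAL exercises its option
}

def invoice(year: int, published_articles: int) -> float:
    """Calculated Total PAR Fee clamped to the yearly corridor (interpretive sketch)."""
    floor, ceiling = CORRIDORS[year]
    calculated = published_articles * PAR_FEE
    return min(max(calculated, floor * RF), ceiling * RF)

# In 2022, 7,000 articles (19.25m EUR) would still be invoiced at the 85% floor
# (~22.2m EUR), while 12,000 articles (33m EUR) would be capped at 120% (31.35m EUR).
print(invoice(2022, 7_000), invoice(2022, 12_000))
```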

On the upper side of the RF, from Springer’s point of view, the risk is to publish “too many” Hybrid OA articles. In such a situation, they would “miss” some revenue which would hypothetically have been generated by individual “Open Choice” APCs. From DEAL’s point of view, it is literally an insurance against a growing cost generated by the capture of publications by Springer journals8:

For the avoidance of doubt, Publisher will continue to publish Eligible articles even if the Upper Threshold is met or exceeded. Publisher will never charge any part of the Calculated Total PAR Fee exceeding the Upper Threshold, irrespective of the actual Calculated Total PAR Fee and/or number of Published Articles.

If we now look at the other side of the RF, roles are reversed: the minimum invoice is an insurance for Springer that, if for whatever reason German authors don’t use the agreement to go Hybrid OA, it gets some value back now that reading is free of direct charge. From DEAL’s point of view, there is the risk of “paying for nothing”, and it could be an incentive to push researchers to use Hybrid OA, as it is “already charged”, rather than choosing the Full OA road, discounted but limitless as far as costs are concerned.

How transformative is the DEAL deal?

We can point to four potential or actual transformations stemming from the agreement, which runs until the end of 2022 with an option at DEAL’s discretion for 2023. First, obviously, there is the construction of a demanding workflow to regulate all the exchanges of authorship, institutional and financial information, not only between Springer and DEAL, but also between the DEAL operating entity and the Member institutions. Indeed, as with other Publish & Read contracts, the sums actually paid by research-intensive institutions will be much higher than in the past, and conversely, more teaching- or practice-oriented institutions will pay less. What is the cost of such a workflow for both entities? Is it easily scalable to other publishers/consortia? How will some institutions react to their growing costs?

Second, this agreement raises the issue of researchers’ enrolment in open access publishing, even if the money does not seem to come from their own pockets or grants in this case9. Will they agree to publish in hybrid OA? Will they, on the other hand, remain insensitive to the total cost of APCs? Will they assume the position of corresponding author more often than their foreign colleagues? What will be the associated institutional policies: an obligation to publish in open access or, on the contrary, a logic of individual choice? Answering these questions will make it possible to observe whether open access is indeed becoming the norm for German researchers in their publications at Springer.

Third, in direct connection with the previous transformation, the parties took calculated risks in signing this agreement. Springer may see its revenue fall by between 15% and 20% in 2022 (APC discount at constant volume, minimum Hybrid Publishing price) in the event of failure with researchers, workflow problems or major disagreements within DEAL. Symmetrically, DEAL members risk a significant increase in the total price, with a maximum of +20% on the Hybrid Publishing price and an explosion of full OA APCs if production moves to these journals. Transformative action at constant cost, because there is “enough money in the system” as OA2020 stated in 2015, is therefore not at all guaranteed.

Finally, there remains the question of the state of things at the end of the contract. If all goes well in their view, DEAL will exercise the 2023 option, but what happens beyond that? And if they don’t, what will be their negotiating power? Will Springer be happy if both OA deals don’t meet enough success to maintain its current profits? Will the “flagship journals” listed in the Wiley agreement be used to put some competitive pressure on Springer? Will Springer journals still be predominantly hybrid journals? Will the Coalition S ultimatum on the end of APC funding for this type of journal in 2024 be credible? There is nothing in the agreement to answer these questions, and in particular there is no commitment from Springer to flip its journals then. So, contrary to the recent ACM Open model, this agreement does not constitute an irreversible transformation to open access. If things go south, subscriptions could be back at the very heart of the next agreement.

  1. The agreement is available on the Projekt DEAL dedicated webpage with its own DOI. Announced at the beginning of the year by both parties, the full agreement was discreetly added in mid-February. Thanks to Quentin Dufour for flagging this document []
  2. According to this presentation by the European University Association, more than one billion euros a year for its members, including 700 million for journals []
  3. Typically the section “business models” of the Scholarly Kitchen website. []
  4. On these two points, see the remarkable article by Shaun Yong-Seng Khoo, “Article processing charge hyperinflation and price insensitivity: An open access sequel to the serials crisis.” Liber Quarterly 29.1 (2019). []
  5. Technically, it is not an APC as stated in the FAQ page: “different from an Article Processing Charge (APC), the PAR fee, paid centrally by participating institutions for each article to appear under the DEAL agreement, covers the cost of the open access publishing services rendered and, to a lesser degree, reading access in Springer Nature subscription journals.” []
  6. In the Wiley deal, if I understood it correctly, the baseline payment is guaranteed, unless it is shown that Wiley technically limits the actual publication of Hybrid OA; but there is no maximum limit for the payment of €2,750 per article []
  7. I do not go into detail here about article types, and in particular “Non Research Articles”, the price of which is €917 []
  8. Notably by the shift of corresponding author from a foreign researcher to a German one. []
  9. The actual source of money for APCs is not addressed at all in the contract; it is probably part of DEAL's internal financial mechanics, which are not public to my knowledge []

The Coming of Age of Open Access (I) or… Where are the alternative journals 18 years after the BOAI?

For most of us, February 14th is Valentine's Day; for open access activists and lovers, it is also the celebration of the BOAI anniversary. It was 18 years ago: they were sixteen, meeting in Budapest in December 2001. Far from agreeing on everything, they nevertheless co-signed a landmark declaration published on February 14th, 2002. 18 years later, it is the coming of age for Open Access, a time to look at what has been changed, redefined, gained and missed. To start with, we have to remember that the BOAI really defined open access as a virtually unlimited re-use of academic documents:

By “open access” to this literature, we mean its free availability on the public internet, permitting any users to read, download, copy, distribute, print, search, or link to the full texts of these articles, crawl them for indexing, pass them as data to software, or use them for any other lawful purpose, without financial, legal, or technical barriers other than those inseparable from gaining access to the internet itself. The only constraint on reproduction and distribution, and the only role for copyright in this domain, should be to give authors control over the integrity of their work and the right to be properly acknowledged and cited ((BOAI, 14th February 2002)).”

They also put together what had largely been separate before, the soon-to-be-named “green road” and “gold road”. Nevertheless, contrary to popular belief and to the supposed “original version” of the BOAI reproduced in many copies on the web, they were not calling them “self-archiving” and “open access journals”. Indeed, the reference above is not the true original version, but one slightly changed in the summer of 2002. The former one, which can be seen on the Web Archive, stated: “Open access to peer-reviewed journal literature is the goal. Self-archiving (I.) and a new generation of open-access alternative journals (II.) are the ways to attain this goal.” So, following the BOAI, we will deal with this coming of age in two successive posts: this first one will focus on alternative journals, the second one on the triumph of organized archives over self-archiving.

A revolution with no defined business model

How are these alternative journals described? Why and how are they alternative, and to what? The main answer is given in a long paragraph of the BOAI: it is our actual starting point, from which the history of these journals shall be analyzed. To ease the reading, we have divided it into three parts.

Second, scholars need the means to launch a new generation of alternative journals committed to open access, and to help existing journals that elect to make the transition to open access.

The two ways for journals to commit were already being practised at the beginning of the century. On the one hand, very early electronic journals, often run without a publisher on what we would now call a platform, didn't go the subscription way and were established as free for readers. On the other, the 2001 PLOS letter/petition pushed publishers to change their ways and open their content, with lots of signatories but few positive answers apart from BMC. So the BOAI reminded them that they could still “flip” to open access. But what does that mean exactly?

Because journal articles should be disseminated as widely as possible, these new journals will no longer invoke copyright to restrict access to and use of the material they publish. Instead they will use copyright and other tools to ensure permanent open access to all the articles they publish. Because price is a barrier to access, these new journals will not charge subscription or access fees, and will turn to other methods for covering their expenses.”

The alternativeness doesn't come from the way journals should be run (editorial boards, scope, peer review) but from their economic model. Journals are qualified as “alternative” because they should no longer rely on the ownership of content and on subscriptions as the main route to pay for journal expenses (and profit). More than that, they would face extra costs, as they have to maintain open access through time. With the vanishing of current and future revenues, on what shall the new business model rest?

There are many alternative sources of funds for this purpose, including the foundations and governments that fund research, the universities and laboratories that employ researchers, endowments set up by discipline or institution, friends of the cause of open access, profits from the sale of add-ons to the basic texts, funds freed up by the demise or cancellation of journals charging traditional subscription or access fees, or even contributions from the researchers themselves. There is no need to favor one of these solutions over the others for all disciplines or nations, and no need to stop looking for other, creative alternatives.

This is probably the most important part of the whole BOAI declaration, besides the open access definition. Three main points should be retained: firstly, the idea of a diversity of sources of income, in an optimistic vision of the means of financing a journal undoubtedly fuelled by the success of the open source software movement. Secondly, this diversity is reinforced by the final sentence, which rejects any “one best way” even once the exploration of possibilities has fully taken place. Thirdly, the idea of payment by authors is conceived as a last resort (“or even”). In other words, not only did the alternative economic model remain unclear and uncertain, but the paying-author proposal was not considered a priority at all.

The stories of 11 pioneers

But with such vagueness, who then ventured down the alternative path, and how did they settle their journals once launched? On the site of George Soros's Open Society Institute, which funded the BOAI meeting, a very short list of 11 entities was then available, even if it was supposed to contain only examples1.

11 journals

So, a first way to fulfill the promise made by the title of this post is to investigate the trajectory of these 11 journals, or rather publication websites, so great is their diversity. We will treat them in groups, according to their destiny:

  1. Still free-of-charge mathematics & computer science journals: Algebraic & Geometric Topology and Geometry & Topology were respectively founded in 1997 and 2001, have always published open access articles and are still community-based journals, published by MSP, which puts a very strong anti-APC statement on its website. Documenta Mathematica is the first journal of the eLibM platform, founded in 1996, which acts as a repository for maths proceedings and journals, free of charge for readers and authors. JMLR was created in 2001 in an independence movement by 40 members of the editorial board of Machine Learning, then owned by Kluwer; it runs at a hard-to-believe-from-the-outside cost of about $10 per article, thanks to huge volunteer work, LaTeX, open source software, no fancy website and outsourced micro-publishing of paper versions with no financial exchange.
  2. Still owned by societies, but switched to APC: The New Journal of Physics, founded in 1998, is now published by IOP with an APC of €1,630. It was part of some “offset deals” (Austria, UK) and is still one of the journals of the SCOAP3 agreement. The Journal of Insect Science, launched in 2001 with the support of the University of Arizona, changed course with the death of its editor-in-chief in 2014; it is still owned by a society but is now published by Oxford with a €1,176 APC.
  3. Bought by Springer platforms: Living Reviews in Relativity was founded in 1998 by a Max Planck institute; it published only reviews, which were “living” as authors could update them to take new literature into account. It was sold to Springer in 2015, which kept the same formula with, remarkably, no APC. The trajectory of BioMedCentral is probably well known to readers; let us just recall that it was founded in 2000, cosigned the BOAI through Jan Velterop, its then director, was the first “big” publisher to bet on APCs, and was finally sold by its owner, Vitek Tracz, to Springer in 2008.
  4. Popularizers of APC and inventors of the megajournal: PLOS didn't really exist as a publishing place at the time of the BOAI. Its call/letter for Open Access the year before had seen almost only BMC respond positively. But they were already able to secure funds, cosigned the BOAI through Michael Eisen and soon launched PLOS Biology and then, in 2006, PLOS ONE, the first megajournal, which climbed to more than 30,000 articles a year, invented new forms of peer review and supported article-level metrics against journal-based metrics. It also established the APC as a standard way to provide Open Access for large communities.
  5. The platform that used to promote open access among publishers: Highwire has never been a journal nor a platform-journal, but rather a hosting service which develops tools and software for publishers. Founded in 1995 and based at Stanford University, it used to be the largest archive of free full-text science on Earth, with more than 2.4 million articles. Bought by an equity fund in 2014 (a minority share is still owned by Stanford), its “free texts” webpage stopped its counting on the 25th of March, 2015 and was not maintained after 2018.
  6. Terminated by its learned society: Psycoloquy had been launched and supported by the American Psychological Association, with Stevan Harnad at its helm, who translated into electronic form some of the features he had developed in his previous journal, BBS, notably open peer commentary. It stopped publishing new articles by 2002.

Other journals or platforms could have been given as examples in early 2002. One can notably think of SciELO, which was already working very well in South America, while Érudit was growing up in Quebec, as was Revues.org in France. But the BOAI was rather focused on STM and English-language journals, and its alternative journals were located within a world already dominated by an oligopoly of big publishers that was to be changed or at least challenged. Despite these limitations, the 11 stories nevertheless show the diversity of actual trajectories, the adoption of economic models that had yet to be defined and implemented, and the take-up of the alternative by some big publishers.

From Gold to Diamond:
when the alternative remains alternative

Above and beyond these examples, what trends can be drawn from these last 18 years? We have to consider a wide range of moves from public policies, learned societies, universities & libraries, research funders and, of course, publishers in order to give a second answer to the title of this post. The first evolution is the invention of a locution soon after the BOAI: “open access journal”, which replaced the “alternative” one.

Then, as with 4 of the 11 listed, we observed a massive rise of the APC model, starting from the BMC and PLOS pioneers. The idea that authors would accept to pay to publish was not to be taken for granted, whether in principle or in practice, with questions about the accounting circuit, the actual source of funding (authors, labs, departments, universities…), the level of prices, etc. And in some disciplines, being made to pay still puts a low-quality stamp on the output. The Wellcome Trust in the UK and the ERC programs in the European Union played a huge role in experimenting with the possibility of paying APCs through grants, which made them a “normal cost”, especially in well-funded disciplines (biomedicine, physics…). The UK official public policy, after the Finch Report in 2012, also injected money to pay for APCs.

It not only fueled the growth of relatively new publishers – BMC and PLOS, but also MDPI, Frontiers and Hindawi – but pushed “traditional” big publishers to adopt APCs and make their journals “hybrid”, with “basic funding” by subscription and “extra funding” through APCs. Springer began its “Open Choice Program” in 2007, whose very name reflects a liberal-market vision of open access. These two evolutions led to very harsh critiques of the whole Gold OA project: on the one hand, it raised the question of the birth of predatory publishing through APCs; on the other hand, hybrids meant double dipping and the deepening of the serials crisis.

Hybrid journals were conceived as transition tools to open access, as the then director of SPARC Europe theorized2. So these private and public policies of APC funding were conceived as a way to reach a tipping point after which open access, in the shape of the now renamed “full open access journals”, would prevail. How naive, wrote Richard Poynder in a recent essay3! Some powerful actors came to the same conclusion, so they recently tried to impose new radical changes in the funding of journals: most notably the Max Planck Gesellschaft in 2015, then the now famous Coalition S, whose aim is to accelerate the transition to open access by in fact killing the subscription model and leaving authors with a CC-BY license on their content. Does this sound familiar?

So it seems to have gone full circle: almost twenty years later, trying to get rid of the traditional economic model for journals and, to do that, talking with big publishers in order to sign “transformative agreements”. Open access has gone mainstream; even Elsevier now presents open access as a standard. If changes happened, they concerned more the way journals are run, most notably open peer review4. But wait a minute: if the alternative has gone mainstream, where is the new alternative? In fact, the support and success of the APC model gave many commentators and actors the impression that the Gold way was now equivalent to an author-pays model. That led some activists to coin new names for “no APC journals”. Whether Diamond or Platinum, the label means that the journal is free for authors, and not only for readers.

SciELO, Érudit and OpenEdition were already mentioned, as were 5 of our 11 pioneers. But we could add Open Library of Humanities or Redalyc as “big platforms” for journals5. They are the majority, as no-APC journals still represent more than 70% of the entries in the DOAJ, and their business models are diverse, from bricolage to strong institutional support, just as the BOAI predicted. So the alternative is still alternative, though it has vastly grown in the last 18 years. Getting into adulthood, we will see whether OA journals coexist as two genres, non-APC and APC, or whether one of them is not sustainable in the long run. Unless, of course, the other open access road gets us into a post-journal world through preprint servers and open archives. To be continued…

  1. This list didn't evolve much over the next two years: Highwire and PLOS were removed, while two MDPI journals were added []
  2. Prosser, David C. “From here to there: a proposed mechanism for transforming journals from closed to open access.” Learned publishing 16.3 (2003): 163-166. []
  3. Poynder, Richard. “Open access: Could defeat be snatched from the jaws of victory?.” (2019). []
  4. Which means lots of different things, see Ross-Hellauer, Tony. “What is open peer review? A systematic review.” F1000Research 6 (2017). []
  5. Not to mention the ones for books which are catalogued into the DOAB. []

The perfect hacking of journal peer review or… The fastest way to become a Highly Cited Researcher

Since the beginning of the 21st century, the names of great fraudsters have spread beyond academic arenas, each bringing their biography, their practices and the astonished tale of the discovery of their misdeeds. This starisation of fraudsters should neither hide the existence of famous cases in the past1, nor the multitude of ordinary misdemeanors and misconduct taking place daily in the academic world, which is hardly different from other professional circles in this respect. Nevertheless, they deserve a place in the Hall of Fame of academic fraudsters; so, before addressing the case of our new champion, Kuo-Chen Chou, let's review a few exemplary figures of this Hall of Fame, in alphabetical order.

Yoshitaka Fujii (2012): enduring Japanese anesthesiologist who holds the world record for the number of retracted articles (183). He spent his career inventing data and, despite a statistical analysis published in 2000 showing how “too nice” his numbers were, he was not seriously questioned until 10 years later.2

Woo-Suk Hwang (2006): amazing Korean veterinarian and biologist, specialized in stem cells and producer of the first human clone, announced in a publication in Science. After he was accused of forcing his technicians to donate their eggs for his research, investigations revealed the total absence of human cloning. A national glory in South Korea and an international star, his public downfall was so brutal that he made the cover of Time Magazine.

Jan-Hendrik Schön (2002): industrious German physicist, working at Bell Labs on the limits of matter and life, able to co-author in less than two years seven papers in Nature, eight in Science and six in Physical Review. All of them have of course since been retracted, and it seems that all his research, including his thesis, was based more on his desire to stick to the expectations of theory or those of his colleagues than on the empirical results he claimed to have achieved.3

Diederik Stapel (2011): extraordinary Dutch psychologist, whose social experiments always proved the hypotheses made… since they were never carried out, but fabricated on paper and computer. Denounced by a whistleblower from his team and 58 retracted articles later, he was the object of a sensitive New York Times portrait. His own production became an object of the psychology of deception, as his colleagues found small differences in style between his genuine articles and the fake ones.

From transparent peer review
to citation manipulation

These eminent members of the Hall of Fame, all men – women are an extremely small minority among the elected members – have produced “false science” but have neither massively plagiarized nor attacked the peer review system. Rather, like good forgers, they provided journals with the expected raw material. However, over the last 10 years, there has been concern about how reviewers or publishers can indirectly influence the science produced, in a more subtle way. “Coerced citations”, “fake peer review” and “citation cartels” are all designations of practices that do not directly fudge the content of articles, but act on the margins by hacking into the journal peer review process.

So, the old criticisms about the misdeeds of anonymity in journal peer review4 were reborn at great expense, and many debates about “transparent peer review” took place. Where in the past editors and authors were at the helm of these discussions, the publishers are now in charge, above all Elsevier. The company gave two in-house researchers access to its back office, so they were able to compare the bibliographies of manuscripts with those of the published articles and check whether added references were coauthored by reviewers of the manuscript. Unsurprisingly, the authors of When Peer Reviewers Go Rogue concluded that citation manipulation exists, even if its level is quite low (0.79%).
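As a rough illustration of that kind of back-office check, here is a minimal sketch under my own assumptions, not the study's actual code; all data structures and names are hypothetical, and real pipelines would also need author-name disambiguation:

```python
# Hypothetical sketch: flag references added during peer review that are
# coauthored by one of the reviewers.

def added_references(manuscript_refs, published_refs):
    """References present in the published version but absent from the manuscript."""
    return [ref for ref in published_refs if ref not in manuscript_refs]

def reviewer_coauthored(manuscript_refs, published_refs, reviewers):
    """Among added references, keep those with at least one reviewer as author."""
    reviewer_set = {name.lower() for name in reviewers}
    return [
        ref for ref in added_references(manuscript_refs, published_refs)
        if reviewer_set & {a.lower() for a in ref["authors"]}
    ]

# Toy example: one reference was added between submission and publication.
manuscript = [{"title": "Paper A", "authors": ["X. Yang"]}]
published = manuscript + [{"title": "Paper B", "authors": ["R. Reviewer", "C. Coauthor"]}]
print(reviewer_coauthored(manuscript, published, ["R. Reviewer"]))
# -> the added "Paper B", coauthored by the reviewer
```

Run over a whole back catalogue, a count of such flagged references is what yields a prevalence figure like the 0.79% reported above.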

At the same time, a research team led by the famous John Ioannidis sought to build a “clean” citation base for the most cited researchers. For them, this meant being able to separate self-citations from the rest, and it was on this occasion that they made a surprising discovery: the staggering level of self-citation of some colleagues. Indeed, as your citation count rises, you would expect an ever larger share of those citations to come from distant colleagues. Not so for everybody:

Vaidyanathan, a computer scientist at the Vel Tech R&D Institute of Technology, a privately run institute, is an extreme example: he has received 94% of his citations from himself or his co-authors up to 2017 (…) He is not alone. The data set, which lists around 100,000 researchers, shows that at least 250 scientists have amassed more than 50% of their citations from themselves or their co-authors, while the median self-citation rate is 12.7% ((Nature, Hundreds of extreme self-citing scientists revealed in new database, 19th August 2019)).”
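To make the metric concrete, here is a minimal sketch of how such a rate could be computed (my own illustration, not the Ioannidis team's actual method or data structures): a citation counts as “self or co-author” when at least one citing author belongs to the researcher's co-author network.

```python
# Hypothetical sketch: share of a researcher's citations coming from
# themselves or their co-authors, as described in the Nature piece.

def self_coauthor_citation_rate(researcher, own_papers, citing_papers):
    """own_papers and citing_papers are lists of author-name lists."""
    # Build the co-author network: the researcher plus everyone they signed with.
    network = {researcher}
    for authors in own_papers:
        network.update(authors)
    # Count citing papers with at least one author inside the network.
    inside = sum(1 for authors in citing_papers if network & set(authors))
    return inside / len(citing_papers) if citing_papers else 0.0

# Toy example: 2 of the 3 citing papers come from the researcher's network.
own = [["V. Author", "A. Friend"], ["V. Author", "B. Friend"]]
citing = [["V. Author"], ["B. Friend", "Z. Other"], ["D. Distant"]]
print(self_coauthor_citation_rate("V. Author", own, citing))  # ~0.67
```

On this definition, the 94% figure quoted above means that almost every paper citing Vaidyanathan was signed by himself or by someone in his co-author network.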

But then, what are the Highly Cited Researchers, whose numbers are one of the components of the Shanghai Ranking, actually doing? Are they renowned for their influential results, or are they more adept at manufacturing the citation chain? Bibliometricians would say that a “high” self-citation rate is not necessarily a sign of fraud, but that a detailed inquiry would be needed. This is where we return to our newest member of the Hall of Fame, whose work is worthy of close consideration.

A perfect hacker,
always greedy for citations

It all starts with a mundane story: a reviewer asks for additional references in a manuscript. But the request itself is not so trivial: it consists of 35 references, the vast majority of which are co-signed by him, and he indicates that his recommendation to the editors, on whether or not to accept the manuscript, will depend heavily on their inclusion. It should also be specified that this request was made not for a single review, but for each of the manuscripts that passed through his hands. In describing their decision to ban this unnamed reviewer, the editors did not indicate how long this practice had existed. It is, to say the least, unusual to request the addition of so many references, and one might question the editors' own responsibility in this matter if it lasted as long as their reference to the “most recent reviews” seems to imply. To which they reply:

One might ask how this reviewer got away with submitting multiple reviews containing coercive requests for citation before being banned. The shortest explanation is that excessive self-citation demands are generally not seen as an ethical problem until a pattern is established, and a decentralized peer-review system is not amenable to detecting patterns”((Wren, Jonathan D., Alfonso Valencia, and Janet Kelso. “Reviewer-coerced citation: case report, update on journal policy and suggestions for future prevention.” (2019): 3217-3218.)).

And in fact they inquired with other journals, which suggested the same pattern of behaviour for this reviewer. A year later, in early 2020, the investigation led to an editorial in another journal, the Journal of Theoretical Biology (JTB), which reveals perhaps the most complete case of citation manipulation to date. Indeed, the hacker is no longer a simple reviewer there, but a “handling editor” for JTB, which enabled him to act at several stages of the manuscript's life, with a single objective: to accumulate citations.

  1. He took charge of many manuscripts from his own research centre to ensure that they were well treated (conflict of interest).
  2. He chose reviewers requested by the authors, designated colleagues from his own centre (conflict of interest), or even reviewed the manuscripts himself under a false name (ghost peer review).
  3. In many cases, with the return of the reviews, he would ask for the title of the article to be changed so that it explicitly referred to his own algorithm, and for a discussion of his own work to be added in the introduction and conclusion (coerced citations).
  4. As a result, he requested the addition of a very large number of references (up to more than 50) to the bibliography of the manuscript (coerced citations).
  5. Just before the acceptance of the manuscript, he was added as co-author of the article (gift authorship).

We therefore observe two complementary types of behaviour. On the one hand, hidden from the outside, hacking the peer review flow of the journal, capturing the evaluation process to ensure that the articles most “favourable” to his citation count are actually published – sometimes with his coauthorship. On the other hand, visible to the authors and perhaps to the editor-in-chief and publisher, hacking the byline, content and references of the manuscript by making imperative requests for inclusion. Thus, ordinary manuscripts became articles loaded with citations of the hacker.

It can be noted that at this stage, the name of the reviewer was not given by JTB, which led some to make educated guesses on Twitter. News articles, in Nature among others, soon followed and revealed his identity: Kuo-Chen Chou, a retired Chinese-American biophysicist. We then learned that he had been a member of the Highly Cited Researcher “club” for years5. So, as usual, this extraordinary case will be treated as “rare”, and counter-measures have been taken, such as an algorithm written by one of the Bioinformatics editors. But the ordinary gaming will still go on, whether in so-called predatory journals or at “prestigious” publishers, with smarter colleagues less greedy for citations and not obsessed with the HCR club. Will you be one of them?6
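The Bioinformatics algorithm itself is not described here, but a screening heuristic of that family could look like the following toy sketch, under my own assumptions: scan each review report for requested citations that match the reviewer's own publication list, and flag reviews above a threshold for editor attention. Titles are matched naively; a real tool would use DOIs or fuzzy matching.

```python
# Hypothetical sketch of a screening heuristic: count how many references
# requested in a review report point to the reviewer's own papers.

def count_self_serving_requests(review_text, reviewer_paper_titles):
    """Number of the reviewer's own paper titles quoted in the review."""
    text = review_text.lower()
    return sum(1 for title in reviewer_paper_titles if title.lower() in text)

review = ("The authors should cite 'A great algorithm for X' and "
          "'Another paper by me' to strengthen the introduction.")
own_titles = ["A great algorithm for X", "Another paper by me", "Unrelated work"]

hits = count_self_serving_requests(review, own_titles)
if hits >= 2:  # arbitrary threshold for flagging a review for editor attention
    print(f"Flag review: {hits} requested citations match the reviewer's own papers")
```

As the Wren et al. quote above notes, the hard part is not the matching but the centralization: a pattern only becomes visible when reviews from many journals can be screened together.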

  1. For example John Darsee; see Broad, William; Wade, Nicholas (1983), Betrayers of the Truth: Fraud and Deceit in the Halls of Science, London: Century Publishing, ISBN 0-7126-0243-7 []
  2. For a quick view of this case, see Pontille, David, and Didier Torny. “Behind the scenes of scientific articles: defining categories of fraud and regulating cases.” (2012). []
  3. He was the subject of a wonderful book, Plastic Fantastic, ISBN 978-0-230-22467-4 []
  4. See David Pontille and Didier Torny, “The blind shall see! the question of anonymity in journal peer review.” Ada: A Journal of Gender, New Media, and Technology, No.4. doi:10.7264/N3542KVW (2014). []
  5. The Web of Science Group didn't list him in 2019 as he had, like others, a high rate of self-citations, but, as stated, “Although this list is updated and refreshed each year, a Highly Cited Researcher is always a Highly Cited Researcher—whether their name was included in 2013 or 2019.” []
  6. I am aware that this post contains two self-references but they won’t be counted in any database []