
Who wins after a divorce?… or how to interpret the DEAL-Elsevier new agreement

Imagine that you are a young researcher in Germany, having started your thesis in September 2018. For the last five years, you have had no legal access to articles published by the world’s largest publisher, Elsevier. Your institution has saved hundreds of thousands or even millions of euros, but you don’t really know where that money has gone. By contrast, on a day-to-day basis, first as a PhD student, now as a post-doc, you tinker with your access: writing to authors, asking colleagues abroad if they can send you this article, requesting your library to buy that crucial paper, scanning preprints, using the Unpaywall button or, late at night from home, typing the full combination of letters and signs to reach the platform whose name you must never utter or write.

To my knowledge, this divorce between a major publisher and a national consortium, DEAL, followed by a reconciliation, has been the longest for a very rich country. This post analyses how the separation happened, what is known of the long period of divorce during which no German institution had a subscription to ScienceDirect, before moving on to the reconciliation agreement published on September 6th, 2023 and validated in January 2024.

From harsh talks to full divorce (2016-2018)

Indeed, it was not for lack of money that DEAL did not sign with Elsevier, but because the conditions for signing were not met. By contrast, reading DEAL’s agreement with Springer Nature, analysed at length three years ago, shows what was expected: an agreement including subscription and open access publication, all at a cost deemed reasonable by the German consortium. So how did they get to a “no deal”? As is often the case when trying to rely on past information from institutional sites with changing policies, most documents cited below have disappeared from the DEAL website and are therefore cited from captures made by the Internet Archive.

“At Elsevier, serving research is our paramount goal. We have therefore chosen to continue providing access to Elsevier journals for dozens of German institutions that cancelled their individual subscriptions at the end of 2016. They did so anticipating that a new Germany-wide license agreement would be in place by January this year, which we regret so far has not been achievable. We strongly believe that access to high-quality research is important for German science. The continuing access for the affected institutions will be in place while good-faith discussions about a nationwide contract carry on. This reflects our support for German research and our expectation that an agreement can be reached.”1

I hope one day some colleagues will systematically study the rhetoric of big publishers’ PR. Anyway, the one above is typical of a service industry which would have us believe that its aims are totally aligned with those of its clients. Imagine the reverse situation, in which DEAL would state: “at DEAL, assuring service providers’ profit is our paramount goal…”. Back to our main topic: the unconditional reconnection decided by Elsevier is not unusual: at the same time, it happened for example for Taiwanese institutions in a similar situation2. But Elsevier’s hopes for a new German agreement would not be fulfilled. Indeed, after these back and forths, the negotiations stalled, leading to a full divorce by mid-2018, as stated by the German Rectors’ Conference, which had “no choice”:

“The excessive demands put forward by Elsevier have left us with no choice but to suspend negotiations between the publisher and the DEAL project set up by the Alliance of Science Organisations in Germany.” That was the verdict of the lead negotiator and spokesperson for the DEAL Project Steering Committee, Prof Dr Horst Hippler, the President of the German Rectors’ Conference, speaking in Bonn, where the last discussion took place this week.3

At this point, we should note that all cited documents are written in English, while negotiations surely happened in German. DEAL had the clear intention of making its moves very public and widely known beyond the federal German space and Mitteleuropa.

Learning to work without simple legal access (2019-2022)

So Elsevier pulled the plug in July 2018 and everything went quiet after almost two years of turmoil. That was not a given: one could have expected protest letters, petitions or lobbying from unsatisfied lay researchers to multiply as a whole nation of scientists was cut off from at least a fifth of the published literature. To lift the veil on the actual frustrations and losses resulting from the switch-off, it was… Elsevier that commissioned a survey in the summer of 2019, the summary results of which can still be seen on the pages of one news agency.

Most German researchers agree that losing access to ScienceDirect made their research activities less efficient (61%) and delayed the production of research output (54%). High-quality research requires access to current, international research results. Indeed, the survey shows that 49% of the scientists surveyed believed that the lack of access to new research findings leads researchers to miss current developments or to become aware of them only with a delay. 44% of respondents feared that this would have a negative impact on the quality of their research. All in all, 84% of researchers surveyed thought ScienceDirect was important or somewhat important, while 76% supported or strongly supported the restoration of full access to ScienceDirect in Germany.

Of course, no raw data has been published and the study itself has not been shared beyond this PR. Nevertheless, in the body of the text, Elsevier mentions another ‘independent’ study carried out by the University of Münster. Like the previous one, this is not an actual academic study, but a library survey, published only on their blog, in German. Despite its limitations (size, a single institution), it presents some interesting, and most probably unique, results on the representations of German researchers one year after the cut. In particular, the following graph should be highlighted:

Extract from the Universität Münster survey, whose results are presented here (in German).

The orange answers indicate respondents’ agreement, and the statements have been ranked in descending order of positive responses. They show a mixed picture in terms of opinions, both across the population as a whole and, from one question to the next, for many respondents themselves. A vast majority, namely two-thirds (66%), agreed with the statement “I need more time to get the literature”, and 58% thought that the right thing to do was to put pressure on Elsevier until it gave in – also the option with the fewest disagreeing votes (5%). That does not imply support for the shut-off: in fact, 55% agreed that “No deal is no option – negotiations should be resumed as soon as possible”, and 46% that the lack of access was “a serious competitive disadvantage”.

While 43% agreed that “Elsevier as a profit-orientated company would only harm science”, and only 11% disagreed, only 29% would “refrain from writing or reviewing articles for Elsevier journals”, against 40% who would keep doing so. After some questions on the importance of Elsevier journals and the use of the spared funds, the last question shows another divisive view on the resumption of negotiations, with only 16% in favour of it – which, of course, was not addressed in the Elsevier PR mentioned above.

These two surveys are the only public manifestations of a debate in Germany during this period. If opinions remained relatively private, what about practices? Did the impossibility of immediate legal reading actually have an impact on the way German academics write, on their choice to publish in Elsevier journals, or on their productivity? To my knowledge, and through extensive use of Matilda, only two academic articles have addressed these issues. The first is counterfactual, in that it looks at the behaviour of Germany-affiliated authors in chemistry publishing with Springer Nature and Wiley, with which DEAL had signed agreements. Published in 2021 in an economics journal, it only considers the first year of the agreements (2020), in comparison with the previous period and with a control group with no agreement of this type. Nevertheless, the authors already measure some effect:

“researchers’ submission behavior in the field of chemistry has changed to some degree, as eligible researchers have increased their publications in Wiley and Springer Nature journals at the cost of other journals. While the effect is not overly large yet, it is statistically significant, and it may increase over time, as the agreements become even more well-known among scientists. Hence, journals covered by the DEAL agreements appear to have a competitive advantage in attracting authors”.4

If signed agreements raise attractiveness, then the lack of an agreement should diminish it. The second article deals with the latter case by considering the evolution of publication and referencing activities of the whole population of German authors in Elsevier journals, with no control group. Published in 2023 in a scientometrics journal, it is based on more than 400,000 articles and more than 33M references:

“We also observe year-on-year decreases in the proportion of citations, although the decrease is smaller. We conclude that negotiations with Elsevier and access restrictions have led to some reduced willingness to publish in Elsevier journals, but that researchers are not strongly affected in their ability to cite Elsevier articles, implying that researchers use other methods to access scientific literature.”5

The two studies therefore show that the structure of publications is affected by the agreements, whether signed or not, but only marginally, at least over a short period. Furthermore, reading seems to be remarkably unaffected by the lack of legal and rapid access to the literature. It is likely that other internal work was produced by the consortium, or that self-support systems were put in place to enable simple and legal reading, similar to what the Swedish libraries deployed during their own breakup with Elsevier6. Beyond this study, there is anecdotal evidence given by colleagues, but also an interview with a member of the negotiation team, Dr. Bernhard Mittermaier, head of Forschungszentrum Jülich’s Central Library, which tends to show that they were following the rate of publications:

“The option to publish with Elsevier was not affected. Some scientists, however, asked me whether a publishing boycott would make sense in view of the fact that many editors from Germany – including Prof. Wolfgang Marquardt – had discontinued their work for the publisher with reference to the stalled DEAL negotiations. In fact, Elsevier’s share of all Jülich publications decreased from 26 % in 2018 to 18 % in 2022. Across Germany, there was a decline from 19 to 15 %. This may also be a reason why Elsevier returned to the negotiating table.”

In the end, it is reasonable to consider that German researchers adapted to a life without ScienceDirect over the long term, still reading articles published by Elsevier, but publishing less in journals disseminated by it. What the French and British did not dare to attempt after lengthy negotiations, the Germans did, with very substantial savings and a diminished dependence on the biggest commercial publisher. But what happens afterwards, when the time comes for one or other of them to consider recontracting?

Dealing again… on different terms (2023-2024)

2023 began, as in previous years, without ScienceDirect for German researchers. Im Westen nichts Neues (“all quiet on the Western front”), as a fellow economist lamented:

 

Bartosz Bartkowski tweet

In fact, Elsevier had returned to the negotiating table in autumn 2022 and, after a four-year drought, seemed ready to make concessions that would have been unthinkable four years earlier. The negotiations took place behind closed doors, until the sudden announcement of their success at the beginning of September 2023, followed by the publication of the contract itself. Let’s dive into it, as DEAL has always been transparent about its agreements (nice PDF, full text and monetary information,…), published under a CC-BY-ND license7.

We will not delve into the details of the usual characteristics of this type of agreement (definition of the parties, services expected, users authorised to read, corresponding author limitations, etc.), but will instead focus on the most central elements and on some unique features compared to the bodies of agreements analysed elsewhere8. This agreement is a “classic” Read & Publish, which includes in its core payment articles published in hybrid journals, but not articles in full open access journals, for which the fee is simply reduced by 15% or 20%. It also includes a back catalogue upgrade for all institutions, at a total cost of €10M. It is a “pay as you publish” agreement, with a PAR fee for each article, depending on whether it appears in a “regular journal” (€2,500) or a Cell Press/The Lancet journal (€6,450), with an annual increase of 3% and 4% respectively9.
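To make the fee mechanics concrete, here is a minimal sketch of how the per-article cost evolves, assuming the 3% and 4% increases compound annually from the 2024 base fees (the contract’s exact escalation and rounding rules may differ):

```python
# Minimal sketch of the DEAL-Elsevier PAR fee schedule, assuming the 3%
# (regular) and 4% (Cell Press/The Lancet) increases compound annually
# from the 2024 base fees. The contract's exact rounding rules may differ.

BASE_FEES = {"regular": 2500.0, "cell_lancet": 6450.0}  # euros, 2024
RATES = {"regular": 0.03, "cell_lancet": 0.04}          # annual increase

def par_fee(journal_type: str, year: int, start_year: int = 2024) -> float:
    """Projected PAR fee for one article of a given type in a given year."""
    return BASE_FEES[journal_type] * (1 + RATES[journal_type]) ** (year - start_year)

for year in range(2024, 2028):
    print(year,
          f"regular: {par_fee('regular', year):,.2f} EUR",
          f"Cell/Lancet: {par_fee('cell_lancet', year):,.2f} EUR")
```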

This payment model has two consequences that are quite specific to this agreement. Firstly, with the exception of the back catalogue, institutions have no front money to commit. Whereas in the past some agreements offered “tokens” or “waivers” for publication, the opposite is now true: you only start to pay after publication. Secondly, this provision could encourage free riding: as with almost all agreements of this type, the corresponding author is offered, as a priority, the option to publish in open access under the CC-BY licence, but he or she can refuse. There is, however, a provision in the contract that prevents such refusals from being organised to dodge fees, by counting all the publications:

“For the avoidance of doubt, the applicable PAR fee for Core Hybrid journals for the year of the acceptance date will be applied to both open access and subscription articles in these journals and to subscription articles published in Cell Press and The Lancet journals.”

So, despite the diminishing share of articles observed during the absence of an agreement and the lack of front money, Elsevier has a certain guarantee of revenue, as 18, 19 or 20% of German research production will end up in one of the journals it disseminates. In exchange, the company had to accept very harsh conditions on the data generated by German users. A full page of the agreement (section 7.6) is dedicated to data privacy, with reminders of legal provisions derived from the European GDPR regulation. DEAL and Elsevier will co-supervise the whole data processing, the latter refraining from using any personal data without the consent of users. On this point, a loophole was anticipated by forbidding any general opt-in device: German colleagues will be able to fully use ScienceDirect without signing any consent. Of course, all data will be stored in one of the Member States of the European Union. The matter is so sensitive that a workshop is planned during the first year of the contract, where part of the IP addresses would be automatically erased when IPs are not located in professional settings.
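The agreement does not spell out how this partial erasure would work. As an illustration only, a common anonymisation technique is to truncate the host part of the address; the sketch below zeroes the last octet of an IPv4 address and everything beyond a /48 prefix for IPv6 (both truncation lengths are hypothetical, not contract terms):

```python
# Illustrative only: the DEAL-Elsevier agreement does not specify a mechanism.
# A common anonymisation technique truncates the host part of an IP address;
# the /24 and /48 prefix lengths below are hypothetical parameters.
import ipaddress

def anonymise_ip(ip: str) -> str:
    addr = ipaddress.ip_address(ip)
    prefix = 24 if addr.version == 4 else 48
    net = ipaddress.ip_network(f"{ip}/{prefix}", strict=False)
    return str(net.network_address)

print(anonymise_ip("192.0.2.123"))  # -> 192.0.2.0
print(anonymise_ip("2001:db8::1"))  # -> 2001:db8::
```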

Without doubt, Elsevier’s transformation into a data company and the growing controversy surrounding its new business models of reselling user data10 have been closely observed in a country so keen on privacy. Still, despite these worries, DEAL signed the deal and did not include any fines in case these limits were breached11. But what about the signing by German HER institutions?

Conclusion: which savings, for which uses?

In fact, there was still a little uncertainty when the agreement was unveiled, as a four-month period was about to begin during which the institutions would each have to indicate whether they would sign the agreement. It could only be ratified if at least 70% of the institutions approved it, and fees were lower if 90% did. On 15 January 2024, DEAL announced that this second threshold had been exceeded, as “nearly all of Germany’s major universities and research institutions are now participating“. Elsevier has now joined Wiley and Springer Nature in the DEAL family, with very similar agreements focused on hybrid open access. But what does it mean from the point of view of German HER institutions? Let’s go back to Dr. Bernhard Mittermaier’s interview, in which he talks about his own institution’s costs and the national ones:

“Taken together, Jülich institutes will now save around € 100,000 per year on fees for hybrid open access that were previously paid to Elsevier. For Forschungszentrum Jülich as a company, the costs for Elsevier will even decrease by about 40 % compared to what was the case under the former agreement, assuming publication figures remain the same. This corresponds to about € 300,000 per year that can be saved compared to 2018, the last year of our previous agreement with Elsevier. Elsevier’s fees per article are now much lower than they were in 2018 and similar to those charged by Wiley and Springer Nature. Compared to 2023, however, when hybrid open access, document delivery, and pay-per-view each cost around € 100,000, additional expenditure of € 200,000 will now be incurred.”

Let’s try to do the math (which does not add up), based on that paragraph, in the following table, with three reference points: the last year of the former (local) agreement, the shut-off period, and the first year of the new agreement.

Expenditures/Year | 2018 | 2021 | 2024
Total | 500,000–600,000 € | 100,000 € | 300,000 €
Forschungszentrum Jülich Elsevier expenditures.

The previous total cost is 500K€ if you follow the 40% reduction, and 600K€ if you add up the total savings mentioned. Whatever the case, the new deal is far below the older ones, under which German institutions were known for paying “much more” than similar institutions in the Netherlands or France.
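As a quick illustration of why the math does not add up, here are the two readings of Mittermaier’s figures side by side (plain arithmetic on the numbers quoted above):

```python
# Two ways to back out Jülich's 2018 Elsevier spend from the interview figures.
cost_2024 = 300_000  # euros: hybrid OA + document delivery + pay-per-view

# Reading 1: 2024 costs are about 40% lower than under the former agreement.
estimate_from_reduction = cost_2024 / (1 - 0.40)   # 500,000 EUR

# Reading 2: about 300,000 EUR per year is saved compared to 2018.
estimate_from_savings = cost_2024 + 300_000        # 600,000 EUR

print(estimate_from_reduction, estimate_from_savings)  # the two readings disagree
```

Let’s now project the costs nationally: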

Year | pre-2018 | 2021 | 2024
Expenditures | 70M–100M€, mostly in reading agreements | 5–10M€ max, in hybrid OA publishing? | 30–40M€ in the P&R agreement
Estimated Elsevier revenue for Germany.

The first figure was never made public, but I have heard estimations between these two bounds. The second one is very much an upper bound, as OpenAPC counts between €1M and €1.3M for Elsevier in Germany for each of the years 2020 to 2022. The third one is based on the number of expected publications and the different fees defined in the agreement. So the savings were huge during the shutdown, and Elsevier probably lost at least 300M€ before resuming negotiations. And despite losing probably around 50% of its 2018 revenue, the company preferred to sign rather than leave almost all the money on the table.
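For what it is worth, the rough arithmetic behind the “at least 300M€” claim, using the post’s own estimates rather than any contract figure, looks like this:

```python
# Back-of-the-envelope estimate of Elsevier's lost German revenue, 2019-2023,
# using the low bound of pre-2018 spend and a generous bound on residual OA income.
years_without_deal = range(2019, 2024)  # five years without a national agreement
pre2018_revenue = 70e6                  # euros/year, low bound of the 70-100M range
residual_oa_revenue = 10e6              # euros/year, generous upper bound on APCs

lost = sum(pre2018_revenue - residual_oa_revenue for _ in years_without_deal)
print(f"{lost / 1e6:.0f} MEUR")         # -> 300 MEUR with conservative inputs
```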

While, for example, French institutions have made a major commitment to using some of the resources saved for OA initiatives, notably by replenishing the National Open Science Fund, this does not seem to be the case in Germany. The national research funder, the DFG, has recently announced the launch of a Diamond OA publishing platform… with a maximum budget of €1.5M per year. I let you figure out what it could have been with just 30% of the money spared. So German HER institutions won a lot, Elsevier lost ground, but the dependence on big commercial publishers has not been halted – it may even have been reinforced.

  1. Harald Boersma, Continued Elsevier access in support of German science, 13th February 2017
  2. Schiermeier, Q., Mega, E. Scientists in Germany, Peru and Taiwan to lose access to Elsevier journals. Nature 541, 13 (2017). https://doi.org/10.1038/nature.2016.21223
  3. “DEAL and Elsevier negotiations: Elsevier demands unacceptable for the academic community”, 5 July 2018, German Rectors’ Conference press release, https://web.archive.org/web/20181219162556/https://www.projekt-deal.de/elsevier-news/
  4. Haucap, J., Moshgbar, N., & Schmal, W. B. (2021). The impact of the German ‘DEAL’ on competition in the academic publishing market. Managerial and Decision Economics, 42(8), 2027–2049. https://doi.org/10.1002/mde.3493
  5. Fraser, N., Hobert, A., Jahn, N., Mayr, P., & Peters, I. (2023). No deal: German researchers’ publishing and citing behaviors after Big Deal negotiations with Elsevier. Quantitative Science Studies, 4(2), 325–352. https://doi.org/10.1162/qss_a_00255
  6. Olsson, Lisa, et al. “Cancelling with the world’s largest scholarly publisher: lessons from the Swedish experience of having no access to Elsevier.” Insights 33 (2020). https://doi.org/10.1629/uksg.507
  7. Elsevier B.V., & MPDL Services gGmbH, Max Planck Society (2023). Projekt DEAL – Elsevier Publish and Read Agreement. https://doi.org/10.17617/2.3523659
  8. Quentin Dufour, David Pontille, Didier Torny. Contracter à l’heure de la publication en accès ouvert. Une analyse systématique des accords transformants. [Research report] 206 150, CNRS; Comité pour la science ouverte. 2021, 81 pp. ⟨halshs-03203560⟩
  9. I won’t get into some society journals excluded from the agreement, either because they won’t go hybrid or because they thought they wouldn’t get paid enough by Elsevier. On the specific question of learned society journals in such deals, see The Brief, https://www.ce-strategy.com/the-brief/out-of-reach/
  10. Didier Torny. From paywall builders to data tracking moguls or… How the big publishers have put on a new super vilain costume. Politics of technoscientific futures, EASST, Jul 2022, Madrid, Spain. ⟨hal-03885480⟩
  11. Thanks to Björn Brembs for underlining this absence; see his plea for German institutions not to sign the new agreement: https://bjoern.brembs.net/2023/09/no-evilsevier-deal/

The sustainability argument or… How academic journals economic models never really last

The starting point for this post is an article from The Scholarly Kitchen in which, once again, the sustainability of Diamond journals – here, of the Subscribe to Open model – is questioned. This leads the author, Rick Anderson, to define sustainability:

“It’s a concept that gets invoked in many different contexts to mean a range of different things, but in this context its meaning is both basic and simple: a publisher’s business model is sustainable if it’s able to be sustained over time. […] What determines sustainability? For an ongoing and open-ended project like publishing, the baseline determinant of sustainability is simple: recurring, reliable revenue.”

This definition is interesting, though it stands on muddy ground: how do we define “recurring, reliable revenue”? What is the timeframe for judging reliability? My post will argue that there is no such thing as a stable business model, at least not over the long run. Moreover, if Anderson is right to question the future of S2O, the same questions should be asked of models too lightly considered “stable”, starting with subscriptions.

Our present is not the continuation of the past: the short history of subscriptions

Over the three and a half centuries of scientific publication in journals, the economic relations between publishers of scientific outputs and their readers were far from stable. It was probably not until after the Second World War that the main relationships became those between publishers and academic libraries, on a national or international scale, not as part of a gift or exchange economy, but rather as a commodity.

As the number and budget of libraries increased and the number of published journals grew fast, a short golden age of subscriptions began for journal producers, and notably commercial ones1. But by the 1970s, as budgets stagnated, harsh competition for libraries’ money was the first signal of what was later referred to as the serials crisis. This decades-long relationship, based on the sale of subscriptions and paper issues for each journal, was then profoundly transformed by the digitalization of journals.

In the late 1990s, three major events took place in the contractual relationship between libraries and scientific publishers. Firstly, in a relatively short period of time after the inception of the World Wide Web, the largest publishers put online not only their entire contemporary catalogue, but also part of their archival material. Secondly, publishers began offering access to packages or bundles – not on a title-by-title basis, but to a long list or even all of their journals. Thirdly, to make this offer attractive, they favoured the emergence of library consortia which, by aggregating their individual needs, could constitute clients interested in this new plethoric offer. The combination of these three events gave rise to a new form of standard economic agreement, the big deal2.

As a result, the subscription business model changed from an audience-centered model – libraries purchase what readers want, title by title – to a model centered on the size of the publisher – libraries buy the most extensive offerings. This led to a much stronger oligopolization through buyouts of publishers and changes of publisher by scholarly societies, very visible twenty years later.

Percentage of papers published by the five major publishers, by discipline in the Natural and Medical Sciences, 1973–2013.3

For most publishers – including self-publishing learned societies – subscription was only profitable for a short time and is not anymore. It is not sustainable, since it now implies the disappearance of their autonomy, or at least dependence on increasingly powerful players likely to act unilaterally on their revenues. And even for the largest publishers, the threat of non-renewal of Big Deals grew stronger from 2010 onwards, whether through a sudden drop in financial resources (Greece) or through the choice to no longer pay for a service that did not meet libraries’ needs (United States) or open access demands (Germany, Sweden). It is in this context that Elsevier started to brand itself as a data company, while new publishers were trying to make a new model last, based on Article Processing Charges.

The future will not be similar to the present: charging authors to the breaking point

Charging authors is not a recent business model: there have been many examples of vanity publishing, targeted at academia or beyond4. In the US, from the 1930s on, an alternative funding model had already thrived, as subscription revenues were considered too low. Targeting authors and their funders, it was based on per-page charges, first in physics, then in other STM disciplines5. But it was with electronification that the idea of a lump sum charged to authors – as opposed to a multitude of varying charges (colour charges, page charges, cover charges…) – emerged, soon to be known as Article Processing Charges (APC). Some new publishers entirely adopted this new model, sooner or later being bought by legacy Big Publishers, like BMC by Springer or Hindawi by Wiley. But others have quickly become global Big Publishers themselves.

Retrospective statistics of the leading academic publishers in 20216

On selected Clarivate sources7, MDPI and Frontiers are now in the top 6 in volume published, whereas, added together, they published less than a fifth of what ACS, Sage or OUP did a decade ago! From the point of view of these new big players, APCs are so sustainable that they create journals almost every week. For example, in 2021 MDPI launched 84 new journals and acquired only two existing titles. As Dan Brockington has shown in his comprehensive analysis of MDPI data, this growth also comes from the lowering of rejection rates:

“Now, some 45% of the MDPI journals I analysed, have rejection rates of below 40% (Table 2). Papers in these journals account for nearly 38% of revenues from publication fees (Table 3). Conversely, the journals with rejection rates of over 50% account for just over 25% of revenues. Measures of esteem, such as listing in the Web of Science, did not seem to make a difference to rejection rates. Average rejection rate for WoS listed journals was 42.7%, and for unlisted journals 41.6%.”8

The incentive for publishers to accept a manuscript under the APC model has been discussed for a decade, and its link to the growth of the vanity presses now dubbed “predatory publishers” is well established. Above and beyond what is often portrayed as a potential threat to the whole scholarly communication system, the APC business model is not sustainable from the point of view of authors and research organizations. A large literature has consistently shown the rise of APC prices through time, whether for open access journals or for those relying on the hybrid model. Whether they name it “prestige prices” or “market power”, researchers describe an ever-growing number of APC articles and a rise in individual prices.9

Proponents of market regulation will argue that each author will adjust his or her willingness-to-pay to the audience and supposed quality of the journal, but instead we see the exclusion of authors for lack of funds, or the sale of places in the byline to pay APCs. And, of course, the near-total absence of this business model in underfunded disciplines, like most of the humanities and social sciences.

Sustainable for whom? The durability of Diamond journals

The two most visible business models for disseminating journal content are therefore not only at the mercy of default by their funders, but also unsustainable for readers, authors and their respective institutions. The fact that they constitute today’s largest expenditure items in scholarly communication should not be taken as evidence of sustainability through the capture of recurring, reliable revenue. In research worlds subject to severe budgetary constraints and increasing visibility of expenditure lines, they are in fact the most threatened at their foundation.

That being said, what are the alternatives? They are well known, and have been running in some corners of the global journal market for decades without any structural sustainability problem, despite being underfunded. Their landscape has been described in a comprehensive study, showing small-scale, non-profit, community-owned archipelagoes10. Far from APC-based megajournals and publishers with huge portfolios, this ecosystem is sustained by learned societies, universities, research organizations, some research funders, but also large-scale technical infrastructures, the most obvious being PKP’s Open Journal Systems.

While it is certain that more funding and more support from different institutions are needed11, thousands of journals – and dozens of dissemination platforms – have shown their reliability as they passed the test of time.

Yet most don’t pass the “Anderson sustainability test”, as they don’t rely on “revenue” but rather on support and funding, since they have not been commodified. Moreover, that support comes from the exact same sources that pay, in one way or another, for the “unsustainable publishing models” described above. So they are obviously sustainable for authors and readers, but also for these supporting institutions. Though they don’t have a unified business model12 – Subscribe to Open being the latest, and suited to already commodified journals – they seem to thrive, each at its own small scale, but with an aggregated population still larger than that of APC journals. After almost three decades of existence, having resisted several “serials crises”, haven’t they earned the right not to be questioned on their sustainability, but rather to be considered one of the most secure ways to build a sustainable scholarly communication system, allied with institutional archives?

  1. Fyfe, Aileen. “From philanthropy to business: the economics of Royal Society journal publishing in the twentieth century.” Notes and Records (2022); Aileen Fyfe, Noah Moxham, Julie McDougall-Waters, and Camilla Mørk Røstvik, A History of Scientific Journals: Publishing at the Royal Society, 1665-2015, UCL Press, 2022, chapter 14
  2. Frazier, Kenneth. “What’s the big deal?” The Serials Librarian 48.1-2 (2005): 49-59.
  3. Larivière V, Haustein S, Mongeon P (2015) The Oligopoly of Academic Publishers in the Digital Era. PLoS ONE 10(6): e0127502. https://doi.org/10.1371/journal.pone.0127502
  4. see, for a detailed history on books, Timothy Laquintano, The Legacy of the Vanity Press and Digital Transitions, The Journal of Electronic Publishing, Volume 16, Issue 1, Summer 2013, https://doi.org/10.3998/3336451.0016.104
  5. On the American Chemical Society example, see Noel, M. (2020). Back to disciplines: exploring the stability of publication regimes in chemistry: the case of the Journal of the American Chemical Society (1879–2010). Humanities and Social Sciences Communications, 7(1), 1-13; on the APS/AIP example, see Scheiding, T. (2009). Paying for knowledge one page at a time: The author fee in physics in twentieth-century America. Historical Studies in the Natural Sciences, 39(2), 219-247.
  6. from “Understanding the increasing market share of the academic publisher ‘Multidisciplinary Digital Publishing Institute’ in the publication output of Central and Eastern European countries: a case study of Hungary”
  7. That is much more restricted than Crossref, and thus more favourable to legacy publishers
  8. Dan Brockington, MDPI Journals: 2015-2021, 10 November 2022, https://danbrockington.com/2022/11/10/mdpi-journals-2015-2021/
  9. see, for example, Budzinski, O., Grebel, T., Wolling, J. et al. Drivers of article processing charges in open access. Scientometrics 124, 2185–2206 (2020). https://doi.org/10.1007/s11192-020-03578-3
  10. Bosman, Jeroen, Jan Erik Frantsvåg, Bianca Kramer, Pierre-Carl Langlais, and Vanessa Proudman. “The OA Diamond Journals Study. Part 1: Findings.” (2021). https://doi.org/10.5281/zenodo.4558704
  11. see the recommendations from the mentioned study: Becerril, Arianna, Lars Bjørnshauge, Jeroen Bosman, Jan Erik Frantsvåg, Bianca Kramer, Pierre-Carl Langlais, Pierre Mounier, Vanessa Proudman, Claire Redhead, and Didier Torny. “The OA Diamond Journals Study. Part 2: Recommendations.” (2021). https://doi.org/10.5281/zenodo.4562790
  12. This diversity should be studied, as we will do in the ongoing European-funded Diamas project

From paywall builders to data tracking moguls or… How the big publishers have put on a new supervillain costume.

Elsevier is a felon, that is a given. This company epitomizes all the crimes, misdemeanors and petty thefts that can be accomplished by a publisher. Its Wikipedia page is so full of affairs, scandals and raunchy stories that reading it would be enough to give a talk at an academic congress. And yet Elsevier still finds new ways to extract value from academic communities, which produces both new profits and new critiques. This post is the story of the publisher becoming a data company.1

An endless list of academic misdemeanors

It all began in 1880 with a “borrowing”, as we say in academic life, which may also be called an homage, a plagiarism or a theft, depending on the point of view. When the company was founded, it took over the logo of an ancestral, famous Dutch printing family, whose name was Elzevier (yes, with a Z).

Elzevier/Elsevier logo

As a Dutch publisher, the company first put out journals in the language of the country, but the need to go into exile in England in 1940, and no doubt a rather specific vision of scholarly communication, led it to launch English-language journals in the post-war period. Along with Pergamon, Elsevier is certainly the inventor of the concept of the ‘international journal’ and created a global market for scientific writings, with customers all over the world and, what is more, a profitable market. This led to cycles of development and acquisitions, which continue to this day. But as we know in the world of superheroes, “with great power comes great responsibility”.

And indeed they are responsible. The list of “problems” attributed to Elsevier can be categorised into three different groups: firstly, a propensity to act in a “sloppy and dirty” manner, for example in copyediting failures, by selling closed articles for which authors had already paid an APC, or by not acting on legitimate requests to retract articles, as in the following very recent example.

Secondly, its constant pursuit of profit leads it to bend academic rules. Above and beyond offering researchers Amazon vouchers to write reviews on products, one of the most famous examples is the publication of journals in Australia that were de facto advocacy media for Merck pharmaceutical products, through a subsidiary that is cited by Sergio Sismondo as an example of ghost management2.

Third, its concern for protecting its intellectual property leads it to numerous actions opposing open access, unlimited text and data mining, or even metadata sharing. Elsevier thus funds numerous lobbying actions, and one aimed at the US Congress led to the “Cost of Knowledge” petition in 2012. This petition called for a boycott: refusing to write, review or do editorial work for the company. It was signed by tens of thousands of academics and led to some mocking of the Elsevier logo.

Michael Eisen, CC BY 3.0 https://creativecommons.org/licenses/by/3.0, via Wikimedia Commons

To sum it up, if the kids of Bruno Latour had been STS PhD students in 2010, they would probably have authored a paper entitled “Portrait of a publisher as a wild capitalist”. But that wouldn’t have predicted what happened next.

From academic publisher to data company: a very public transition

In fact, Elsevier continued to thrive as a publisher despite the tens of thousands of petitioners. But the company has changed its core business and has significantly expanded its range of services, to the point where it no longer appears as a publisher. Take two exemplary acquisitions: in 2013, Elsevier purchased Mendeley, a reference management service, and for some, it was as if the Empire had bought the rebels.

https://twitter.com/tpoi/status/1543940630837022720

Elsevier had two objectives: on the one hand, to extend its information retrieval ecosystem, and on the other, to collect data on Mendeley users, potentially authors and reviewers. These same objectives were reflected in the acquisition two years later of SSRN, a preprint platform then specialising in the social sciences.

“Elsevier is now getting closer and closer to researchers with business models that don’t involve libraries,” says Joe Esposito, a publishing consultant in New York City. “The positioning is well thought out: lock up revenues to the legacy publishing business, move into areas where piracy is not much of an issue, create deeper relationships with researchers and become more and more essential to researchers even as librarians become less so.”3.

This series of purchases aims to control the building blocks directly used by researchers, so that their research projects, results, research data, and the texts they read, cite, review or tweet are all linked, with Elsevier able to identify them. But as the comprehensive 2019 diagram below shows, researchers are not the only target of the “new Elsevier”.

Chen, G., Posada, A., & Chan, L. 2019. Vertical Integration in Academic Publishing: Implications for Knowledge Inequality. In Chan, L., & Mounier, P. (Eds.), Connecting the Knowledge Commons — From Projects to Sustainable Infrastructure: The 22nd International Conference on Electronic Publishing – Revised Selected Papers. Marseille: OpenEdition Press.

In fact, Elsevier’s other target market is higher education and research institutions, and even governmental institutions. The enclosure of the Elsevier ecosystem has, for example, guaranteed the company a position as a subcontractor in the construction of the first European open access monitor, which open access activists deemed scandalous4. When a consortium of Dutch universities signed a transformative agreement with the publisher in 2019, it included the joint development of projects involving all kinds of data – a Faustian pact in which open science amounts to sustaining Elsevier’s data infrastructure in exchange for open access papers5.

In a decade, Elsevier has become a data company, selling data to numerous clients, both academic and non-academic, and defining itself in corporate documents as “a global leader in information and analytics, (which) helps researchers and healthcare professionals advance science and improve health outcomes for the benefit of society”, while making videos on the perfect world of information it designs with a product called PURE.

Naming a new supervillain: surveillance publishing

From 2019 onwards, this transformation of Elsevier and, to a lesser degree, of the other big publishers was a wake-up call for various institutions and authors. They started to formalise the list of new dangers posed by the construction of data tracking and information aggregation systems, some specific to the academic world and others similar to those created by GAFAM-like companies. For example, a committee of the DFG published a briefing paper in which it raised the alarm that such data tracking could6:

  1. entail a violation of academic freedom and the freedom of research and teaching;
  2. constitute a violation of the right to the protection of personal data;
  3. pose a potential threat to scientists, as the data could also become accessible to foreign governments and authoritarian regimes;
  4. constitute an encroachment of competition law, as new participants barely have a chance to enter the market;
  5. favour a reduction in the value of public research investment, since data on research activity can be collected by commercial research competitors or made available to them in return for payment in connection with industrial espionage.

These fears may seem hypothetical, but the fact, for example, that Elsevier’s parent company, RELX, has signed a huge contract to supply personal data to the US Immigration and Customs Enforcement agency has given some weight to their warnings. But what data are we talking about? Two facetious colleagues used the provisions of the GDPR to ask Elsevier for their data and documented their findings on the traces of their stay in the “Elsevier Hotel”. These include directly personal data (phone numbers, bank details, addresses), but above all a great deal of usage data: the opening of e-mails sent by the company, the most basic operations on Mendeley and ScienceDirect or, more amusingly or worryingly, the trace of their consents and non-consents:

Eiko I. Fried, Robin Niels Kok, Welcome to Hotel Elsevier: you can check-out any time you like … not, 2022

A new petition, a decade after The Cost of Knowledge, calls to “Stop Tracking Science“, which actually means: stop tracking academics. In the new configuration, libraries are still a passage point between the big publishers and researchers, though no longer an exclusive one. But the data generated by these exchanges is now considered in a different manner: “they are even attempting to persuade libraries to install trackers inside university networks: the research behavior of all of us is being recorded in real time.” While identification has long been presented as a security measure necessary to provide access to closed texts, it is now a source of concern, in a manner very similar to cell phone or internet tracking. To describe this phenomenon, several labels have been proposed, such as “platformization of science”7 or “surveillance publishing”8. The big publishers are trying, through various legal actions, to present Sci-Hub and LibGen not only as intellectual property offenders, but also as dangerous hackers threatening the security of research institutions – while the same accusation is now directed at them. So, in the end, for you, which ones are the supervillains threatening academic communities?

  1. this post is based on a communication given at the 2022 EASST Conference in Madrid on the same date
  2. Sismondo S (2007) Ghost Management: How Much of the Medical Literature Is Shaped Behind the Scenes by the Pharmaceutical Industry? PLoS Med 4(9): e286
  3. Van Noorden, R. Social-sciences preprint server snapped up by publishing giant Elsevier. Nature (2016).
  4. see for example Jonathan Tennant & Björn Brembs (2018, October 26). RELX referral to EU competition authority. Zenodo
  5. This deal has been the object of a very long post. See also the SPARC analysis
  6. Data tracking in research: aggregation and use or sale of usage data by academic publishers. A briefing paper of the Committee on Scientific Library Services and Information Systems of the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation), 28 October 2021
  7. Kunz, Raffaela: Threats to Academic Freedom under the Guise of Open Access: The Power of Publishers, Data Tracking in Science, and the Responsibilities of Public Actors, VerfBlog, 2022/3/18
  8. Pooley, J. (2022) Surveillance Publishing, The Journal of Electronic Publishing 25(1)

Living in a post-Ingelfinger world or… The HCQ-COVID-19 publication show

Disclaimer: this post does not address the merits of the treatments proposed by the IHU team, nor their risks, and even less whether Prof. Raoult is a genius, a madman or a top scientist who got lost along the way.

It all started with a video, posted on February 25th, 2020, then entitled “Covid-19: endgame”, and put online by IHU Méditerranée-Infection on YouTube. In that video clip of less than two minutes, extracted from the end of a seminar, Didier Raoult states that COVID-19 is “probably the easiest respiratory infection to treat” and that chloroquine (CQ) is effective and already “recommended for all clinically positive cases” in China. It wasn’t the first time this infectious disease star recommended CQ and its cousin molecule, hydroxychloroquine (HCQ), to fight viral infections. Indeed, as early as 2007, he presented these drugs as “an interesting weapon to face present and future infectious diseases worldwide” in the International Journal of Antimicrobial Agents (IJAA). Framed as a recycling of these antimalarial drugs, the article constituted a literature review, mainly of in vitro studies, and was part of the scientific and medical strategy of the IHU: the repositioning of old molecules, free of rights, towards new uses. And this possibility of reuse was taken up in a letter sent on February 11th, 2020 to the same journal (IJAA), accepted the same day and published on February 15th.

The series of IJAA publications continued. The day after the YouTube video, a new article was submitted, specifically dedicated to the use of CQ as a treatment for the COVID-19 epidemic. Accepted the next day, February 27th, and published a week later, it repeated the efficacy claims reported by the Chinese and the ensuing clinical recommendation. This assertion rests in particular on one of the strangest references I have ever encountered: a letter of exactly ten lines published in BioScience Trends, whose body is copied below:

The coronavirus disease 2019 (COVID-19) virus is spreading rapidly, and scientists are endeavoring to discover drugs for its efficacious treatment in China. Chloroquine phosphate, an old drug for treatment of malaria, is shown to have apparent efficacy and acceptable safety against COVID-19 associated pneumonia in multicenter clinical trials conducted in China. The drug is recommended to be included in the next version of the Guidelines for the Prevention, Diagnosis, and Treatment of Pneumonia Caused by COVID-19 issued by the National Health Commission of the People’s Republic of China for treatment of COVID-19 infection in larger populations in the future.

Defined as an “abstract” on the journal site, but without any other body of text, this “article” does not seem to be fully supported by the 7 references listed. It relies mainly on an in vitro study from early February, already widely cited, which indicates that CQ could be effective. In fact, it wasn’t until February 29th that the results of a CQ clinical study were submitted to a Chinese journal, before being published on March 6th. But let’s go back to the IHU timeline.

Ten days later, a second video was put on YouTube, presenting the results of an observational study made in Marseille and showing the effects of HCQ alone and in combination with an antibiotic, azithromycin (AZ). So there was a slight shift: going from CQ to HCQ and adding an antibiotic. The main result was only the absence of virus in the nose and throat – not a clinical result – but Didier Raoult drew on it to tell his audience its consequences for the clinical institution he manages:

The fact that you no longer have the virus changes the prognosis. Actually, that’s what infectious diseases are all about. If you don’t have the germ anymore, you’re saved… You have a right to be tested here, and if you’re tested, you have a right to be treated here. That is what we will do.

So basically, for him, the results were so good that you HAD to treat people once they tested positive. No more trials or research needed; the time for clinical medicine had come, in the hope that other places would follow his lead. Slides were available on the same webpage, but no link to an existing paper; though, the same day, and unmentioned in the video, a preprint was submitted to medRxiv. Simultaneously, as is often the case with biomedical preprints, it was submitted to a journal… the ever-welcoming IJAA, which accepted it, as usual, one day later and published it on March 20th. Before we come to the extraordinary fate of this paper, let us go back to the title of this post and its relevance at this point.

From preprints to preprints:
the life and death of the Ingelfinger rule

We can observe from the two examples above a pattern of scientific communication: the IHU first posts videos, then produces preprints and finally publishes articles in academic journals – here the IJAA. This is very unusual, at least in contemporary times, but happened in various ways during centuries of scholarly communication. The idea that you first had to communicate with your peers through a journal before reaching “the public” is neither constant nor dominant across all disciplines. In our era, it was pushed at a key moment in the mid-1960s. Back then, a first wave of preprints was being supported by the NIH and was gaining momentum in some biomedical communities through Information Exchange Groups (IEG) that would circulate printed copies of unpublished manuscripts by air mail1. Nature started a campaign against the “preprint galore” and a few European and US biology and biochemistry journal editors-in-chief met in Vienna in 1966 to get rid of them by stating that: “The journals listed below will not consider manuscripts for publication if preprints, of essentially identical content, are to be distributed, in substantial numbers, by an agency independent of the author or of the publisher of the journal.”2

That led to the termination of the IEG experiment by the NIH in 1967. Two years later, the New England Journal of Medicine (NEJM) editor-in-chief, Franz J. Ingelfinger, coined the rule for acceptance of a paper, based on his interpretation of “sole contribution”, de facto forbidding even “circulation-controlled journals” from printing something ahead of the NEJM3. In the same sentence, he remarkably included “news media”: he therefore aimed not only at the exclusive circulation of the article within scientific communities, but also at prohibiting the dissemination of its content to journalists and other medical news enthusiasts. In the early 1970s, his work to promote this exclusivity had a double effect: the practice was given the name Ingelfinger Rule, and many high-profile journals adopted it explicitly. While at the beginning of the 21st century the Ingelfinger Rule was often interpreted as a means to fight the duplication of papers, its aims were more about controlling the circulation of knowledge, in order to protect the newsworthiness of “general medical journals”4 and to organize communication about medical academic papers in a specific way, favorable to a limited number of journals.

Indeed, as Vincent Kiernan beautifully described in his 1997 article5, the Ingelfinger Rule had become prevalent in Anglo-American journals. It was in particular the efforts of the International Committee of Medical Journal Editors (ICMJE) that built it into a “publishing standard”, the effect of which was that these journals and their editors-in-chief simultaneously operated a double control:

  1. control on the authors by requiring them not to reveal the content of their articles, and even less so share the figures and other synthetic representations of results.
  2. control on journalists by providing them with preprint copies of articles in advance, while imposing an embargo on them until actual publication by the journal.

As a result, the general press advertises (free of charge) the content of the journals – it is not an article by Dr. X & Y, but an article from the NEJM or The Lancet – and organizes the dissemination of “medical discoveries”, strengthening the influence of these journals both within academic communities and among press professionals and the general public. To conclude his paper, Kiernan questions the durability of such practices in the Internet era and points out the effect of arXiv preprints, citing the efforts of the ICMJE to extend the Ingelfinger Rule to e-prints, with the argument of the direct consequences of biased or false medical knowledge for the public.

The biomedical field resisted preprints for 15 more years and the Ingelfinger Rule largely stood6, even if it was adapted to emergency contexts, such as the AIDS epidemic. But Kiernan’s forecast came true, notably with the creation of bioRxiv in 2013 and the subsequent success of preprints in biology and biomedicine, until preprints became quasi-articles. Consequently, the Ingelfinger Rule was dropped by numerous journals and publishers, even if the NEJM itself keeps a case-by-case policy.

Prof. Raoult and his videos, possibly including slides with the figures so dear to the NEJM, thus live in a post-Ingelfinger world, in which academics can directly manage their own communication, not only its content, but also comments, criticism, reporting and responses. Indeed, we will see that it is not only primary communication that is modified by the abandonment of this rule, but the whole organization of scholarly communication around the journal’s centrality.

Chaos and creation around one paper

Let us go back to this first publication by Raoult’s team on the effects of HCQ on viral carriage, published in the IJAA on March 20th, 2020. At the time of writing this post, the article has received 1,124 citations according to Google Scholar, but also thousands of tweets, blog posts and other references in press articles according to PlumX, a company owned by Elsevier, itself the IJAA publisher. The early circulation of the article was not based on a press release by the IJAA, but on Raoult’s own video and those of his various networks. As Wired recounts, with the help of a lawyer, a retired doctor, a shared Google Doc and an interview on Fox News – a heterogeneous assemblage à la Bruno Latour – the study published in the IJAA won a quote in a tweet from the President of the United States the day after its publication:

That Trump endorsement of course had enormous consequences for the HCQ market, the launching of clinical trials, self-medication practices and the scope of public discussion on the efficacy and dangers of such a treatment. We won’t directly treat these important questions here, but keep on following the exotic trajectory of the publication itself. Simultaneously with the Trump tweet, a PubPeer thread was launched on the famous post-publication comment platform, but contrary to the Voinnet affair7, most of the first commentators signed their critiques. Among other topics, the communication trajectory of the paper helped the critique: for example, Leonid Schneider noticed the discrepancies between the figures attached to the video and the ones drawn in the published paper.

Above and beyond PubPeer, three reviews were quickly published, questioning many aspects of the IJAA paper. The first was a Twitter thread by a master’s student on March 22nd; the second, an 18-page Zenodo paper by three British/Irish statisticians on March 23rd; the third, a blog post on March 24th by Elisabeth Bik, a very famous Dutch microbiologist and scientific misconduct specialist. So only four days after publication – still four times the IJAA’s actual reviewing delay – the paper was being trounced online. Among the many points raised, the publishing history was questioned: some noticed the differences between the first “preprint” on the IHU website and the final paper, others underlined the lack of changes, a hint for them of how tenuous the peer review process had been, the 24-hour delay being surprising to every commentator. The fact that one of the authors was also the editor-in-chief of the IJAA was underlined, as well as the “vanishing” of 6 patients (among 26 treated with the combined drugs), which could completely change the statistical value of the results.

While Prof. Raoult was fighting for HCQ to be authorized for general physicians in France, the online discussion kept going until the learned society behind the journal, the International Society of Antimicrobial Chemotherapy (ISAC), made a troubling press release on April 3rd:

“ISAC shares the concerns regarding the above article published recently in the International Journal of Antimicrobial Agents (IJAA). The ISAC Board believes the article does not meet the Society’s expected standard, especially relating to the lack of better explanations of the inclusion criteria and the triage of patients to ensure patient safety. Despite some suggestions online as to the reliability of the article’s peer review process, the process did adhere to the industry’s peer review rules. Given his role as Editor in Chief of this journal, Jean-Marc Rolain had no involvement in the peer review of the manuscript and has no access to information regarding its peer review. Full responsibility for the manuscript’s peer review process was delegated to an Associate Editor. Although ISAC recognises it is important to help the scientific community by publishing new data fast, this cannot be at the cost of reducing scientific scrutiny and best practices. Both Editors in Chief of our journals (IJAA and Journal of Global Antimicrobial Resistance) are in full agreement.”

So the paper had a lot of problems, but stuck to the peer review rules. This cryptic PR became even more troubling a week later, as it was “replaced” by a joint ISAC and Elsevier press release. In fact, the journal is not owned by the learned society but by the publisher, and is only an “official society journal”. This second PR is streamlined compared to the first: the “not meeting standard” sentence has disappeared, and an announcement of a post-publication peer review audit has been added. Through this example, we can measure how different the situation is from what prevailed under the Ingelfinger Rule. But it is with another paper from Raoult's team that science communication came back to its 17th-century roots.

From presidential visit to media frenzy:
the marginalization of journals in scholarly communication

After a follow-up study published at the end of March, which made fewer headlines, and as some HCQ trials on diverse patient groups were starting to be published, it is with another observational study that Prof. Raoult showed the world how he was really managing scholarly communication. On April 9th, the French president, Emmanuel Macron, unexpectedly visited the IHU Méditerranée and met with Prof. Raoult, who presented him with the results of his ongoing study. There was no press, but members of the IHU had recorded Macron's arrival and posted it, making it available to all the French media.

Here we need to go back to the origins of scientific communication, even before journals were born, when the quality of witnesses – meaning mostly royal kinship – was an important element of the credit given to the narrative of an experiment or an observation8. In our times, it became a two-way flow of credit: Macron was showing his will to base public health on evidence, all the more so when provided by a star scientist, while Raoult was legitimizing his position in the French public health landscape, where critics of his methods and results were numerous.

The next day, Raoult made public his first results, not in the form of a preprint or slides with an associated video, but as a simple tweet with the abstract and a summary table.

This tweet was of course massively picked up and commented on, and aroused strong media interest, all the more so as the results reinforced those of the previous study by moving from a purely biological effect to a clinical one: “The HCQ-AZ combination, when started immediately after diagnosis, is a safe and efficient treatment for COVID-19, with a mortality rate of 0.5%, in elderly patients. It avoids worsening and clears virus persistence and contagiosity in most cases.” Four days later, Prof. Raoult was invited on the show of Dr Oz, a famous TV host in the US harshly criticized for his often unproven medical advice.

https://www.youtube.com/watch?v=uy1cPT1ztko

On the day of the interview, there was no preprint and the paper had not even been submitted to a journal. Yet Prof. Raoult presented his results as facts. It was only on the 20th that the manuscript was sent to Travel Medicine and Infectious Disease9, with 10 days for peer review and publication on May 5th. Tens of thousands of comments on Facebook and tweets have followed according to PlumX10, though the media endorsed the results as much as they reported the methodological limits of the study – mostly the absence of a control group.

This study is undoubtedly a borderline case in the marginalization of journals, with communication aimed primarily at peers being out of step with announcements to political leaders and media outlets. Nevertheless, the massive availability of preprints, abstracts and other materials on topics such as the effectiveness of masks or tests, the persistence of the coronavirus on this or that surface, or cases of cure, has led to significant media coverage. From the point of view of the public authorities and the general public, it could have strengthened the authority of academic journals, again in a position to assert their necessity as an obligatory passage point for public dissemination. But this return to grace assumed that journal peer review is an effective barrier against “bad science”, a hypothesis dismissed by thirty years of studies and literature.

Prestige journals in epidemic times:
an economy of reputation crumbling down?

Indeed, prestige journals are bad for methodology: they follow neither their own standards on reporting clinical trials nor, more generally, disciplinary standards. Yet they remain prized places to publish, even during the pandemic, when preprints are so trendy because of the urgency to share results and knowledge. And some HCQ papers were quietly published in such journals, until one observational study seemed to close the debate on this treatment's efficacy and risks.

For this study, there was no advance communication and no preprint, but a straight article published in The Lancet by 4 authors. Oh, yes, there is a little gem still there on Twitter: two days before online publication, the “first author” answered a tweet by Richard Horton, editor-in-chief of The Lancet:

https://twitter.com/MRMehraMD/status/1263034198870429696

The reaffirmation of their confidence in the journal peer review system, even in times of health emergency, is comforting. And their trust is shared by the highest health authorities. On May 22nd, the study was published; it asserted, on the basis of a gigantic aggregation of patient databases from nearly the whole world, that HCQ is not only inefficient but also very dangerous for COVID-19 patients. This announcement came at a time when many ongoing trials included HCQ treatment arms. As a result, the WHO decided the next day to evaluate the continuation of its Solidarity trial and announced its position on May 25th:

“Having met on 23 May 2020, the Executive Group of the Solidarity Trial decided to implement a temporary pause of the hydroxychloroquine arm of the trial, because of concerns raised about the safety of the drug. This decision was taken as a precaution while the safety data were reviewed by the Data Safety and Monitoring Committee of the Solidarity Trial. “

Nevertheless, in a manner similar to Prof. Raoult's article, statisticians then looked at the content of the article and the data it provides, and began to point out obvious errors. But for some it was more a police investigation than a data re-analysis: how can there be only 4 authors (and no acknowledgements) for such a study? Why are the hospitals involved not mentioned? What is this mysterious enterprise – Surgisphere – unknown until recently, which provided the data? What is the career of its manager, a co-author of the paper? Setting aside the questions about the company, 6 days after publication they ended up writing an open letter to the authors and the journal, signed by 201 colleagues and endorsed by James Watson11. They mainly pointed out the necessity of opening the data, all the more so given the extraordinary results, and described obvious errors, questioning the quality of the database and the way (including the ethics) the data was gathered.

The Lancet and the authors were very prompt in responding to these criticisms: on May 30 a correction was published, covering very minor aspects: “the numbers of participants from Asia and Australia should have been 8101 (8·4%) and 63 (0·1%), respectively. One hospital self-designated as belonging to the Australasia continental designation should have been assigned to the Asian continental designation.” Of course, the conclusion was a classic of such corrections: “There have been no changes to the findings of the paper.” But critics kept pushing on the problems, whether HCQ supporters – Prof. Raoult himself spoke of “fake data” and “manipulated data” on Twitter – or clinicians trying to reconcile the paper's data with their own. So, only 3 days after the correction, The Lancet put an expression of concern on the paper:

“Although an independent audit of the provenance and validity of the data has been commissioned by the authors not affiliated with Surgisphere and is ongoing, with results expected very shortly, we are issuing an Expression of Concern to alert readers to the fact that serious scientific questions have been brought to our attention”.

The paper was still salvageable, thanks to the impending independent audit. Alas, two more days and the 3 authors not affiliated with Surgisphere threw in the towel, stating they had never seen the data, and demanded the retraction of the article. The Lancet made it official, provoking expressions of outrage, the questioning of the seriousness of the journal and… the reactivation of the suspended trials. Thus, in less than a week, the worldwide study published in what many consider to be “one of the best medical journals in the world” was awarded the 3 labels commonly used in post-publication peer review – Correction, Expression of Concern, Retraction12 – nullifying the evidence claimed on May 22nd. But the Surgisphere story goes beyond that article: another paper, published by the NEJM on the “same kind of data”, was retracted on the same day. Moreover, there are at least two regions – South America and Africa – which have suffered and will suffer from public health policies developed on preprints and data published by Surgisphere. While #LancetGate was trending on Twitter, in-depth inquiries were being made into Surgisphere and the 4th author of the study who, ironically, had coauthored a paper entitled “Combating Fraud in Medical Research” in 2013!

Science at its best:
boring, negative results

To conclude this story on scholarly communication, we have to add that most HCQ articles have not been given the same media treatment and have not been communicated in fancy ways by their authors: a preprint on bioRxiv or medRxiv, then an article with often unspectacular results and limitations due to the number of patients, their previous health conditions, incomparability between groups, etc. One day before the retractions, the same NEJM published the first randomized controlled trial on post-exposure use of HCQ, quite close to the “Raoult treatment” – AZ not being included. Here is part of the published abstract:
“Side effects were more common with hydroxychloroquine than with placebo (40.1% vs. 16.8%), but no serious adverse reactions were reported. After high-risk or moderate-risk exposure to Covid-19, hydroxychloroquine did not prevent illness compatible with Covid-19 or confirmed infection when used as postexposure prophylaxis within 4 days after exposure.”

What do we get from this abstract? That the article is a typical example of those “negative results” that usually fail to be published, leading to significant biases in the evaluation of treatments in clinical trials through a “publication bias”13. And yet, not because of its own interest, originality or breakthrough knowledge, but because of its relevance to public health in an epidemic situation, this trial was published by the other “world's best medical journal”.
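Since the mechanism of publication bias is easy to misread, here is a toy simulation of it (an illustrative sketch, not drawn from the Easterbrook et al. study cited below: the numbers and the selection rule are invented for the example):

```python
# Toy illustration of publication bias: when only striking positive
# results get published, a literature can suggest a strong effect
# even though the true effect is zero. All numbers are invented.
import random

random.seed(42)

TRUE_EFFECT = 0.0          # the treatment actually does nothing
N_TRIALS = 10_000

published = []
for _ in range(N_TRIALS):
    # each trial estimates the effect with unit standard error
    estimate = random.gauss(TRUE_EFFECT, 1.0)
    if estimate > 1.96:    # only "significant" positive results appear
        published.append(estimate)

print(f"published trials: {len(published)} / {N_TRIALS}")
print(f"mean published effect: {sum(published) / len(published):.2f}")
# -> roughly 2.3, far from the true effect of 0.0
```

Under this caricatural selection rule, the published literature reports a sizeable average effect for a treatment that does nothing, which is exactly why the NEJM publishing a null result matters.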

While predictions of “really bad science to come” have rung true for most commenters and been supported by a high number of retractions, the COVID-19 academic publication landscape has also shown a massive uptake of preprints, public education on scientific controversies, conflicts of interest and statistical analysis and, furthermore… yes, the publication of null results in prestige journals. Whether you think this is a total mess and you preferred the Ingelfinger rule depends on how you conceive of academic research and scholarly communication. Back then, preprints were non-existent in biology and social networks had yet to be invented, but The Lancet published the Wakefield paper on the link between the MMR vaccine and autism. Was it a better time?

  1. See Cobb, Matthew (2017). “The prehistory of biology preprints: A forgotten experiment from the 1960s.” PLoS Biology 15.11 []
  2. Thorpe, W. V. (1967). International Statement on Information Exchange Groups. Science, 155(3767), 1195-1196. []
  3. Ingelfinger, Franz (1969). “Definition of ‘sole contribution’.” N Engl J Med 281: 676-677. []
  4. Ingelfinger, F. J. (1977). The general medical journal: for readers or repositories? New England Journal of Medicine, 296(22), 1258-1264. []
  5. Kiernan, V. (1997). Ingelfinger, embargoes, and other controls on the dissemination of science news. Science Communication, 18(4), 297-319. []
  6. See as an example this defense of the rule by Nature in 2010, five years after having written that they were OK with preprint servers []
  7. See Torny, Didier (2018). “Pubpeer: vigilante science, journal club or alarm raiser? The controversies over anonymity in post-publication peer review.” and Guaspare, Catherine, and Emmanuel Didier (2020). “The Voinnet Affair: Testing the Norms of Scientific Image Management.” Gaming the Metrics: Misconduct and Manipulation in Academic Research: 157. []
  8. See the classic book: Shapin, S., & Schaffer, S. (1985). Leviathan and the Air-Pump: Hobbes, Boyle, and the Experimental Life (Vol. 109). Princeton University Press []
  9. A journal in which one of the authors is an associate editor, as Raoult's critics have underlined []
  10. The story is quite different within the academic world, with “only” 21 citations so far, far less than the March study. In fact, many observational studies and trials were competing with this one []
  11. EDIT June 9th: James Watson gave a fantastic interview on an Australian radio station where he goes into detail about how he started and ran this 5-day inquiry; hear it there []
  12. On the standardization of journal policies, see Pontille, D., & Torny, D. (2017). Beyond Fact Checking: Reconsidering the Status of Truth of Published Articles. []
  13. There is a huge literature on this topic over the last 30 years; see as an example this article in The Lancet: Easterbrook, P. J., Gopalan, R., Berlin, J. A., & Matthews, D. R. (1991). Publication bias in clinical research. The Lancet, 337(8746), 867-872. []

Faustus' pact with Lucifer or… How Open Science comes down to sustaining Elsevier's data infrastructure in exchange for open access papers


“On these conditions following:
First, that Faustus may be a spirit in form and substance.
Secondly, that Mephistophilis shall be his servant and at his command.
Thirdly, that Mephistophilis shall do for him, and bring him whatsoever.
Fourthly, that he shall be in his chamber or house invisible.
Lastly, that he shall appear to the said John Faustus at all times,
in what form or shape soever he please.

I, John Faustus of Wittenberg, Doctor, by these presents do give both body and soul to Lucifer, Prince of the East, and his minister Mephistophilis, and furthermore grant unto them, that twenty-four years being expired the articles above written inviolate, full power to fetch or carry the said John Faustus body and soul, flesh, blood, or goods,
into their habitation, wheresoever. By me, John Faustus.

Faustus (image CC BY Bart Everson)

The legend of Faust has known many versions, but that of Christopher Marlowe, quoted above, is no exception to the common rule: it is the absolute thirst for knowledge that drives the scientist to conclude this pact, while the evil or deceptive nature of Lucifer does not play a major part in its making1. To apply this reference to the signing of an agreement between scholarly institutions, by definition producers of knowledge, and a publishing house, however powerful, normally only responsible for disseminating that knowledge, may therefore seem counter-intuitive. Yet, as we shall see, it is the reference that is required here, as the relationship between the two parties may potentially be inverted. With this new agreement, Elsevier will try to become the knowledge-producing entity, the one that will give these institutions and their authors the information they think they absolutely need.

From subscription to a Read & Publish pilot
to a full Publish & Read agreement

The relationship between the Dutch universities, represented here by SURFmarket B.V., and the publisher Elsevier is very old: it mainly consisted of the supply of journals, first as paper subscriptions, then through electronic access from the end of the 20th century until 2015. In March 2016, a new contract was signed containing not only subscription services but also provisions for the open access publication of a limited number of articles, originally 3600 over 3 years. This agreement was not as successful as expected: for example, 1300 articles had not been “consumed” at the end of this first agreement. Nevertheless, amendment after amendment – 7 in total – the contract was extended in terms of the journals concerned (Cell Press) and temporally until 20 April 2020.

In contemporary classifications, this agreement could therefore be considered a Read & Publish deal: a subscription fee, with open access publications produced at no additional payment. The first parts of the new contract show a reversal of this logic by displaying a unified cost for all the services provided by Elsevier: reading is no longer separated from publication in the pricing, even though the provisions for the former are much more complex and many pages longer than those for the latter.

Indeed, as is often the case in subscription contracts, numerous provisions govern the rights to access and read content, but also the duties of the publisher in terms of document supply and the scope of services. But, as we saw in the case of the Springer/DEAL agreement, the provisions for publication services can be relatively complex. This is not the case here: no financial exchange linked to each publication, no limit on the number of articles, no separation between publication in hybrid and full open access journals, so only two pages define the conditions of publication. Beyond the description of the workflow, one article should be highlighted:

Both parties are committed to reach 100% Open Access during the term of this Agreement, In line with this joint ambition, Elsevier offers Corresponding Authors the possibility to publish Gold Open Access in the widest possible range of Elsevier journals under the Terms of this Schedule 4. As per the effective date of this Agreement 95% of the journal articles by the Corresponding Authors are eligible to be published Open Access. For the remainder of the journal articles, Elsevier will continue to strive for sustainable immediate open access options across its journal portfolio to support the 100% Open Access goal.

As with a large number of technologies, lack of success is not necessarily an obstacle. Whereas, in spite of more than four years of possible publication under the previous agreement, only a fraction of Dutch authors had chosen this route, Dutch universities this time aim for 100% open access, and Elsevier promises them that almost all the journals it distributes will serve this end. While at the same time authorizing authors not to choose Open Access (p. 45), pushing this objective of 100% OA for corresponding authors' papers further away.

The whole scheme is close to the one signed by Elsevier and Bibsam, the Swedish consortium, after they spent almost 2 years with no deal. But the Swedes claimed, in a recently published article2, that they are actually paying less than before in total costs, while signing an agreement under which Swedish authors are almost mandated to choose OA publication.

More services means more costs

On the OA publication side, the Dutch contract is therefore not just a continuation of the previous one, since new journals are involved and technical provisions are made to publish “by default” in open access under a CC-BY license. Moreover, the volume of publishable articles – even if it was previously never fully consumed – is now unlimited. This expansion of the service is accompanied by a sharp increase in costs. If we take the amounts listed in the various amendments to the 2016-2020 contract and add the new amounts, we obtain the following graph, quite different from the Swedish one3:

Over a “long period” (9 years), we therefore observe a 40% increase in costs, meaning an inflation of more than 4.3% every year (a quick compounding check is sketched after the quotes below). Far from the assertion of “cost neutrality” in the OA2020 text of 2015 and the initial hypotheses of Coalition S, the merely potential transformation of all Dutch publications into open access articles is therefore extremely costly in this case and renews the observations of the serials crisis already made by SPARC 25 years ago. Even if the amount paid is constant between 2021 and 2024, there is no guarantee that it will not rise sharply again after the end of the current contract. Unsurprisingly, financial information was completely absent from the press release, Dutch institutions touting the new agreement's objectives as if they were already realised:

NWO President Stan Gielen said: “Enabling Open Access to research results has been a core mission for NWO since 2003. This agreement is a giant step in our collective ambition to provide 100 percent Open Access for all publicly funded research in the Netherlands.”
NFU / CEO of Amsterdam UMC Hans Romijn, said: “This is definitely a game changing agreement in open access publishing in medicine from both national and international perspectives, considering the large impact and the volume of Elsevier journals. This will certainly contribute considerably to the advancement of research, and, most importantly, better treatments for our patients.”
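For the record, the 4.3% annual figure quoted above follows from simple compounding: a minimal sketch, assuming the 40% total increase is spread over the 8 yearly steps between 2016 and 2024 (the exact amounts come from the contract amendments and are not reproduced here):

```python
# Compound annual growth rate implied by a 40% total cost increase.
# Assumption: 9 calendar years (2016-2024) means 8 yearly steps.
total_growth = 1.40   # costs multiplied by 1.4 over the whole period
steps = 8             # 2016 -> 2024

annual_rate = total_growth ** (1 / steps) - 1
print(f"{annual_rate:.1%}")  # -> 4.3%
```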

The same assertions have been made over the last 10 years about the agreements signed by various consortia, highlighting the open access part of such deals. They are, however, very different from the “revolutionary idea” proposed by Elsevier in Autumn 2019 about data. In fact, it was so revolutionary that it leaked out:

https://twitter.com/sarahderijcke/status/1190610725250764800

As Sarah de Rijcke, a distinguished science and technology studies scholar, underlines, Elsevier then tried to directly exchange open publications for data, continuing the Big Publishers' strategy of investing in scholarly infrastructure in order to maintain their profits while adopting open access for publications4. That led to a public discussion of the ongoing negotiations and a VSNU communication that denied “selling” metadata and research data to Elsevier. In December 2019, a press release reaffirmed that data remained the property of the universities and that some principles were adopted to avoid vendor lock-in. Let us now see how this has been dealt with in the final agreement.

Elsevier as a data company
and how you will be willing to pay for it

Apart from the introduction pages, one has to reach page 102 for the provisions on data and the “Open Science Services for Research Intelligence and Scholarly communication” that are part of the agreement. The first two pages of this section describe the collaborative principles quoted in the December 2019 press release, which look very consensual:

  1. interoperability and vendor neutrality
  2. transparency, inclusion and collaboration
  3. access to research data and metadata
  4. data portability

If we add to this the common governance structure specified in the last pages and the fact that each party retains its data at the end of the agreement, this part of the agreement can be considered a true joint collaboration. Nevertheless, Mephistopheles drapes himself in the details, and a full reading of the articles on page 104 underlines how much Elsevier now considers itself a data company. Firstly, by default, everything belongs to Elsevier, except what is directly “provided” by the institutions. Secondly, under no circumstances can intellectual property resulting from the development of services be shared. Thirdly, if a common intellectual property were to be created, a new agreement would be needed, in which Elsevier would have ownership and the institutions a free but non-exclusive right of use. Fourthly, all existing openly licensed data provided by the institutions are directly reusable by Elsevier. Fifthly, even in the absence of such data, Elsevier may develop equivalent or similar services with other partners. Finally, sixthly, if sensitive data or data belonging to third parties were to be included in the services, the responsibility would of course rest solely with the signatory institutions.

The contrast is therefore striking: on the one hand, Elsevier is (finally) ready to release the publications of all its journals under Publish & Read agreements in return for a fee; on the other hand, the publisher locks all the data and does not wish to share them under any circumstances, thus underlining how much they are now considered to be the real valuable object of the academic world5.

But what pilot services are implemented in the agreement? For the time being, and contrary to the subscription and open access publication services, none are specified. Only examples are given, in a table on page 103, reproduced in the FAQ and below:

  1. Aggregation and deduplication service based on CRIS systems: improves findability and visibility of NL research outputs by aggregating and deduplicating separate CRIS systems into a Pure Community module available to all institutions, which can serve as a building block for a NL open knowledge base.
  2. NL Research data: link research data from member institutes' affiliated researchers in subject or domain specific repositories into a Dutch knowledge base.
  3. Funding information: link NL research outputs to grants and funders (EC, ERC, NWO, RVO, ZonMw), to allow for improved tracking/assessment of the impact of funded research.
  4. Health Data Management: link NL health ‘data silos’ in a secure HDM platform.
  5. OA compliance as a service: a proposed service to make better use of the knowledge base for OA publication reminders, meet funder requirements, collect assets and reporting.
  6. Fair recognition and reward: a proposed service to integrate a wider array of metrics and success stories for a better, wider recognition of academics, including teaching, society outreach, management, etc.

This list contains extremely different objects: some look like pure IT services that could be provided by companies operating outside the academic world, built on shared data infrastructures. Others are based on the crossing and enrichment of very specific academic data, and are therefore likely to further feed Elsevier's databases, for example to build its own Open Science Monitor for diverse institutions. Finally, the last item on the list is quite staggering, since it is nothing more or less than the project of delegating to Elsevier a service for the individual evaluation of researchers, including, of course, open science dimensions.

Whether these pilots come true or not, this last part of the agreement underlines the extent to which it embodies a dystopian vision of Open Science, portrayed by Philip Mirowski as an extension of platform capitalism6. It strengthens Elsevier's position as owner of scholarly infrastructure, provides the company with potential models for new services and organizes digital labor to enrich the data it already owns. All that while institutions continue to pay huge sums for access to its publications, in exchange for the “liberation” of some thousands of open access articles which will, of course, drive web traffic to its servers. Maybe the new services will never see the light of day and this agreement will just be another Publish & Read. But if not, Faustus will not only have increased his dependence on the publisher, but will have empowered it to the point where it becomes the real information provider in their relationship, publications being reduced to “raw data”.


  1. This post was co-written by Quentin Dufour []
  2. Olsson, L., Lindelöw, C. H., Österlund, L., & Jakobsson, F. (2020). Cancelling with the world's largest scholarly publisher: lessons from the Swedish experience of having no access to Elsevier. Insights, 33(1), 13. DOI: http://doi.org/10.1629/uksg.507 []
  3. EDIT: part of the rise could also be attributed to the inclusion of new Dutch institutions in the agreement []
  4. See this wonderful conference paper: Posada, Alejandro, and George Chen. “Inequality in knowledge production: The integration of academic infrastructure by big publishers.” 2018 []
  5. On a side note, it remains unclear whether article metadata will be released under a CC0 license in Crossref, continuing or not Elsevier's anti-open-citations policy []
  6. Mirowski, Philip. “The future(s) of open science.” Social Studies of Science 48.2 (2018): 171-203. []

The perfect hacking of journal peer review or… The fastest way to become a Highly Cited Researcher

Since the beginning of the 21st century, the names of great fraudsters have spread beyond academic arenas, each bringing their biography, their practices and the astonished tale of the discovery of their misdeeds. This star status of fraudsters should neither hide the existence of famous cases in the past1, nor the multitude of ordinary misdemeanors and misconduct taking place daily in the academic world, which is hardly different from other professional circles in this respect. Nevertheless, they deserve a place in the Hall of Fame of academic fraudsters; so, before addressing the case of our new champion, Kuo-Chen Chou, let's review a few exemplary figures of this Hall of Fame, in alphabetical order.

Yoshitaka Fujii (2012): enduring Japanese anesthesiologist who holds the world record for the number of retracted articles (183). He spent his career inventing data and, despite a statistical analysis published in 2000 showing how “too nice” his numbers were, he was not really worried until 10 years later.2

Woo-Suk Hwang (2006): amazing Korean veterinarian and biologist, specialized in stem cells and producer of the first human clone, announced in a publication in Science. After he was accused of forcing his technicians to donate their eggs for his research, investigations revealed the total absence of human cloning. A national glory in South Korea and an international star, his public downfall was so brutal that he made the cover of Time Magazine.

Jan-Hendrik Schön (2002): industrious German physicist, working at Bell Labs on the limits of matter and life, able to co-author in less than two years seven papers in Nature, eight in Science and six in Physical Review. All of course have since been retracted, and it seems that all his research, including his thesis, was based more on his desire to stick to the expectations of theory, or those of his colleagues, than on the empirical results he claimed to have achieved.3

Diederik Stapel (2011): extraordinary Dutch psychologist, whose social experiments always confirmed the hypotheses made… since they were never carried out, but fabricated on paper and computer. Denounced by a whistleblower from his team, and 58 retracted articles later, he was the object of a sensitive New York Times portrait. His own production became an object of the psychology of deception, as his colleagues found small differences in style between his genuine articles and the fake ones.

From transparent peer review
to citation manipulation

These eminent members of the Hall of Fame, all men – women are an extremely small minority among its members – produced “false science”, but neither massively plagiarized nor attacked the peer review system. Rather, like good forgers, they provided journals with the expected raw material. However, over the last 10 years, there has been concern about how reviewers or publishers can indirectly influence the science produced, in a more subtle way. “Coerced citations”, “fake peer review” and “citation cartels” are all designations of practices that do not directly fudge the content of articles, but act on the margins by hacking into the journal peer review process.

So the old criticisms about the misdeeds of anonymity in journal peer review4 were reborn at great expense, and many debates about “transparent peer review” took place. Where in the past editors and authors were at the helm of these discussions, now the publishers are in charge and, above all, Elsevier. The company provided two in-house researchers with access to its back office, and they were able to compare the bibliographies of manuscripts with those of the published articles and check whether added references were coauthored by reviewers of the manuscript. Unsurprisingly, the authors of When Peer Reviewers Go Rogue concluded that citation manipulation exists, even if its level is quite low (0.79%).
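The comparison they ran can be sketched in a few lines (a hypothetical illustration of the idea, not Elsevier's actual pipeline: the data structures and names are invented):

```python
# Hypothetical sketch of the detection idea described above:
# flag references added between submission and publication
# that are co-authored by one of the manuscript's reviewers.

def flag_suspect_citations(submitted_refs, published_refs, reviewers):
    """Return references added during review that cite a reviewer.

    submitted_refs, published_refs: dicts mapping a reference id
    (e.g. a DOI) to its set of author names.
    reviewers: set of reviewer names for the manuscript.
    """
    added = set(published_refs) - set(submitted_refs)
    return {
        ref for ref in added
        if published_refs[ref] & reviewers  # shared author(s)
    }

# Toy usage with invented records
submitted = {"doi:10.1/a": {"Smith", "Lee"}}
published = {
    "doi:10.1/a": {"Smith", "Lee"},
    "doi:10.1/b": {"Chou", "Wang"},   # added during review
}
print(flag_suspect_citations(submitted, published, {"Chou"}))
# -> {'doi:10.1/b'}
```

As the quote below notes, the hard part in practice is not this comparison on one manuscript, but aggregating it across journals to see a pattern.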

At the same time, a research team led by the famous John Ioannidis sought to build a “clean” citation database for the most cited researchers. For them, this meant being able to separate self-citations from the rest, and it was on this occasion that they made a surprising discovery: the staggering level of self-citation of some colleagues. Indeed, as your citation count rises, you would expect more and more of those citations to come from distant colleagues. Not so for everybody:

“Vaidyanathan, a computer scientist at the Vel Tech R&D Institute of Technology, a privately run institute, is an extreme example: he has received 94% of his citations from himself or his co-authors up to 2017 (…) He is not alone. The data set, which lists around 100,000 researchers, shows that at least 250 scientists have amassed more than 50% of their citations from themselves or their co-authors, while the median self-citation rate is 12.7%” (Nature, “Hundreds of extreme self-citing scientists revealed in new database”, 19 August 2019).
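What such a rate measures can be made concrete with a minimal sketch (illustrative only: Ioannidis's team worked on Scopus data with a more refined definition of the co-author network, and the toy data below is invented):

```python
# Illustrative computation of a self- and co-author-citation rate,
# in the spirit of the measure discussed above.

def self_citation_rate(focal_author, coauthors, citing_papers):
    """Share of citing papers authored by the focal author
    or by one of their co-authors.

    citing_papers: list of sets of author names, one set per
    paper citing the focal author's work.
    """
    insiders = {focal_author} | coauthors
    self_or_coauthor = sum(1 for authors in citing_papers
                           if authors & insiders)
    return self_or_coauthor / len(citing_papers)

# Toy data: 2 of 4 citing papers involve the author or a co-author
citing = [{"Vaidyanathan"}, {"Vaidyanathan", "X"}, {"Y"}, {"Z"}]
print(self_citation_rate("Vaidyanathan", {"X"}, citing))  # -> 0.5
```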

But then, what are the Highly Cited Researchers, whose numbers are one of the components of the Shanghai Ranking, actually doing? Are they renowned for their influential results, or are they more adept manufacturers on the citation chain? Bibliometricians would say that a “high” self-citation rate is not necessarily a sign of fraud, but that a detailed inquiry would be needed. This is where we return to our newest member of the Hall of Fame, whose work is worthy of close consideration.

A perfect hacker,
always greedy for citations

It all starts with a mundane story: a reviewer asks for additional references in a manuscript. But the request itself is not so trivial: it consists of 35 references, the vast majority of which are co-signed by him, and he indicates that his recommendation to the editors, on whether or not to accept the manuscript, will depend heavily on their inclusion. It should also be specified that this request was made not for a single review, but for each manuscript that passed through his hands. In describing their decision to ban this unnamed reviewer, the editors did not indicate how long this practice had existed. Indeed, it is unusual, to say the least, to request the addition of so many references, and one might question their own responsibility in this matter if it lasted, as their reference to the “most recent reviews” seems to imply. To which they reply:

“One might ask how this reviewer got away with submitting multiple reviews containing coercive requests for citation before being banned. The shortest explanation is that excessive self-citation demands are generally not seen as an ethical problem until a pattern is established, and a decentralized peer-review system is not amenable to detecting patterns” (Wren, Jonathan D., Alfonso Valencia, and Janet Kelso. “Reviewer-coerced citation: case report, update on journal policy and suggestions for future prevention.” (2019): 3217-3218).

And in fact they inquired with other journals, which suggested the same pattern of behaviour for this reviewer. A year later, in early 2020, the investigation led to an editorial in another journal, the Journal of Theoretical Biology (JTB), which revealed perhaps the most complete case of citation manipulation to date. Indeed, the hacker was no longer a simple reviewer there, but a “handling editor” for JTB, which enabled him to act at several stages of a manuscript's life, with a single objective: to accumulate citations.

  1. He took charge of many manuscripts from his research centre to ensure that they were well treated (conflict of interest).
  2. He chose the reviewers requested by the authors, or designated colleagues from his own centre (conflict of interest), or even reviewed manuscripts himself under a false name (ghost peer review).
  3. In many cases, when the reviews came back, he would ask for the title of the article to be changed so that it explicitly referred to his own algorithm, as well as for a discussion of his own work in the introduction and conclusion (coerced citations).
  4. As a result, he requested the addition of a very large number of references (up to more than 50) to the bibliography of the manuscript (coerced citations).
  5. Just before the acceptance of the manuscript, he was added as a co-author of the article (gift authorship).

We therefore observe two complementary types of behaviour. On the one hand, hidden from the outside, hacking the journal's peer review flow: capturing the evaluation process to ensure that the articles most “favourable” to his citation count were actually published – sometimes with his coauthorship. On the other hand, visible to the authors and perhaps to the editor-in-chief and publisher, hacking the byline, content and references of the manuscript through imperative requests for inclusion. Thus, ordinary manuscripts became articles loaded with citations of the hacker.

It can be noted that at this stage the name of the reviewer was not given by JTB, which led some to make educated guesses on Twitter. News articles, in Nature among others, soon followed and revealed his identity: Kuo-Chen Chou, a retired Chinese-American biophysicist. We then learned that he had been a member of the Highly Cited Researcher “club” for years5. So, as usual, this extraordinary case will be treated as “rare”, and counter-measures have been taken, such as an algorithm written by one of the Bioinformatics editors. But ordinary gaming will still happen, whether in so-called predatory journals or at “prestigious” publishers, with smarter colleagues less greedy for citations and not obsessed with the HCR club. Will you be one of them?6

  1. For example John Darsee; see Broad, William; Wade, Nicholas (1983), Betrayers of the Truth: Fraud and Deceit in the Halls of Science, London: Century Publishing, ISBN 0-7126-0243-7 []
  2. For a quick view of this case, see Pontille, David, and Didier Torny. “Behind the scenes of scientific articles: defining categories of fraud and regulating cases.” (2012). []
  3. He was the subject of a wonderful book, Plastic Fantastic, ISBN 978-0-230-22467-4 []
  4. See David Pontille and Didier Torny, “The blind shall see! the question of anonymity in journal peer review.” Ada: A Journal of Gender, New Media, and Technology, No.4. doi:10.7264/N3542KVW (2014). []
  5. the Web of Science Group didn’t list him in 2019 as he had, like others, a high rate of self-citations but, as stated, “Although this list is updated and refreshed each year, a Highly Cited Researcher is always a Highly Cited Researcher—whether their name was included in 2013 or 2019.” []
  6. I am aware that this post contains two self-references but they won’t be counted in any database []