
Dealing with the grey zone of publishing or… how I will never be an editorial board member of MDPI Publications.

The “predatory publisher” category raises more questions than answers. Just like “academic fraud”, it tends to validate a black & white world in which rules and norms are clear-cut and universally shared through time, disciplines and countries. There is now an extensive literature presenting lists, criteria and even automatic detection for such publishers or their journals, most of it being written without questioning the label “predatory”1. More interestingly, there are a few papers describing the point of view of authors publishing in such vilified outputs, showing both the deceptions performed by the publisher and the good faith of most authors2.

Such results could be downplayed on the grounds of the authors’ peripheral position or the low power of the studies. On the contrary, everyday stories show that separating the wheat from the chaff is rather complex, because a huge and diverse “grey zone” exists, even for scholars well versed in the arcana of publishing. This post aims to describe such an example by making the story personal rather than abstract, using testimonies, personal opinions and statements. In this world, the choice to review, write or edit for a given journal or publisher remains tricky, based on existing alternatives, personal ethics and situated decisions.

It has become an almost daily ritual: an invitation to present at a conference, to submit a manuscript to a journal, or even to join an editorial committee, sent by people you don’t know, with a vaguely personalized message built from your name and the title of an article you’ve written, with zero relevance to the issuing conference or journal. On August 9th, 2023, I received one of these messages, entitled: “[Publications] (ISSN 2304-6775) Invitation to Serve as an Editorial Board Member”. It caught my attention for three reasons: firstly, it is the time of the year with the lowest e-mail volume; secondly, an apparent MDPI account was copied on the mailing; and thirdly, I know this journal, some of its articles I found relevant or even enlightening, and others cite some of my work (which is not a sign of quality but could explain this invitation). So let’s read it again:

We would like to invite you to join the Editorial Board of Publications (ISSN 2304-6775, https://www.mdpi.com/journal/publications). Publications is open access and peer-reviewed, covering all aspects of scholarly publishing and communication. You can find the proposed scope of the journal here: https://www.mdpi.com/journal/publications/about/.
Publications is abstracted and indexed by Scopus, ESCI, DOAJ.

This presentation is not typical of “predatory” emails, which begin by flattering the recipient of the message, emphasising the importance of their work and what they can contribute to the journal, and at the same time inviting them to join the editorial board and submit a manuscript.

Editorial Board Members will be responsible for final decisions on  manuscripts in their field of expertise and may be invited to review manuscripts. The initial term lasts for 2 years, and entails:
• Pre-screening and making decisions on new submissions related to your research interests;
• Providing input or feedback regarding journal policies;
• Helping to promote the journal among your peers or at conferences;
• Attending Board Meetings to suggest journal development strategies;
• Reviewing manuscripts.
• Helping to attract suitable expert authors.

The job description for editorial board members is typical of a journal owned by a commercial publisher: in a nutshell, you run the journal and you promote it, without being the decision-maker on its policies and, obviously, without compensation, except in what many colleagues call a prestige economy3.

If you accept our invitation, please provide your contact information and a list of keywords reflecting your expertise, in accordance with the entry examples at https://www.mdpi.com/journal/publications/editors/. If possible, please also send, for our records, a CV or an official website with your biographical data (including a list of your publications). If this is of interest, but comes at an inopportune time, you may have a recommendation for a senior expert to serve as an Editorial Board member. If you have any questions, suggestions or recommendations, please let us know.

This detailed procedure is another hint pointing towards a genuine invitation to the board of a standard commercial publisher’s journal, which is rarely considered a “grey zone” output. Nevertheless, there is another surprising piece of information about the benefits attached to the position of editorial board member. I don’t know whether it is standard practice in APC-based journals5. Yet this three-line paragraph looks extremely problematic to me, for three cumulative reasons. Once again, others could consider it completely benign, owing to shared ethics and practices, which would defuse everything that follows.

Additionally, you are welcome to publish with the journal—this will be free of charge once accepted for publication. The term for the Editorial Board membership lasts for two years and can be renewed.

This paragraph alone could have made me cringe and refuse the invitation. But it also comes from a publisher which is piling up 50 shades of grey stories.

At the end of the last century, a Chinese chemist working in Switzerland invented a preservation practice: depositing a sample of the compounds associated with an article. To do so, he founded Molecular Diversity Preservation International (MDPI). He then created a journal, Molecules, published by Springer, took the position of editor-in-chief and explained his aim:

Soon, the relationship with the publisher turned sour, and the chemist, Shu-Kun Lin, decided to add to MDPI operations the publication of a journal entitled… Molecules. Springer threatened to sue him, claiming it owned the title, but finally did not. The lone rebel quickly developed a successful business formula, sometimes labelled “low-cost full open access journals”. It was a striking example of the “new journals” that the open access activists pushed for in the BOAI. Later, MDPI was rebranded the Multidisciplinary Digital Publishing Institute, became one of the top 5 academic publishers as far as volume is concerned, with most journal titles being as generic as possible, from Acoustics to Youth, by way of Genes, Physics, Societies and Software. So the initial declaration of independence made by one academic grew into a worldwide success story, but it came along with a dubious reputation and negative narratives.

Let’s give a few examples of these stories. A neurosurgeon received a review request from the Journal of Clinical Medicine and responded with a very negative assessment within 2 days. In his view, there were major methodological problems preventing publication, in particular discrepancies between the protocol described and the reality of the clinical trial. Two days after sending his review, he received a request to review the revised article. This new manuscript was very different from the first:

Despite these problems, two other reviewers accepted the manuscript, while another colleague also recommended rejection, ‘forcing’ the editor-in-chief to refuse publication. But the story doesn’t end there, because the same manuscript reappeared in another MDPI journal, Geriatrics… in a version very similar to the first manuscript submitted. The reviewer contacted the editor-in-chief, shared his experience, and the manuscript was ‘withdrawn’ in agreement with the authors. But like a B-movie with undead enemies who never disappear, the manuscript was eventually published in a third MDPI journal, Medicina. This story is typical of the grey zone: authors willing to do anything to get published, complacent reviewers, a system of manuscript transfer to ‘optimise’ publication, but also honest reviewers and editors willing to listen even when they want to reject.

Let’s turn now to the employees of MDPI, those who keep the publication supply chain running. In Bucharest, a young employee died of a heart attack on MDPI’s premises, and the local media questioned the employer’s responsibility in dealing with the health emergency and, more broadly, its working conditions. As in other capital-intensive production facilities, journal editors are partly judged and rewarded on the basis of quantitative indicators. Not only does the APC-based model generate income only if the manuscript is accepted, but the workers are directly incentivised by this ‘success’:

This production culture, which combines speed and pressure, is felt by both editors and reviewers. It is always through this kind of experience and testimony that colleagues react, in a ‘me too’ fashion. This often leads them to break off all relations with the publisher, its journals and its editors, despite frequent reminders by e-mail:

Responses to the neurosurgeon reviewer story on Bluesky: https://bsky.app/profile/supersciencegrl.co.uk/post/3l2kapfizr626

These experiences happen within the frame of “normal” journals, but there is an elephant in the room: MDPI developed, or rather invented, the concept of multiple special issues running at the same time for any given journal. As a social scientist, I normally enjoy special issues on my topics of interest and cherish a well-crafted collection of papers which, ideally, are in dialogue with one another. For example, “Women in Chemistry Science” seems like a very interesting topic for the already cited Molecules. But MDPI twisted this nice idea, making the traditional “special issue” another grey object.

I said at the beginning of this post that I would give priority to a situated view, to experience rather than objective data. But for those of you who don’t know, here were the figures for MDPI on the importance of special issues:

Source: https://paolocrosetto.wordpress.com/wp-content/uploads/2021/04/overall_articles_si_waffle.png

This figure is extracted from a very informative blog post by Paolo Crosetto, written almost 4 years ago, and entitled “Is MDPI a predatory publisher?” in which the author answers “yes and no”.

That very fine post drew heavy comments on its data and interpretation, showing very contrasting views on the nature of MDPI publishing. Furthermore, by the end of 2021, the number of special issues had exploded, with some journals publishing more than one a day. On the producer side, we now know that the special issue system comes with a similar “point system” that pushes MDPI employees to publish more. But how do we explain this success on the authors’ side? In my opinion, it is a perfect combination of two things that at first sight might not seem to combine: the close relationships of small groups of colleagues sharing peculiar interests, and a large-scale industrial production model. Again, let’s be concrete and talk about my own experience. After not responding to the invitation to join the editorial board, I received an invitation 14 months later to manage a special issue:

Dear Dr. Torny,

This is Mike Tang, Section Managing Editor of Publications.
We are planning to launch a new Special Issue, "Preprint and Open-Access Publishing", in the journal Publications, and we would like to invite you to act as the Guest Editor for this issue.

Wow! Preprint and OA publishing, exactly the topics I am interested in. I would easily find 10 colleagues in different countries who would provide very interesting and innovative contributions from diverse perspectives. Sure, I would also suggest expert reviewers, find 10 APCs in grants, manage the whole process and be proud of the results. All for free: I forgot to tell you, there is no financial incentive for getting the job done, no APC waiver, except fulfilling your dreams as a knowledge creator and disseminator. And all the benefits of such a special issue, as well as your duties, are described in depth by the publisher. Despite being emailed the exact same message 3 more times, I did not answer.

After this narration, I could state, with other colleagues, that some elements of MDPI’s business model and practices are unpleasant, that they incentivize problematic publications, or that its special issue model is deeply fraught at scale. That is a personal opinion based on empirical material; it is not enough to stand in an academic article. Especially as the “predatory” label has led to strong responses from publishers, as has been seen again in the MDPI case. In July 2021, an article was published on the question of the definition of predatory publishing, taking the empirical example of MDPI journals. Its conclusion could be summed up like this: “The formal criteria together with the analysis of the citation patterns of the 53 journals under analysis all singled them out as predatory journals.”

Its fate illustrates the consequences of a black-and-white representation and the difficulty of qualifying the ‘grey zone’ in academic discourse. In fact, less than a month later, MDPI responded point by point to this article on its website11. Above and beyond the refutation of some empirical data, MDPI pointed out the problems of defining the predatory category, thus invalidating the overall conclusion of the author. For example, they discussed the number of editorial board members:

MDPI also insists on several occasions that it is not unique among publishers, or at least among commercial publishers. It even acknowledges the possibility of predatory publishing… naturally, at other publishing houses!

So the author published a paper claiming that a lot of MDPI journals were predatory, and the publisher responded in a very civil, academic way. What happened next? One month later, Research Evaluation/OUP published an expression of concern without detail, then retracted the article and simultaneously published a new version, revised by the journal and the author. Beyond the fact that such a process is rare (you would expect a simple correction), has MDPI played a role in the process? Was there pressure? How did the author feel? As often, we have to turn to Retraction Watch for all the details, with interviews of the author and MDPI12. We will focus here on MDPI’s position:

So not only does MDPI claim to have been left in the dark, it also, ironically, considers that the largest university press (and one of the oldest) does not follow best practices. Their call was heard, as a PDF was published a month later as supplementary data, depicting all the changes between the original and the revised version. Let’s simply take the following two examples:

In both cases, the author has kept her general argumentation, but she has abandoned her objective language in favor of hedged formulations that leave room for subjectivity: we may think that these journals are predatory, but we no longer assert, beyond any possible discussion, that they are classified as such. In other words, we’re back in the grey zone, where it’s up to everyone (authors, reviewers, publishers, institutions) to decide what they want to do.

And this is where we go back to our starting point. Why did I receive that invitation to join the Publications editorial board in the first place? Because the vast majority of the editorial board of Publications had resigned a few months earlier, relatively discreetly. The most immediately visible trace came from its former editor-in-chief, Gemma E. Derrick, who started the exit movement.

Source: https://x.com/GemmaDerrick/status/1636719479441727488

Her justification is typically framed as a ‘grey area’ one: it is not possible to articulate good editorial practices within her specific scientific collective given the multiplication of stories about the problematic practices of MDPI, the owner and publisher of the title. The subsequent resignations were reported by a Norwegian newspaper13. In addition to the reasons given above, there is the fact that this particular journal deals with publications as its very subject, and so should be all the more in line with current publishing standards. I was already aware of the resignations when I received the first MDPI email, and that was an additional reason not to accept the invitation: doing so would have belittled the move made by colleagues whom I hold in high esteem and failed to show solidarity with the position they had thus expressed.

Unlike other cases of mass resignation from journals, Publications editorial board members did not directly criticize their own treatment by MDPI: they targeted the publisher’s general policies rather than its handling of their specific journal. Their resignation reminds us that we can decide, in each of our micro-acts, in which scholarly communication world we want to live. Let’s take a final MDPI example: one journal had really become its flagship, the International Journal of Environmental Research and Public Health (IJERPH). Its growth seemed limitless, notably through the special issue format, reaching the incredible number of 16,889 articles published in 2022. And then, in April 2023, Clarivate announced the delisting of more than 50 journals, notably many Hindawi journals, but also MDPI’s IJERPH. What were the consequences? MDPI’s PR stated that 90% of its content was still listed by Web of Science. But authors fled in droves as soon as the announcement was made, even before the journal actually lost its Journal Impact Factor.


Source: https://scholarlykitchen.sspnet.org/wp-content/uploads/2023/09/Figure-1-1024×530.png

This trend was confirmed in 2024, with the total number of IJERPH articles amounting to less than 10% of its 2022 peak, raising the question of authors’ responsibility. All it took was for Clarivate’s ‘quality signal’ to disappear for them to turn their backs on a journal in which they used to publish en masse. This is the magic of the grey zone, where one can venture without any real consequences. Playing with the rules, bending practices, encouraging mass production: these are not specific to MDPI, but are shared, to varying degrees, by all commercial publishers, with authors, and most often editorial teams, complying. And it is probable that megajournals like Elsevier’s Heliyon and Springer’s Cureus will meet a similar fate.

  1. On the geopolitical consequences of that position, see Taşkın, Zehra, Franciszek Krawczyk, and Emanuel Kulczycki. “Are papers published in predatory journals worthless? A geopolitical dimension revealed by content-based analysis of citations.” Quantitative Science Studies 4.1 (2023): 44-67, https://doi.org/10.1162/qss_a_00242 []
  2. Boukacem-Zeghmouri, Chérifa, Lucas Pergola, and Hugo Castaneda. “Profiles, motives and experiences of authors publishing in predatory journals: OMICS as a case study.” (2023) []
  3. For example, Tennant, Jonathan P., et al. “Ten hot topics around scholarly publishing.” Publications 7.2 (2019): 34. https://doi.org/10.3390/publications7020034 []
  4. This is a theoretical division; the actual tasks performed are another story. See, on the case of Diamond journals, Dufour, Quentin, David Pontille, and Didier Torny. “Supporting Diamond Open Access journals.” Nordic Journal of Library and Information Studies 4.2 (2023): 35-55. https://doi.org/10.7146/njlis.v4i2.140344 []
  5. To my knowledge there is no data on these policies towards editors, except on the question of paid editorial board members, especially in biomedicine []
  6. Teixeira da Silva, Jaime A. “The Conceptual ‘APC Ring’: Is There a Risk of APC-Driven Guest Authorship, and Is a Change in the Culture of the APC Needed?” Journal of Scholarly Publishing 55.3 (2024): 404-425. https://doi.org/10.3138/jsp-2023-0060 []
  7. Lin, SK. Editorial: A Good Yield and a High Standard. Molecules 1, 1–2 (1996). https://doi.org/10.1007/s00783005000 []
  8. Rene Aquarius, “My reviewer experience at MDPI”, August 2024, https://deevybee.blogspot.com/2024/08/guest-post-my-experience-as-reviewer.html []
  9. Young employee’s death puts workplace culture in spotlight at publisher MDPI, Retractionwatch, 22nd October 2024 []
  10. Paolo Crosetto, “Is MDPI a predatory publisher?”, 12 April 2021 []
  11. MDPI: Comment on: ‘Journal citation reports and the definition of a predatory journal: The case of the Multidisciplinary Digital Publishing Institute (MDPI)’ from Oviedo-García, []
  12. Article that assessed MDPI journals as “predatory” retracted and replaced, Retraction Watch, 8 May 2023. Once again, this became a much-commented post []
  13. https://www.khrono.no/truer-med-a-flykte-fra-tidsskrift-etter-at-redaktoren-ble-kastet/794389 []

Who wins after a divorce?… or how to interpret the new DEAL-Elsevier agreement

Imagine that you are a young researcher in Germany, having started your thesis in September 2018. For the last 5 years, you have had no legal access to articles published by the world’s largest publisher, Elsevier. Your institution has saved hundreds of thousands or even millions of euros, but you don’t really know where that money has gone. By contrast, on a day-to-day basis, then as a PhD student, now as a post-doc, you tinker with your access by writing to authors, asking your colleagues abroad if they can send you this article, requesting your library to buy that crucial paper, scanning preprints, using the unpaywall button or, late at night from home, typing the full combination of letters and signs to reach the platform whose name you must never utter or write.

To my knowledge, this divorce between a major publisher and a national consortium, DEAL, followed by a reconciliation, has been the longest for a very rich country. This post analyses how the separation happened, what is known of the long period of divorce during which no German institution had a subscription to ScienceDirect, and finally the reconciliation agreement published on September 6th, 2023 and validated in January 2024.

From harsh talks to full divorce (2016-2018)

Indeed, it was not for lack of money that DEAL did not sign with Elsevier, but because the conditions for signing were not met. By contrast, reading DEAL’s agreement with Springer-Nature, analysed at length 3 years ago, shows what was expected: an agreement including subscription and open access publication, all at a cost deemed reasonable by the German consortium. So how did they get to a “no deal”? As is often the case when relying on past information from institutional sites with changing policies, most documents cited below have disappeared from the DEAL website and are therefore captures made by the Internet Archive.

At Elsevier, serving research is our paramount goal. We have therefore chosen to continue providing access to Elsevier journals for dozens of German institutions that cancelled their individual subscriptions at the end of 2016. They did so anticipating that a new Germany-wide license agreement would be in place by January this year, which we regret so far has not been achievable. We strongly believe that access to high-quality research is important for German science. The continuing access for the affected institutions will be in place while good-faith discussions about a nationwide contract carry on. This reflects our support for German research and our expectation that an agreement can be reached.”1

I hope one day some colleagues will systematically study the rhetoric of big publishers’ PR. Anyway, the statement above is typical of a service industry that wants us to believe its aims are totally aligned with those of its clients. Imagine the reverse situation, where DEAL would state: “at DEAL, assuring service providers’ profit is our paramount goal…”. Back to our main topic: the unconditional reconnection decided by Elsevier is not unusual; at the same time, it happened, for example, for Taiwanese institutions in a similar situation2. But Elsevier’s hopes for a soon-to-be-signed new German agreement would not be fulfilled. Indeed, after these back and forths, the negotiations stalled, leading to a full divorce by mid-2018, as stated by the German Rectors’ Conference, which had “no choice”:

“The excessive demands put forward by Elsevier have left us with no choice but to suspend negotiations between the publisher and the DEAL project set up by the Alliance of Science Organisations in Germany.” That was the verdict of the lead negotiator and spokesperson for the DEAL Project Steering Committee, Prof Dr Horst Hippler, the President of the German Rectors’ Conference, speaking in Bonn, where the last discussion took place this week.”3

At this point, we shall note that all the cited documents are written in English, while negotiations surely happened in German. DEAL clearly intended to make its moves very public and widely known beyond the Federal German space and Mitteleuropa.

Learning to work without simple legal access (2019-2022)

So Elsevier pulled the plug in July 2018 and everything went quiet after almost two years of turmoil. That was not a given: you could have expected protest letters, petitions or lobbying from unsatisfied lay researchers to multiply as a whole nation of scientists was cut off from at least a fifth of the published literature. To lift the veil on the actual frustrations and losses resulting from the switch-off, it was… Elsevier that commissioned a survey in the summer of 2019, the summary results of which can still be seen on the pages of one news agency.

Most German researchers agreed that losing access to ScienceDirect made their research activities less efficient (61%) and delayed the production of research output (54%). High-quality research, the PR adds, requires access to current, international research results: 49% of the scientists surveyed believed that the lack of access to new research findings leads researchers to miss current developments or to become aware of them only with a delay, and 44% of respondents feared that this would have a negative impact on the quality of their research. All in all, 84% of the researchers surveyed thought ScienceDirect was important or somewhat important, while 76% supported or strongly supported the restoration of full access to ScienceDirect in Germany.

Of course, no raw data has been published and the study itself has not been shared beyond this PR. Nevertheless, in the body of the text, Elsevier mentions another ‘independent’ study carried out by the University of Münster. Like the previous one, this is not an actual academic study, but a library survey, published only on their blog, in German. Despite its limitations (size, a single institution), it presents some interesting, and most probably unique, results on the representations of German researchers one year after the cut. In particular, the following graph should be highlighted:

Extract from the Münster Universität survey, whose results are presented here (in German).

The orange answers indicate respondents’ agreement, and the statements have been ranked in descending order of positive responses. They show a mixed picture of opinions, both across the population as a whole and, for many respondents, from one question to the next. Two-thirds (66%) agreed with the statement “I need more time to get the literature”, and 58% thought that the right thing to do was to put pressure on Elsevier to give in, which was also the option with the fewest disagreeing votes (5%). That does not imply support for the shut-off: in fact, 55% agreed that “No deal is no option – negotiations should be resumed as soon as possible”, and 46% that the lack of access was “a serious competitive disadvantage”.

While 43% agreed that “Elsevier as a profit-orientated company would only harm science”, and only 11% disagreed, only 29% would “refrain from writing or reviewing articles for Elsevier journals”, against 40% who would still do so. After some questions on the importance of Elsevier journals and the use of the spared funds, the last question shows another divisive view on the resumption of negotiations, with only 16% in favour of it – which, of course, was not addressed in the Elsevier PR mentioned above.

These two surveys are the only public manifestations of a debate in Germany during this period. If opinions remained relatively private, what about practices? Did the impossibility of immediate legal reading actually have an impact on the way German academics write, on their choice to publish in Elsevier journals, or on their productivity? To my knowledge, and through extensive use of Matilda, only two academic articles have addressed these issues. The first is counterfactual, in that it looks at the behaviour of authors affiliated in Germany, in chemistry, for Springer and Wiley, with which DEAL has signed agreements. Published in 2021 in economics, it only considers the first year of the agreement (2020), in comparison with the previous period and with a control group with no agreement of this type. Nevertheless, the authors already measure some effect:

“researchers’ submission behavior in the field of chemistry has changed to some degree, as eligible researchers have increased their publications in Wiley and Springer Nature journals at the cost of other journals. While the effect is not overly large yet, it is statistically significant, and it may increase over time, as the agreements become even more well-known among scientists. Hence, journals covered by the DEAL agreements appear to have a competitive advantage in attracting authors”.4

If signed agreements raise attractiveness, then unsigned ones should diminish it. The second study deals with the latter by considering the evolution of the publication and referencing activities of the whole population of German authors in Elsevier journals, with no control group. Published in 2023 in scientometrics, it is based on more than 400,000 articles and more than 33M references:

“We also observe year-on-year decreases in the proportion of citations, although the decrease is smaller. We conclude that negotiations with Elsevier and access restrictions have led to some reduced willingness to publish in Elsevier journals, but that researchers are not strongly affected in their ability to cite Elsevier articles, implying that researchers use other methods to access scientific literature.”5

The two studies therefore show that the structure of publications is affected by the agreements, whether signed or not, but only marginally, at least over a short period. Furthermore, reading seems to be remarkably unaffected by the lack of legal and rapid access to the literature. To enable simple and legal reading, it is likely that other internal work was produced by the consortium or that self-support systems were put in place, similar to what the Swedish libraries deployed during their own breakup with Elsevier6. Beyond these studies, there is anecdotal evidence, given by colleagues, but also an interview with a member of the negotiation team, Dr. Bernhard Mittermaier, head of Forschungszentrum Jülich’s Central Library, which tends to show that they were tracking the rate of publications:

“The option to publish with Elsevier was not affected. Some scientists, however, asked me whether a publishing boycott would make sense in view of the fact that many editors from Germany – including Prof. Wolfgang Marquardt – had discontinued their work for the publisher with reference to the stalled DEAL negotiations. In fact, Elsevier’s share of all Jülich publications decreased from 26 % in 2018 to 18 % in 2022. Across Germany, there was a decline from 19 to 15 %. This may also be a reason why Elsevier returned to the negotiating table.”

In the end, it is reasonable to consider that German researchers adapted to a life without ScienceDirect over the long term, still reading articles published by Elsevier but publishing less in journals disseminated by it. What the French and British did not dare to attempt after lengthy negotiations, the Germans did, with very substantial savings and a diminished dependence on the biggest commercial publisher. But what happens afterwards, when the time comes for one or other of them to consider recontracting?

Dealing again… on different terms (2023-2024)

2023 began, as in previous years, without ScienceDirect for German researchers. Im Westen nichts Neues (“all quiet on the Western front”), as a fellow economist lamented:

 

Bartosz Bartkowski tweet

In fact, Elsevier had returned to the negotiating table in autumn 2022 and, after a four-year drought, seemed ready to make concessions that would have been unthinkable four years earlier. The negotiations took place behind closed doors, until the sudden announcement of their success at the beginning of September 2023, followed by the publication of the contract itself. Let’s dive into it, as DEAL has always been transparent about its agreements (nice PDF, full text and monetary information…), published under a CC-BY-ND license7.

We will not delve into the details of the usual characteristics of this type of agreement (definition of the parties, services expected, users authorised to read, corresponding author limitations, etc.), but will instead focus on the most central elements and on some unique features compared to the bodies of agreements analysed elsewhere8. This agreement is a “classic” Read & Publish, which includes in its core payment articles published in hybrid journals, but not articles in full open access journals, for which the fee is simply reduced by 15% or 20%. It also includes a back catalogue upgrade for all institutions, at a total cost of €10m. It is a “pay as you publish” agreement, with a PAR fee for each article, depending on whether it appears in a “regular” journal (€2,500) or a Cell Press/The Lancet journal (€6,450), with an annual increase of 3% and 4% respectively9.
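To make the fee mechanics concrete, here is a minimal back-of-envelope sketch, assuming the stated increases compound annually over a three-year horizon (the horizon itself is my illustrative choice, not a term of the contract):

```python
# Illustrative projection of the per-article PAR fees, assuming the 3%/4%
# increases compound annually; the three-year horizon is arbitrary.
def project_fee(base_fee: float, yearly_increase: float, years: int) -> list[float]:
    """Fee for each successive year, starting from the base fee."""
    return [round(base_fee * (1 + yearly_increase) ** y, 2) for y in range(years)]

regular = project_fee(2500, 0.03, 3)      # "regular" hybrid journals, +3%/year
cell_lancet = project_fee(6450, 0.04, 3)  # Cell Press / The Lancet titles, +4%/year

print(regular)      # [2500.0, 2575.0, 2652.25]
print(cell_lancet)  # [6450.0, 6708.0, 6976.32]
```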

This payment model has two consequences that are quite specific to this agreement. Firstly, with the exception of the back catalogue, institutions have no front money to commit. Whereas in the past some agreements offered “tokens” or “waivers” for publication, the opposite is now true: you only start to pay after publication. Secondly, this provision could encourage free riding: as with almost all agreements of this type, the corresponding author is offered, as a priority, the option to publish in open access under the CC-BY licence, but he or she can refuse. There is also a provision in the contract that prevents such refusals to publish in open access from being organised at scale, since all the publications are counted:

“For the avoidance of doubt, the applicable PAR fee for Core Hybrid journals for the year of the acceptance date will be applied to both open access and subscription articles in these journals and to subscription articles published in Cell Press and The Lancet journals.”

So, despite the diminishing share of articles observed during the absence of an agreement and the lack of front money, Elsevier has a certain guarantee of revenue, as 18%, 19% or 20% of German research production will end up in one of the journals it disseminates. In exchange, the company had to accept very strict conditions on the data generated by German users. A full page (section 7.6) of the agreement is dedicated to data privacy, with reminders of legal provisions derived from the European GDPR regulation. DEAL and Elsevier will co-supervise the whole data processing, the latter refraining from using any personal data without the consent of users. On this point, a loophole was anticipated by forbidding any general opt-in device: German colleagues will be able to fully use ScienceDirect without signing any consent. The matter is so sensitive that a workshop is planned during the first year of the contract on automatically erasing part of the IP address when IPs are not located in professional settings.
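For readers less familiar with this kind of provision, partial IP erasure is commonly implemented by keeping only the network prefix of an address; the agreement does not specify the mechanism, so the sketch below is purely illustrative:

```python
# Purely illustrative: one common way to "erase part of an IP address" is to
# keep only the network prefix, so the address no longer identifies a single
# household. The DEAL-Elsevier agreement does not specify the actual mechanism.
import ipaddress

def partially_erase(ip: str) -> str:
    addr = ipaddress.ip_address(ip)
    prefix = 24 if addr.version == 4 else 48            # keep /24 (IPv4) or /48 (IPv6)
    network = ipaddress.ip_network(f"{ip}/{prefix}", strict=False)
    return str(network.network_address)

print(partially_erase("192.0.2.123"))       # -> 192.0.2.0
print(partially_erase("2001:db8::abcd:1"))  # -> 2001:db8::
```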

Without doubt, Elsevier’s transformation into a data company and the growing controversy surrounding its new business models for reselling user data10 have been closely observed in a country so keen on privacy. Still, despite these worries, DEAL signed the deal and did not include any fines in case these limits were breached11. But what about the signing by German HER institutions?

Conclusion: which savings, for which uses?

In fact, there was still a little uncertainty when the agreement was unveiled, as a four-month period was about to begin during which the institutions would each have to indicate whether they would sign the agreement. It could only be ratified if at least 70% of the institutions approved it, and fees were lower if 90% did. On 15 January 2024, DEAL announced that this second threshold had been exceeded, as “nearly all of Germany’s major universities and research institutions are now participating”. Elsevier has now joined Wiley & Springer in the DEAL family, with very similar agreements focused on hybrid open access. But what does it mean from the point of view of German HER institutions? Let’s go back to Dr. Bernhard Mittermaier’s interview, in which he talks about his own institution’s costs and the national ones:

“Taken together, Jülich institutes will now save around € 100,000 per year on fees for hybrid open access that were previously paid to Elsevier. For Forschungszentrum Jülich as a company, the costs for Elsevier will even decrease by about 40 % than was the case under the former agreement, assuming publication figures remain the same. This corresponds to about € 300,000 per year that can be saved compared to 2018, the last year of our previous agreement with Elsevier. Elsevier’s fees per article are now much lower than they were in 2018 and similar to those charged by Wiley and Springer Nature. Compared to 2023, however, when hybrid open access, document delivery, and pay-per-view each cost around € 100,000, additional expenditure of € 200,000 will now be incurred.”

Let’s try to do the math (which does not add up), based on that paragraph, in the following table, with three reference years: the last year of the former (local) agreement, the shut-off period, and the first year of the new agreement.

Expenditures/Year | 2018 | 2021 | 2024
Total | 500,000€ / 600,000€ | 100,000€ | 300,000€

Forschungszentrum Jülich Elsevier expenditures.
The previous total cost is 500K€ if you follow the 40% reduction and 600K€ if you add up the total savings mentioned. Whatever the case, the new deal is far below the older ones, under which German institutions were known to pay “much more” than similar institutions in the Netherlands or France. Let’s now project the costs nationally:

Year | pre-2018 | 2021 | 2024
Expenditures | 70M-100M€, mostly in reading agreements | 5-10M€ max, in hybrid OA publishing? | 30-40M€, in the P&R agreement

Estimated Elsevier revenue for Germany.

The first figure was never made public, but I have heard estimations within this range. The second one is a very generous maximum, as OpenAPC counts between 1M€ and 1.3M€ for Elsevier in Germany for the years 2020 to 2022. The third one is based on the number of expected publications and the different fees defined in the agreement. So the savings were huge during the shutdown, and Elsevier probably lost at least 300M€ before resuming negotiations. And despite losing probably around 50% of its 2018 revenue, the company preferred to sign rather than leave almost all the money on the table.
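As a sanity check on those orders of magnitude, here is a small back-of-envelope script; the figures come from the table above, but the length of the shut-off (roughly five years, mid-2018 to the end of 2023) is my own assumption:

```python
# Back-of-envelope check of the estimates above. All figures in M€ per year,
# taken from the table; the ~5-year shut-off duration is my own assumption
# (mid-2018 to the end of 2023).
pre_2018 = (70, 100)    # mostly reading agreements
shut_off = (5, 10)      # hybrid OA publishing only, generous maximum
new_deal = (30, 40)     # Publish & Read agreement
years_without_deal = 5

lost_low = (pre_2018[0] - shut_off[1]) * years_without_deal   # (70-10)*5  = 300
lost_high = (pre_2018[1] - shut_off[0]) * years_without_deal  # (100-5)*5  = 475
print(f"Revenue lost during the shut-off: {lost_low}-{lost_high} M€")

share_low = new_deal[0] / pre_2018[1]    # 30/100 = 30 %
share_high = new_deal[1] / pre_2018[0]   # 40/70  ≈ 57 %
print(f"New deal as a share of pre-2018 revenue: {share_low:.0%}-{share_high:.0%}")
```

On these assumptions, the “at least 300M€” lost and the roughly 50% of 2018 revenue retained sit at the low and middle ends of the plausible range.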

While French institutions, for example, have made a major commitment to using some of the resources saved for OA initiatives, notably by replenishing the National Open Science Fund, this does not seem to be the case in Germany. The national research funder DFG has recently announced the launch of a Diamond OA publishing platform… with a maximum budget of 1.5M€ per year. I let you figure out what it could have been with just 30% of the money spared. So German HER institutions won a lot and Elsevier stalled, but the dependence on big commercial publishers has not been halted, and may even have been reinforced.

  1. Harald Boersma, Continued Elsevier access in support of German science, 13th February 2017 []
  2. Schiermeier, Q., Mega, E. Scientists in Germany, Peru and Taiwan to lose access to Elsevier journals. Nature 541, 13 (2017). https://doi.org/10.1038/nature.2016.21223 []
  3. “DEAL and Elsevier negotiations: Elsevier demands unacceptable for the academic community”, 5 July 2018, German Rectors’ Conference press release, https://web.archive.org/web/20181219162556/https://www.projekt-deal.de/elsevier-news/ []
  4. Haucap, J., Moshgbar, N., & Schmal, W. B. (2021). The impact of the German ‘DEAL’ on competition in the academic publishing market. Managerial and Decision Economics, 42(8), 2027–2049. https://doi.org/10.1002/mde.3493 []
  5. Fraser, N., Hobert, A., Jahn, N., Mayr, P., & Peters, I. (2023). No deal: German researchers’ publishing and citing behaviors after Big Deal negotiations with Elsevier. Quantitative Science Studies, 4(2), 325–352. https://doi.org/10.1162/qss_a_00255 []
  6. Olsson, Lisa, et al. “Cancelling with the world’s largest scholarly publisher: lessons from the Swedish experience of having no access to Elsevier.” Insights-The UKSG Journal 33 (2020). https://doi.org/10.1629/uksg.507 []
  7. Elsevier B.V., & MPDL Services gGmbH, Max Planck Society (2023). Projekt DEAL – Elsevier Publish and Read Agreement. doi:10.17617/2.3523659 []
  8. Quentin Dufour, David Pontille, Didier Torny. Contracter à l’heure de la publication en accès ouvert. Une analyse systématique des accords transformants. [Rapport de recherche] 206 150, CNRS; Comité pour la science ouverte. 2021, pp.81. ⟨halshs-03203560⟩ []
  9. I won’t get into some society journals excluded from the agreement here, either because they won’t go hybrid or because they thought they would not get paid enough by Elsevier. On the specific question of learned societies’ journals in such deals, see The Brief https://www.ce-strategy.com/the-brief/out-of-reach/ []
  10. Didier Torny. From paywall builders to data tracking moguls or… How the big publishers have put on a new super vilain costume. Politics of technoscientific futures, EASST, Jul 2022, Madrid, Spain. ⟨hal-03885480⟩ []
  11. Thanks to Björn Brembs for underlining this absence; see his plea for German institutions not to sign the new agreement https://bjoern.brembs.net/2023/09/no-evilsevier-deal/ []

Matilda is finally available… or how open academic search engines are a key part of open science

Matilda homepage, 6th October 2023.

There was a time, towards the end of the 20th century, when things were simple. If you just wanted to count the publications of an author, an institution or a country, you had to refer to the databases of the Institute for Scientific Information (ISI), created and directed by Eugene Garfield. The most famous of these, the Science Citation Index, was built on the idea of selecting the most relevant journals to capture the heart of science, in the already long tradition of library science. And these core journals were deemed sufficient to draw a relevant picture of the whole of scientific content. Taking the part for the whole raised many questions about the representativeness of the journals present, data and calculation errors, and biases in favour of certain disciplines, languages and countries, but as Margaret Thatcher said about her economic world: ‘There is no alternative’.

25 years later, commercial competition is fierce between Clarivate’s Web of Science, Elsevier’s Scopus and (almost) Springer’s Dimensions to capture the most money available from Higher Education & Research institutions. In another world, Google Scholar (GS) has woven its web, the only corporate service without advertising or direct tracking of usage. But these systems still have their drawbacks: the commercial databases are still exclusion machines, deciding what is “searchable” within the whole literature, while GS services are restricted in their uses (e.g. no massive downloads) and their sources are neither described nor open.

This is the landscape in which Matilda was created, thanks to Huma-Num and an ANR grant. If you want to know more about how it was envisioned in 2019, there is an “origins” paper1. For now, let’s get straight to the tutorial. The video below is all you need to use it: no API coding, no computer skills, just an idea of what you are searching for, as an academic or as someone eager to find academic sources.

Open citations at the heart, open data everywhere.

Matilda is one of the outcomes of the “open citations” movement. Originally, in 2010, this took the form of a reference data corpus, the Open Citations Corpus (Peroni et al. 2015), before these remarkable precursors2 were joined by various organizations demanding that publishers open the citation data they deposit in Crossref. The I4OC collective consequently obtained the availability, by default and under a CC0 license, of the whole Crossref citation database. But what to do with this pile of data? A number of tools, including VOSviewer, developed at Leiden University, use it. The hope, however, was that other actors would take these data and build services on this new shared resource. Like the OpenCitations databases3, these tools often presuppose professional users, either experts in API manipulation or people interested in very advanced bibliometric developments. Matilda took a different approach by making the simplest tool possible:

Follow an author, track the citations of a core text in your field, search for texts with a given expression in their title, download full metadata to Zotero, download a copy of the text if it is legally available, create an alert through a publicly available RSS feed, share it with your team through a Zotero group: all this and more with just a few clicks. It is free, and so are the results, which are reusable because the metadata has been liberated thanks to these activists and to the collective movement that followed, including publishers.

Almost real time: always get the freshest texts

Even if there is almost no literature on how academics practically search for their sources, we assume that, when they know their field, they are searching for new information, that is, texts that weren’t there yesterday but are available today. That has been the promise of many information devices, from the first academic journals to ISI Current Contents, from abstract and review journals to contemporary Scopus/WoS alerts.

Beyond openness, one of the promises of Matilda is to offer you this freshness by going to the sources, applying YOUR search keys and delivering the results to you in no time. In practice, that means that around 2 days after their creation in Crossref, RePEc, arXiv or PubMed, you will get the relevant metadata in your Zotero RSS feed. On average, around 40,000 new texts appear in Matilda, and some will probably interest you: you will discover the title, read the abstract and include them in your bibliography, while others will be rejected.
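For those who prefer scripts to a Zotero pane, here is a minimal sketch of how such a feed could be consumed; the feed URL is a placeholder, to be replaced by the public RSS address that Matilda gives you for a saved search:

```python
# Minimal sketch: reading a Matilda alert feed outside Zotero.
# The URL below is a placeholder for the public RSS address of a saved search.
import feedparser  # pip install feedparser

FEED_URL = "https://example.org/matilda/my-saved-search.rss"  # hypothetical URL

feed = feedparser.parse(FEED_URL)
for entry in feed.entries:
    # Standard RSS fields: title and link should be present, the date may not be.
    published = entry.get("published", "n.d.")
    print(f"{published} - {entry.title}\n  {entry.link}")
```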

What’s next and how you can help Matilda

The current version of Matilda is V2.0.2 and we have the funding to build V3 with plenty of new features, the most spectacular being full-text search: we will index every PDF found so that you can add these results to those based on metadata. We will also add Boolean operators for search (currently the default is OR). In the long run, the code will be available (everything is open source software) and the APIs will be open for direct reuse: for example, instead of an uncheckable “WoS citations” count, you will find a traceable “Matilda citations” one.

We are also thinking about adding new sources, such as aggregated online archives, as we wish to be as inclusive as possible, so that YOU choose what is relevant for your research, not US.

The ultimate aim is clear: offer an alternative to current WoS/Scopus users, so that their institutions stop paying millions for tools that were not made for lay researchers (the bibliometric uses of such platforms are debatable, though the growing Open Research Information movement could also push them into history). Show that we need to decolonize scholarly metadata, which has long been limited to 1/ journals 2/ with articles written in English 3/ from Global North scholars 4/ and especially those owned or disseminated by big publishers. Matilda also aims at providing an open alternative to Google Scholar, with open, traceable sources and enrichments, and no limitations on downloads and uses. As everybody knows, Google can decide to shut down a service in a day, so there is no guarantee that GS will exist in the long run.

What can you do to help develop and sustain this open science platform? First, talk about it, create and share links, go to your institution’s head and show them that they could invest in open science rather than funding capitalistic villains. Second, use it, test it, send us some feedback, good or bad, ask for features, explain what you need and expect from such a tool. Third, your IP addresses are not traced, but we do have an aggregated picture of RSS feed usage, so even by just using it, you will help us.

  1. Didier Torny, Laurent Capelli, Lydie Danjean, Stéphane Pouyllau. Matilda: Building a bibliographic/metric tool for open citations and open science. ELPUB 2019 23rd edition of the International Conference on Electronic Publishing, Jun 2019, Marseille, France. ⟨10.4000/proceedings.elpub.2019.22⟩. ⟨hal-02141839⟩ []
  2. Disclaimer: I have been a member of the Advisory Board of Open Citations on behalf of the French Open Science Committee since 2021 []
  3. see Heibi, I., Peroni, S. & Shotton, D. Software review: COCI, the OpenCitations Index of Crossref open DOI-to-DOI citations. Scientometrics 121, 1213–1228 (2019). https://doi.org/10.1007/s11192-019-03217-6 []

The sustainability argument or… how academic journals’ economic models never really last

The starting point for this post is an article from the Scholarly Kitchen in which, once again, the sustainability of Diamond journals, and here of the Subscribe to Open model, is questioned. This leads the author, Rick Anderson, to define sustainability:

“It’s a concept that gets invoked in many different contexts to mean a range of different things, but in this context its meaning is both basic and simple: a publisher’s business model is sustainable if it’s able to be sustained over time. […] What determines sustainability? For an ongoing and open-ended project like publishing, the baseline determinant of sustainability is simple: recurring, reliable revenue.”

This definition is interesting, though it stands on muddy ground: how do we define “recurring, reliable revenue”? What is the timeframe for judging reliability? My post will argue that there is no such thing as a stable business model, at least not over the long run. Moreover, if Anderson is right to question the future of S2O, the same questions should be asked of the much more lightly scrutinized “stable models”, starting with subscriptions.

Our present is not the continuation of the past: the short history of subscriptions

Over the three and a half centuries of scientific publication in journals, the economic relations between publishers of scientific outputs and their readers were far from stable. It was probably not until after the Second World War that the main relationships became those between publishers and academic libraries, on a national or international scale, not as part of a gift or exchange economy, but rather as a commodity.

As the number and budgets of libraries increased and the number of published journals grew fast, a short golden age of subscriptions began for journal producers, and notably commercial ones1. But by the 1970s, as budgets stagnated, harsh competition for libraries’ money was the first signal of what was later referred to as the serials crisis. This decades-long relationship, based on the sale of subscriptions and paper issues for each journal, was then profoundly transformed by the digitization of journals.

In the late 1990s, three major events took place in the contractual relationship between libraries and scientific publishers. Firstly, within a relatively short period of time after the inception of the World Wide Web, the largest publishers put online not only their entire contemporary catalogue, but also part of their archival material. Secondly, publishers began offering access not on a title-by-title basis, but to packages or bundles covering a long list or even all of their journals. Thirdly, to make this offer attractive, they favoured the emergence of library consortia which, by pooling their individual needs, could constitute clients interested in this new plethoric offer. The combination of these three events gave rise to a new form of standard economic agreement, the big deal2.

As a result, the subscription business model changed from an audience-centered model (libraries purchase what readers want, title by title) to a model centered on the size of the publisher (libraries buy the most extensive offerings), leading to a much stronger oligopolization through buyouts of publishers and changes of publisher by scholarly societies, very visible twenty years later.

Percentage of papers published by the five major publishers, by discipline in the Natural and Medical Sciences, 1973–2013.3

For most publishers – including self-publishing learned societies – subscription was only profitable for a short time and is not anymore. It is not sustainable, since it now implies the disappearance of their autonomy, or at least dependence on increasingly powerful players likely to act unilaterally on their revenues. And even for the largest publishers, the threat of non-renewal of Big Deals has grown stronger from 2010 onwards, whether through a sudden drop in financial resources (Greece) or through the choice to no longer pay for a service that does not meet the needs of libraries (United States) or open access demands (Germany, Sweden). It is in this context that Elsevier has started to brand itself as a data company, while new publishers are trying to make a new model last, based on Article Processing Charges.

The future will not be similar to the present: charging authors to the breaking point

Charging authors is not a recent business model: there have been many examples of vanity publishing, targeted at academia or beyond4. In the US, from the 1930s on, an alternative funding model had already thrived, as subscription revenues were considered too low. It targeted authors and their funders and was based on per-page charges, first in physics, then in other STM disciplines5. But it was with electronification that the idea of charging authors a lump sum, as opposed to a multitude of varying charges (colour charges, page charges, cover charges…), emerged, soon to be known as Article Processing Charges (APC). Some new publishers entirely adopted this new model, sooner or later being bought by legacy Big Publishers, like BMC by Springer or Hindawi by Wiley. But others have quickly become global Big Publishers themselves.

Retrospective statistics of the leading academic publishers in 20216

On selected Clarivate sources7, MDPI and Frontiers are now in the top 6 in published volume, while, added together, they represented less than a fifth of ACS, Sage or OUP a decade ago! From the point of view of these new big players, APCs are so sustainable that they create journals almost every week. For example, in 2021 MDPI launched 84 new journals and acquired only two existing titles. As Dan Brockington has shown in his comprehensive analysis of MDPI data, this growth also comes from the lowering of rejection rates:

“Now, some 45% of the MDPI journals I analysed, have rejection rates of below 40% (Table 2). Papers in these journals account for nearly 38% of revenues from publication fees (Table 3). Conversely, the journals with rejection rates of over 50% account for just over 25% of revenues. Measures of esteem, such as listing in the Web of Science, did not seem to make a difference to rejection rates. Average rejection rate for WoS listed journals was 42.7%, and for unlisted journals 41.6%.”8

The incentive for publishers to accept a manuscript under the APC model has been discussed for a decade, and its link to the growth of the vanity presses now dubbed “predatory publishers” is well established. Above and beyond what is often portrayed as a potential threat to the whole scholarly communication system, the APC business model is not sustainable from the authors’ and research organizations’ point of view. A large literature has consistently shown the rise of APC prices over time, whether for open access journals or for those relying on the hybrid model. Whether they name it “prestige pricing” or “market power”, researchers describe an ever-growing number of APC articles and a rise in individual prices9.

Proponents of market regulation will argue that each author will adjust his or her willingness-to-pay to the audience and the supposed quality of the journal; instead, we see the exclusion of authors for lack of funds or the sale of places in the byline to pay APCs. And, of course, the quasi-absence of success of such a business model in underfunded disciplines, like most HSS ones.

Sustainable for whom? The durability of Diamond journals

The two most visible business models for disseminating journal content are therefore not only at the mercy of default by their funders, but also unsustainable for readers, for authors and for their respective institutions. The fact that they constitute today’s largest expenditure items in scholarly communication should not be taken as evidence of sustainability through the capture of recurrent, reliable revenue. In research worlds subject to severe budgetary constraints and to the increasing visibility of expenditure lines, they are in fact the most threatened in their very foundations.

That being said, what are the alternatives? They are well known, and have been running in some corners of the global journal market for decades without any structural sustainability problem, despite being underfunded. Their landscape has been described in a comprehensive study, showing small-scale, non-profit, community-owned archipelagoes10. Far from APC-based megajournals and publishers with huge portfolios, this ecosystem is sustained by learned societies, universities, research organizations, some research funders, but also large-scale technical infrastructures, the most obvious being PKP’s Open Journal Systems.

While it is certain that more funding and more support from different institutions are needed11, thousands of journals – and dozens of dissemination platforms – have shown their reliability as they passed the test of time.

Yet, most don’t pass the “Anderson sustainability test” as they don’t rely on “revenue” but rather support and funding as they have not been comodified. Moreover, the support come from the exact same sources that pay, in a way or another, the “unsustainable publishing models” described above. So, they are obviously sustainable for authors and for readers, but also for these supporting institutions. Though they don’t have a unified business model12 -Subscribe to Open being the latest & adequate to already commidified journals, they seem to thrive, each of them at their low-scale, but with an agregated population still larger than APC journals. After almost three decades of existence, resisting to several “serial crises”, haven’t they earn the right not to be questionned on their sutainibility, but rather considered as one of the most secure ways to build a sustanaible scholarly communication system, allied with institutional archives?

  1. Fyfe, Aileen. “From philanthropy to business: the economics of Royal Society journal publishing in the twentieth century.” Notes and Records (2022); Aileen Fyfe, Noah Moxham, Julie McDougall-Waters, and Camilla Mørk Røstvik, A History of Scientific Journals: Publishing at the Royal Society, 1665-2015, UCL Press, 2022, chapter 14 []
  2. Frazier, Kenneth. “What’s the big deal?” The serials librarian 48.1-2 (2005): 49-59. []
  3. Larivière V, Haustein S, Mongeon P (2015) The Oligopoly of Academic Publishers in the Digital Era. PLoS ONE 10(6): e0127502. https://doi.org/10.1371/journal.pone.0127502 []
  4. see for a detailed history on books, Timothy Laquintano, The Legacy of the Vanity Press and Digital Transitions, The Journal of Electronic Publishing, Volume 16, Issue 1, Summer 2013, https://doi.org/10.3998/3336451.0016.104 []
  5. On the American Chemical Society example, see Noel, M. (2020). Back to disciplines: exploring the stability of publication regimes in chemistry: the case of the Journal of the American Chemical Society (1879–2010). Humanities and Social Sciences Communications, 7(1), 1-13; on the APS/AIP example, see Scheiding, T. (2009). Paying for knowledge one page at a time: The author fee in physics in twentieth-century America. Historical Studies in the Natural Sciences, 39(2), 219-247. []
  6. from Dan Brockington and from “Understanding the increasing market share of the academic publisher “Multidisciplinary Digital Publishing Institute” in the publication output of Central and Eastern European countries: a case study of Hungary” []
  7. That is much more restricted than Crossref, so more favourable to legacy publishers []
  8. Dan Brockington, MDPI Journals: 2015-2021, 10 November 2022, https://danbrockington.com/2022/11/10/mdpi-journals-2015-2021/ []
  9. see for example, Budzinski, O., Grebel, T., Wolling, J. et al. Drivers of article processing charges in open access. Scientometrics 124, 2185–2206 (2020). https://doi.org/10.1007/s11192-020-03578-3 []
  10. Bosman, Jeroen, Jan Erik Frantsvåg, Bianca Kramer, Pierre-Carl Langlais, and Vanessa Proudman. “The OA diamond journals study. Part 1: Findings.” (2021) 10.5281/zenodo.4558704 []
  11. see the recommendations from the mentioned study, Becerril, Arianna, Lars Bjørnshauge, Jeroen Bosman, Jan Erik Frantsvåg, Bianca Kramer, Pierre-Carl Langlais, Pierre Mounier, Vanessa Proudman, Claire Redhead, and Didier Torny. “The OA Diamond Journals Study. Part 2: Recommendations.” (2021), https://doi.org/10.5281/zenodo.4562790 []
  12. This diversity should be studied, as we will do in the ongoing European-funded Diamas project []

The fair price of an open access article or… how Nature relaunched a long lasting conversation

If you ask the open access community what happened in October 2003, chances are they will cite the Berlin Declaration as an important moment of consolidation of international mobilisation. At the same time, however, there was a large-scale attempt to charge authors for the publication of their open access research. Indeed, this was the time when the publisher BioMed Central announced the switch of all its journals to a then little-known financial model: Article Processing Charges. Let us take the example of two journals that relayed this announcement and discussed the price of the service:

Although some authors may consider US$525 expensive, it must be remembered that The Journal of Translational Medicine does not levy additional page or colour charges on top of this fee, which can easily exceed US$525. With the article being online only, any number of colour figures and photographs can be included, at no extra cost.

There is no remuneration of any kind provided to the Editors-in-Chief, to any members of the editorial board, or to peer reviewers; all of whose work is entirely voluntary. Although some authors may consider US$525 expensive, it must be remembered that Journal of Neuroinflammation does not levy any additional page or color charges on top of this fee. Because we are an online-only journal, any number of color figures, photographs, and ‘extra’ pages can be included at no extra cost. Such color and page charges, as assessed by more traditional journals, can easily exceed our flat US$525 per-article APC. Another common expense with traditional journals is the purchase of reprints for distribution, and the cost of these reprints is also frequently greater than our APCs. The Journal of Neuroinflammation provides free, publication-quality pdf files for distribution, in lieu of reprints.

Three elements emerge from these excerpts. Firstly, their similarity indicates that BMC supplied the wording used to justify this change in business model – the financing having previously had to rely on any source except the authors, in particular a support programme for research institutions. Secondly, the price is related to the costs of making content and formats available free of charge to readers. Thirdly, the novelty of the payment by authors is minimised in favour of an interpretation of continuity between page charges and article processing charges. Indeed, at least since the 1930s, in some disciplines, the authors’ contribution to publication costs – and not only to the cost of reprints for personal circulation – has been documented. And a vast majority of science journals were still levying such charges at the beginning of the 2010s1

This continuity is debatable, but the APC system put in place by BMC, like the one adopted at the official launch of PLOS Biology around the same time, is a partial legacy of these print-era practices. As in the past, only accepted items are invoiced, at a single “catalogue” price for defined services.

TO BE CONTINUED

  1. Curb, Lisa A., and Charles I. Abramson. “An examination of author-paid charges in science journals.” Comprehensive Psychology 1 (2012): 01-17. []

“You pay less, I earn more”… or how UC and Springer Nature made a seemingly win-win agreement

Win Win 306/365
CC-BY-ND Dennis Skley

And yet another agreement! While it was celebrated across the ocean as “the largest OA deal ever signed in the US” or a “milestone” for OA, we Europeans are now used to these “groundbreaking” contract announcements every other week. So much so that I have already written one post in March on the German Springer/DEAL agreement and another one in May on the Faustian Elsevier/Dutch consortium deal. All things come in threes, and for a good reason, as the Californians give us some food for thought on the financial side of their agreement.

First of all, it should be noted that the contract between Springer Nature (SN) and the University of California (UC) has not yet been written: only the Memorandum of Understanding (MoU) was made public this week1. This publication stems from a clear commitment by the universities to make the negotiation processes and the principles governing the choice between subscription, support or no deal transparent to their local academic communities, but also, more broadly, to all stakeholders interested in these issues.

As we are almost in the middle of the year, the fact that the agreement covers the years 2020 to 2023 has a first important consequence: all the mechanisms necessary for the identification of authors, for the various payments and for monitoring will probably not be in place before the end of the year (SN has committed to this by 1 January 2021). In practice, UC will pay in 2020 an undisclosed amount named “UC 2020 spend” for a Read & Publish in which the Publish part will be free of charge. It is only over the next three years that the mechanisms whose combined originality is at the heart of this post will appear.

The Multi-payer model.
Getting authors and funders involved

One of the originalities of this contract with Springer is the adoption of a model first experimented with in the UC/PLOS agreement: the splitting of an APC into two distinct blocks, the first 1,000 dollars being systematically paid by the university and the rest being paid by the authors if they have the means to do so. This mechanism smells like a device invented by economists, and indeed it is one of them, a professor at UC Berkeley, who describes its purpose in The Scientist:

“In the US, there already were multiple funding sources—libraries paid for subscriptions, and when authors wanted to publish open access, they paid a surcharge on top of that out of their funds,” says MacKie-Mason. “The key thing here is that we’re integrating those into a single contract. That creates cost control for the institutions and the researchers [during the transition to open access], which is critical because the cost of scholarly publishing has been exploding.”

So the solution to the “new serials crisis” would be to involve authors, as UC people have repeatedly stated2 – but aren’t they already involved with classical “one-shot” APCs? The idea of combining APCs with institutional support in a contract is here pushed to the limit, as we will see. In some “transformative agreements”, there is no way for a third party to understand who ultimately pays what and from which source, especially in consortia settings. Here, it is quite the opposite: throughout the whole MoU, a clear separation is made between two sources:

  1. The UC – whether the California Digital Library or UC itself – covers a $750,000 reading fee, $1,000 for each APC and, as we will detail, more if authors can’t pay. All of this is counted separately as “UC Fully OA Spend”, “UC Hybrid Spend” and, of course, the reading fee.
  2. The authors pay the “APC remainder”, whoever the original funder is, and these sums play a very limited role in the contract and are not aggregated under specific names (a small sketch of the split follows this list).
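For readers who prefer numbers to prose, here is a minimal sketch of that split as I understand it – the function and variable names are mine, and the example figure simply reuses the average hybrid APC quoted further down, so nothing here is taken from the contract itself:

```python
# Illustrative sketch (mine, not the MoU's wording): UC covers the first $1,000
# of each APC and, if the corresponding author has no research funds, the
# remainder as well.

def split_apc(total_apc: float, author_has_funds: bool) -> dict:
    """Return the UC share and the author share ("APC remainder") of one APC."""
    uc_base = min(1000.0, total_apc)        # UC always pays the first $1,000
    remainder = total_apc - uc_base
    if author_has_funds:
        return {"uc": uc_base, "author": remainder}
    return {"uc": uc_base + remainder, "author": 0.0}   # UC covers the rest

# Example with the $3,208 average hybrid APC quoted later in the MoU
print(split_apc(3208, author_has_funds=True))   # {'uc': 1000.0, 'author': 2208.0}
print(split_apc(3208, author_has_funds=False))  # {'uc': 3208.0, 'author': 0.0}
```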

So the splitting is not only made for each article but for the contract as a whole. The “cost control” supported by MacKie-Mason in fact operates only on the UC side: authors can spend whatever they wish on APCs and still benefit from the UC contribution. And since authors have to pay, they are given the possibility to opt out of OA in hybrid journals, even though OA is the default option. Consequently, the deal does not guarantee that all articles by UC corresponding authors will be OA, but only those whose authors wish so and, to some extent, are ready to pay, are favourable to hybrid journals, or are APC gold open access supporters. The division and the authors’ choice are highly visible in an exception in the contract: if, despite very short deadlines, SN were able to implement the entire workflow before the end of 2020, then it could start invoicing APCs. Under no circumstances would UC have anything to pay, but authors could be solicited:

Should Springer Nature implement the Multi-payer Model before January 1, 2021, Springer Nature may begin collecting the APC Remainder under the terms of the model […]. If the corresponding author does not have research funds available to cover the APC Remainder, then Springer Nature shall not collect an APC for those articles. No UC Fully OA or Hybrid Spend payments will be charged during this time (article 3.8.2).

It is hard to imagine a corresponding author who could get the APC waived deciding to pay, unless their grant is nearing completion and they cannot spend it otherwise. But this provision does indeed support the idea of two decoupled payers, as the rules applying to them may differ: the first (UC) not paying in 2020 before being obliged to contribute, the second remaining in a logic of choice throughout the contract. But what exactly are the amounts to be paid?

Price, Volume, Participation:
an equation to determine a Hybrid bill

The price calculation formulas are not yet complete, since the agreement is not signed, but the foreseeable variations over the whole contract are known. For full OA journals, there will be a base price in 2020, with a maximum increase of 3.5% per year. This base price is certainly not the catalogue price, since it is specified that “If at any time during the agreement the then-current list price APC is lower than the APC to be charged under the agreement, the current, lower APC will be charged instead” (art. 3.3). The issue of prices and volumes is most complex when it comes to hybrid APCs. First of all, unit pricing is almost constant, with the same prices in 2020, 2021 and 2022, and a maximum increase of 2% in 2023. But while the paid volume published in full OA appears unlimited, the paid volume published in hybrid journals is very constrained.

First, the number of articles published in hybrid journals by UC corresponding authors in 2019 and 2020 is calculated, and the smaller of the two values becomes the Base article number. The minimum volume of articles is then simply defined as 85% of this number, over time. The maximum number, on the other hand, depends on two variables: first, an “inflation” of the authorized volume of 5% per year; second, a calculation that depends on the effective participation of authors in the publication scheme. The parties expect that between 30% and 40% of authors will choose to publish in hybrid OA rather than revert to a paywalled publication (orange curve). If the programme is successful and more than 60% of authors adhere, the red curve defines the maximum number of articles; symmetrically, in case of failure – less than 30% – the yellow curve defines this maximum number.
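A partial sketch of this corridor, limited to what the paragraph above makes explicit (base number, 85% floor, 5% yearly inflation of the ceiling); the way the final maximum depends on the participation bands is only given as curves in the MoU and is not reproduced here, and all names and figures are mine:

```python
# Partial sketch: only the explicit parts of the hybrid volume corridor.

def base_article_number(published_2019: int, published_2020: int) -> int:
    # The smaller of the 2019 and 2020 hybrid outputs becomes the Base article number
    return min(published_2019, published_2020)

def minimum_articles(base: int) -> float:
    return 0.85 * base                       # floor: 85% of the base number

def inflated_ceiling(base: int, year: int) -> float:
    # 5% yearly "inflation" of the authorised volume, before the
    # participation-dependent adjustment shown in the MoU's curves
    return base * (1.05 ** (year - 2020))

base = base_article_number(1000, 1100)       # hypothetical UC hybrid output
print(minimum_articles(base))                # 850.0
print(inflated_ceiling(base, 2023))          # 1157.625
```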

In a fashion close to the agreement with DEAL, Springer defines a volume control on hybrid publishing, which can allow up to a third more published articles than the current hybrid APC volume. But the consequences of exceeding this limit are very different from the German counterpart: above the maximum, UC no longer pays its $1,000, and authors – if they so choose – must pay the APC remainder. At the other end, if the minimum is not reached, UC shall pay “the average hybrid APC for UC corresponding authors from the previous year for the number of articles necessary to bring the total to the minimum. In 2021, the average hybrid APC from 2019 ($3208) shall be used.” So Springer Nature is (almost) sure to get its money back, and UC has a control mechanism which prevents a sharp rise in its hybrid spend through volume control.

Hard capping the total costs.
Will UC pay less in the end?

Until now, it might seem that we are analysing yet another “cost-neutral” agreement that could in practice turn into a rising-cost contract: APC individual price inflation, unlimited payment for full OA articles and a controlled maximum rise of hybrid OA would all contribute to a larger bill for UC. Then comes the most original point of the UC/SN contract: a hard cap on the sum of these fluctuating bills. Some agreements, typically the JISC ones, include a price control that says “we will pay this, period”; of course, the trade-off is most often a defined, limited volume. Here, article 3.6 reads:

In each year of the contract, the Total UC Spend shall be subject to a fee control mechanism, as set out below. All fee control mechanisms are computed in relation to the license fees paid by UC for Springer journals, Adis Journals, Palgrave journals, and academic journals on nature.com in 2020 (“UC 2020 Spend”).

So the starting “subscription” – i.e. Read & Publish – price caps the whole cost of the contract, once again in a very precise and, shall I write, twisted way. Starting from the “UC 2020 spend”, in 2021 the total cannot exceed 95% of that sum: if it does, UC first recovers part of the reading fee and, if that is not enough, gets a refund from SN. So the maximum is clear: -5% compared to the starting year. But in 2022 and 2023, the total cannot exceed 98% of that sum, and if it does, UC only gets the reading fee back and nothing else. In other words, there is in fact no fixed maximum payment, and certainly no guarantee that UC will pay less in 2022 and 2023 than in 2020 – and, as we don’t know what the different bills were, even less so than 20193. The UC side is nevertheless very confident about the outcome, as the associate executive director of the California Digital Library, Ivy Anderson, stated: “The new agreement is expected to save the system money overall, but the exact cost will depend on the number of articles UC researchers publish”.
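Here is a rough sketch of that fee control mechanism as I read article 3.6 – the refund rules (unlimited in 2021, capped at the $750,000 reading fee in 2022-2023) are my interpretation of the MoU, and the spend figures are invented for the example:

```python
READING_FEE = 750_000  # USD, the reading fee listed earlier in this post

def effective_total(year: int, uc_2020_spend: float, computed_spend: float) -> float:
    """Total UC spend after the fee control mechanism of article 3.6 (as I read it)."""
    cap_share = 0.95 if year == 2021 else 0.98       # 95% in 2021, 98% in 2022-2023
    excess = max(0.0, computed_spend - cap_share * uc_2020_spend)
    if year == 2021:
        refund = excess                              # reading fee, then SN refund: a true cap
    else:
        refund = min(excess, READING_FEE)            # only the reading fee comes back
    return computed_spend - refund

# Hypothetical figures: a $10M "UC 2020 spend" and an $11M computed spend in 2023
# leave UC paying $10.25M, i.e. above the 98% threshold – hence "no fixed maximum".
print(effective_total(2023, 10_000_000, 11_000_000))  # 10250000.0
```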

Whatever the final outcome – and one can assume, given the complexity of the provisions, that the UC side has run many simulations on its final bill – there are three lessons to be learned from this MoU. First, in the absence of price transparency, it is difficult for outsiders to determine whether an agreement is really financially interesting or whether it mechanically leads, as with subscription formulas, to higher prices paid by higher education institutions. Secondly, this agreement builds a link between the payments made by authors and those made by the university: it therefore allows the direct inclusion of research funders, while ensuring traceability and monitoring of flows for each of the parties. It also contains incentives bearing on the behaviour of authors, who would benefit from using the UC workflow to partially or totally reduce their own payment. But it is the ability to capture money from funders, third parties to the contract, that is striking – certainly with Coalition S members in mind.

Thirdly, and consequently, the agreement de facto guarantees Springer’s revenues by encouraging new spending in the form of APCs and by subsidizing them. Making new provisions to turn the Nature journals into hybrids goes in the same direction. In a similar way to “Pure Publish” agreements that come with a discount on APCs, the UC agreement is a transformative one in that it explicitly turns universities from fund providers into fund collectors for publishers, with the hope of a diminishing or stable bill in exchange for that service.

  1. We saw in the Dutch case that there could be quite significant differences between an MoU and the actual contract []
  2. See this piece on Impact of Social Sciences LSE Blog []
  3. I previously wrongly tweeted that they would pay less, as I thought the reference was UC 2019 spending []

Faustus’ pact with Lucifer or… How Open Science comes to mean sustaining Elsevier’s data infrastructure in exchange for open access papers


“On these conditions following:
First, that Faustus may be a spirit in form and substance.
Secondly, that Mephistophilis shall be his servant and at his command.
Thirdly, that Mephistophilis shall do for him, and bring him whatsoever.
Fourthly, that he shall be in his chamber or house invisible.
Lastly, that he shall appear to the said John Faustus at all times,
in what form or shape soever he please.

I, John Faustus of Wittenberg, Doctor, by these presents do give both body and soul to Lucifer, Prince of the East, and his minister Mephistophilis, and furthermore grant unto them, that twenty-four years being expired the articles above written inviolate, full power to fetch or carry the said John Faustus body and soul, flesh, blood, or goods,
into their habitation, wheresoever. By me, John Faustus.

Faustus
CC-BY Bart Everson

The legend of Faust has known many versions, but that of Christopher Marlowe, quoted above, is no exception to the common rule: it is the absolute thirst for knowledge that drives the scientist to conclude this pact, while the evil or deceptive nature of Lucifer does not play a major part in its making1. So invoking this reference for the signing of an agreement between scholarly institutions, by definition producers of knowledge, and a publishing house, however powerful it may be and normally only responsible for disseminating that knowledge, may seem counter-intuitive. Yet, as we shall see, it is the reference that is required, as the relationship between the two parties may well end up inverted. With this new agreement, Elsevier will try to become the knowledge-producing entity, the one that will give these institutions and their authors the information they think they absolutely need.

From subscription to a Read & Publish pilot
to a full Publish & Read agreement

The relationship between the Dutch universities, represented here by SURFmarket B.V., and the publisher Elsevier is very old and has mainly consisted of the supply of journals, first as paper subscriptions, then through electronic access from the end of the 20th century until 2015. In March 2016, a new contract was signed that contained not only subscription services but also provisions for the open access publication of a limited number of articles, originally 3,600 over three years. This agreement was not as successful as expected: for example, 1,300 articles had not been “consumed” at the end of this first agreement. Nevertheless, from amendment to amendment – 7 in total – the contract was extended both in terms of the journals concerned (Cell Press) and in time, until 20 April 2020.

In contemporary classifications, this agreement could therefore be considered a Read & Publish, with a subscription fee and open access publications produced at no additional cost. The first parts of the new contract show a reversal of this logic by displaying a unified cost for all the services provided by Elsevier: reading is no longer separated from publication in the pricing, even though the provisions for the former are much more complex and run over many more pages than those for the latter.

Indeed, as is often the case in subscription contracts, numerous provisions govern the rights to access and read content, but also the duties of the publisher in terms of document supply and the scope of services. But, as we saw in the case of the Springer/DEAL agreement, the provisions on publication services can be relatively complex. This is not the case here: no financial exchange linked to each publication, no limit on the number of articles, no separation between publication in hybrid and in full open access journals – only two pages define the conditions of publication. Beyond the description of the workflow, one article should be highlighted:

Both parties are committed to reach 100% Open Access during the term of this Agreement, In line with this joint ambition, Elsevier offers Corresponding Authors the possibility to publish Gold Open Access in the widest possible range of Elsevier journals under the Terms of this Schedule 4. As per the effective date of this Agreement 95% of the journal articles by the Corresponding Authors are eligible to be published Open Access. For the remainder of the journal articles, Elsevier will continue to strive for sustainable immediate open access options across its journal portfolio to support the 100% Open Access goal.

As with a large number of technologies, lack of success is not necessarily an obstacle. Although, despite more than four years of possible publication under the previous agreement, only a fraction of Dutch authors had chosen this route, Dutch universities this time aim for 100% open access, and Elsevier promises them that almost all the journals it distributes will serve this end – while at the same time authorizing authors not to choose Open Access (p. 45), which pushes this objective of 100% OA for corresponding authors’ papers further away.

The whole scheme is close to the one signed by Elsevier and Bibsam, the Swedish consortium, after the Swedes spent almost two years with no deal. But they claimed, in a recently published article2, that they are actually paying less than before in total costs, while signing an agreement in which Swedish authors are almost mandated to opt for OA publication.

More services mean more costs

On the OA publication side, the Dutch contract is therefore not just a continuation of the previous one, since new journals are involved and technical provisions are made to publish “by default” in open access under a CC-BY licence. Moreover, the volume of publishable articles – even if it was previously never fully consumed – is now unlimited. This expansion of the service is accompanied by a sharp increase in costs. If we take the amounts listed in the various amendments to the 2016-2020 contract and add the new amounts, we obtain the following graph, quite different from the Swedish one3:

Over a “long period” (9 years), we therefore observe a 40% increase in costs, meaning an inflation of more than 4.3% every year. Far from the assertion of “cost neutrality” found in the OA2020 text of 2015 and in the initial hypotheses of Coalition S, the merely potential transformation of all Dutch publications into open access articles is therefore extremely costly in this case, and renews the observations of a serials crisis already made by SPARC 25 years ago. While the amount paid is constant between 2021 and 2024, there is no guarantee that it will not rise sharply again after the end of the current contract. Financial information was, unsurprisingly, completely absent from the press release, Dutch institutions touting the new agreement’s objectives as if they were already achieved:

NWO President Stan Gielen said: “Enabling Open Access to research results has been a core mission for NWO since 2003. This agreement is a giant step in our collective ambition to provide 100 percent Open Access for all publicly funded research in the Netherlands.”
NFU / CEO of Amsterdam UMC Hans Romijn, said: “This is definitely a game changing agreement in open access publishing in medicine from both national and international perspectives, considering the large impact and the volume of Elsevier journals. This will certainly contribute considerably to the advancement of research, and, most importantly, better treatments for our patients.”

The same assertions have been made over the last 10 years about the agreements signed by different consortia, highlighting the open access part of such deals. They are, however, very different from the “revolutionary idea” about data proposed by Elsevier in Autumn 2019. In fact, it was so revolutionary that it leaked out:

https://twitter.com/sarahderijcke/status/1190610725250764800

As Sarah de Rijcke, a distinguished science and technology studies scholar, underlines, Elsevier then tried to directly exchange open publications for data, continuing the Big Publishers’ strategy of investing in scholarly infrastructures in order to maintain their profits while adopting open access for publications4. That led to a public discussion of the ongoing negotiations and a VSNU communication denying that metadata and research data were being “sold” to Elsevier. In December 2019, a press release reaffirmed that data remained the property of the universities and that some principles had been adopted to avoid vendor lock-in. Let us now see how this has been dealt with in the final agreement.

Elsevier as a data company
and how you will be willing to pay for it

Apart from the introduction pages, one has to reach page 102 to find the provisions on data and the “Open Science Services for Research Intelligence and Scholarly communication” that are part of the agreement. The first two pages of this section describe the collaborative principles that were quoted in the December 2019 press release, which look very consensual:

  1. interoperability and vendor neutrality
  2. transparency, inclusion and collaboration
  3. access to research data and metadata
  4. data portability

If we add to this the common governance structure specified in the last pages and the fact that each party retains its data at the end of the agreement, this part of the agreement can be considered a true joint collaboration. Nevertheless, Mephistopheles hides in the details, and a full reading of the articles on page 104 underlines how much Elsevier now considers itself a data company. Firstly, by default, everything belongs to Elsevier, except what is directly “provided” by the institutions. Secondly, under no circumstances can intellectual property resulting from the development of services be shared. Thirdly, if a common intellectual property were to be created, a new agreement would be needed, in which Elsevier would have ownership and the institutions a free but non-exclusive right of use. Fourthly, all existing openly licensed data provided by the institutions are directly reusable by Elsevier. Fifthly, even in the absence of such data, Elsevier may develop equivalent or similar services with other partners. Finally, sixthly, if sensitive data or data belonging to third parties were to be included in the services, the responsibility would of course lie solely with the signatory institutions.

The contrast is therefore striking: on the one hand, Elsevier is (finally) ready to release the publications of all its journals under Publish & Read agreements in return for a fee; on the other hand, the publisher locks all the data and does not wish to share them under any circumstances, thus underlining how much they are now considered to be the real valuable object of the academic world5.

But what pilot services are implemented in the agreement? For the time being, and contrary to the subscription and open access publication services, none are specified. Only examples are given, in a table on page 103, reproduced in the FAQ and below:

  1. Aggregation and deduplication service based on CRIS systems – Improves findability and visibility of NL research outputs by aggregating and deduplicating separate CRIS systems into a Pure Community module available to all institutions, which can serve as a building block to a NL open knowledge base.
  2. NL Research data – Link research data from member institutes’ affiliated researchers in subject or domain specific repositories into a Dutch knowledge base.
  3. Funding information – Link NL research outputs to grants and funders (EC, ERC, NWO, RVO, ZonMw), to allow for improved tracking / assessment of impact of funded research.
  4. Health Data Management – Link NL health ‘data silos’ in a secure HDM platform.
  5. OA compliance as a service – A proposed service to better use knowledge base OA publication reminders, meet funder requirements, collect assets + reporting.
  6. Fair recognition and reward – A proposed service to integrate a wider array of metrics and success stories for a better, wider recognition of academics. Inclusion of teaching, society outreach, management, etc.

This list contains extremely different objects. Some look like pure IT services that could be provided by companies operating outside the academic world, building shared data infrastructures. Others are based on the crossing and enrichment of very specific academic data, and are therefore likely to further feed Elsevier’s databases, for example to build its own Open Science Monitor for diverse institutions. Finally, the last item on the list is quite staggering, since it is nothing more or less than the project of delegating to Elsevier a service for the individual evaluation of researchers, including, of course, open science dimensions.

Whether these pilots come true or not, this last part of the agreement underlines the extent to which it embodies a dystopian vision of Open Science, portrayed by Philip Mirowski as an extension of platform capitalism6. It strengthens Elsevier’s position as owner of scholarly infrastructure, provides the company with potential models for new services and organizes digital labour to enrich the data it already owns – all that while institutions continue to pay huge sums for access to its publications, in exchange for the “liberation” of a few thousand open access articles which will, of course, drive web traffic to its servers. Maybe the new services will never see the light of day and this agreement will just be another Publish & Read. But if not, Faustus will not only have increased his dependence on the publisher, but will have empowered it to the point where it becomes the real information provider in their relationship, publications being reduced to “raw data”.


  1. this post was cowritten by Quentin Dufour []
  2. Olsson, L., Lindelöw, C. H., Österlund, L., & Jakobsson, F. (2020). Cancelling with the world’s largest scholarly publisher: lessons from the Swedish experience of having no access to Elsevier. Insights, 33(1), 13. DOI: http://doi.org/10.1629/uksg.507 []
  3. EDIT: part of the rise could also be attributed to the inclusion of new Dutch institutions in the agreement []
  4. see this wonderful conference paper: Posada, Alejandro, and George Chen. “Inequality in knowledge production: The integration of academic infrastructure by big publishers.” 2018 []
  5. On a side note: it remains unclear whether article metadata will be released under a CC0 license in Crossref, continuing or not the anti-open citations Elsevier policy []
  6. Mirowski, Philip. “The future(s) of open science.” Social Studies of Science 48.2 (2018): 171-203. []

Making a transformative deal with DEAL or… How 51 pages of contract are needed to replace subscriptions

This post should not have come into existence. For a long time, “contracts” and “agreements” between publishers and higher education and research consortia were not only proprietary texts, but were also filled with confidentiality clauses that prevented them from being disclosed. This culture of secrecy is still there, as the agreement between Springer and DEAL states on its 45th page1.

Disclosure of agreement
It is Publisher’s position that the terms of this Agreement are proprietary, however the Parties have agreed in this case that the Agreement is placed under a Creative Commons CC-BY-ND 4.0 license and may be made public under this license.

Indeed, the pursuit of transparency accompanying the open access movement has led in recent years to disclosing these contracts, highlighting the very large financial sums involved in accessing scientific literature2. But beyond the figures, the nature of the contracts and their concrete provisions are little discussed, outside of limited circles, notably in library & information sciences3.

The purpose of this post is therefore to propose a first analysis of the structure of this agreement before focusing on its financial part, the most original one, which is supposed to drive the transition to open access. But first we need to describe the two partners of the agreement. On the publisher side, we have Springer, or rather Springer Nature Customer Service Center GmbH. In practice, this means an entity that covers not only Springer and Nature publications, but also BioMed Central and Palgrave Macmillan, i.e. more than 2,800 journals. On the customer side, it’s a bit more complicated: the negotiator is an intermediary, MPDL Services GmbH, which acts on behalf of Projekt Deal, a consortium initiated by the Alliance of German Science Organizations to negotiate nationwide transformative “publish and read” agreements with the largest commercial publishers of scholarly journals. The consortium structure therefore complicates the terms of the agreement, with Eligible Institutions that can become Members with associated rights and duties.

Before diving into the agreement, it is important to note how much the writing itself shows intensive interpretative work on its terms. As in any contract, the key terms are of course defined: “eligible articles”, “publishing services” or “open access license”, among many others. But one also finds in the agreement no fewer than 18 occurrences of “For the avoidance of doubt” and 48 of “For clarity” – redundancies aimed at limiting the ambivalence of written proposals and injunctions, and hints at the care taken by both parties to limit the risks generated by the agreement.

From a simple preamble
to a complex folded agreement

At first, things seem really simple, as the preamble states the common aim of the two organizations. In fact, they share the goal of the rise of Open Access publications in the BOAI sense, with its known advantages, and underline the scope of this agreement compared to previous ones.

The parties enter this contract with the goal to enable open access publishing of articles from German- funded researches in Springer Nature journals, to make these articles available to the public worldwide, and to provide access for German-funded researchers to most of Springer Nature content. At time of signing, the contract becomes the world’s largest transformative open access agreement, making it possible for over 13,000 articles annually from German-funded researchers to be made immediately available Open Access for use and reuse from the moment of publication, bringing the benefits of maximum visibility, increased usage and citations, and greater and broader impact to researchers across Germany.

Yet the summary of the agreement depicts a complex set of successive services, which highlights the concrete constraints of a “Publish and Read” agreement for such a large consortium. The actual starting date of the agreement is far away, since the institutions have in practice several months to adhere to the terms of the contract and to put in place the necessary infrastructures to carry it out. It is only from August 2020 that centralized funding for open access publishing will really kick in. However, researchers from affiliated institutions can already access Springer content. This paradox is resolved if one considers that the R&P agreement is in fact one contract which overlays four contracts between the parties, named as follows:

  1. Fully Open Access Publishing
  2. Hybrid Publishing
  3. DEAL Journal Archives
  4. Reading Access

Let’s start by looking at the last two, which are the simplest in financial terms. Reading Access (p. 31-41) defines the conditions of access to Springer’s content and provides for cases in which this service is discontinued – in particular non-payment in connection with the other components – but does not itself contain any financial elements. Reading is therefore provided free of charge for researchers at the member institutions of the DEAL project, as this deal is really a “Publish & Read”. The “DEAL Journal Archives” part (p. 27-30) is charged, but for a fixed sum of €3.75 million. It allows the “upgrading” of all institutions’ access to the journal back catalogue – a little over 3 million articles – and the constitution of a “dark archive” that can be used during and after the contract.

Still, there are some interesting articles in these parts, for example the fact that DEAL can tell Springer to cease reading access for Member institutions if these institutions fail to pay the DEAL operating entity (p. 32). We can also read that the English-language agreement is the one that prevails (p. 40); considering that both parties are German and that German law, in Heidelberg, applies in case of disagreement, this is very intriguing. Finally, contrary to the philosophy of Open Access, there are very strong limitations on the uses of the Archive and of current content: access, download and very strict usage in academic courses. In particular, text and data mining for a given Member institution is only authorized after an addendum is signed (p. 34). It is therefore clear that already closed content remains paywalled and that the transformative drive only applies to future publications.

Controlled Gambling
on future open access publishing

But how can this transformational aspect be translated into a contract? As we shall see, there is a form of gambling – with certain limits – carried out by both parties in the two contracts at the heart of the scheme, the Fully Open Access Publishing (p. 7-14) and the Hybrid Publishing (p. 15-26). The first has become quite standard – and very close to the contract signed by DEAL with Wiley at the beginning of 2019. It is a centralized payment system with corresponding author recognition and verification, sharing of metadata and financial reporting, all in exchange for some deduction on the price of APCs (p. 14).

For the purposes of calculation of the APC Rates, the list price increases for any Article Processing Charges under these Product Terms will not exceed 3.5% per journal title per year (“Cap”); increases will be calculated based on the 2020 list price.
For BMC and certain other Springer titles which are included in the Open Access Journals, Publisher will apply in addition to the Cap a 20% discount, the journals being eligible for such discount will be identified accordingly in the DEAL Journal List.

Price control is therefore very limited: although the reduction on the “public price” is not negligible, it can quickly be offset by the foreseeable inflation of the full OA APC costs charged by Springer. On the one hand, a price rise at the 3.5% limit is almost certain, given the “natural” evolution of APC prices; on the other hand, the current APC price insensitivity leads us to predict that the number of articles published in full OA APC journals will increase4. But this is precisely Springer’s gamble in signing this type of deal: quickly making up in article volume what it concedes in unit price. And this gamble is all the bigger here, given that its other source of income, under the Hybrid Publishing agreement, may fall in 2021, 2022, or 2023.
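To give an order of magnitude, here is a tiny sketch of the capped APC path described in the quote above – the €2,000 base list price is purely illustrative; only the 3.5% cap and the 20% discount come from the agreement:

```python
def max_apc(list_price_2020: float, year: int, discounted: bool = False) -> float:
    """Maximum APC chargeable under the cap, with the optional 20% discount for eligible titles."""
    capped = list_price_2020 * (1.035 ** (year - 2020))   # at most +3.5% per year on the 2020 list price
    return round(capped * (0.8 if discounted else 1.0), 2)

for year in (2020, 2021, 2022, 2023):
    print(year, max_apc(2000, year), max_apc(2000, year, discounted=True))
# e.g. 2023 -> about 2217.44 at the cap, about 1773.95 with the 20% discount
```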

That is the biggest surprise of this Springer-DEAL agreement. Reading the announcement of the agreement on January 9, 2020, one would have thought that this part of the deal would once again be a copy of the Wiley agreement: indeed, the fee5 of €2,750 for any research article in a hybrid journal published by Springer – the same amount as signed, without any volume limit, with Wiley – was communicated6. However, a very different expenditure scheme was accepted by both parties (p. 25), represented in the following image.

For the year 2020, the amount is based on a “Reference Value” (RF), defined as the product of the estimated number of articles to be published and €2,750, that is €26,125,0007. The RF does not change during the contract and therefore looks very much like a “subscription price” from Springer’s point of view. Nevertheless, the real price paid follows a complex scheme which only partly takes into account the actual number of articles published. In 2020, the minimum invoice is the RF and, if more articles are published, the price can go up to 5% more. In 2021, the invoice ranges from a minimum of 95% of the RF up to 10% more than the RF; in 2022, from 85% up to 20% more; and finally, at DEAL’s option, in 2023, from 75% up to 30% more.
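A minimal sketch of this corridor, under my own simplifying assumption that the “Calculated Total PAR Fee” is just the number of published articles times the €2,750 PAR fee (ignoring, as elsewhere in this post, the lower fee for Non Research Articles); the floor and ceiling shares come from the paragraph above, everything else is illustrative:

```python
PAR_FEE = 2_750                              # EUR per research article
RF = 9_500 * PAR_FEE                         # Reference Value: 26,125,000 EUR (9,500 estimated articles)
CORRIDOR = {                                 # (floor, ceiling) as shares of RF
    2020: (1.00, 1.05),
    2021: (0.95, 1.10),
    2022: (0.85, 1.20),
    2023: (0.75, 1.30),                      # at DEAL's option
}

def invoice(year: int, published_articles: int) -> float:
    """Yearly invoice: the calculated PAR fee, clamped to the year's corridor."""
    floor, ceiling = CORRIDOR[year]
    calculated = published_articles * PAR_FEE
    return min(max(calculated, floor * RF), ceiling * RF)

# 12,000 articles in 2022 would exceed the ceiling: the invoice stays at 120% of RF
print(invoice(2022, 12_000))                 # 31350000.0 instead of 33,000,000
```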

On the upper side of the RF, from Springer’s point of view, the risk is to publish “too many” hybrid OA articles. In such a situation, it would “miss” some revenue which would hypothetically have been generated by individual “Open Choice” APCs. From DEAL’s point of view, it is literally an insurance against the growing cost generated by the capture of publications by Springer journals8.

For the avoidance of doubt, Publisher will continue to publish Eligible articles even if the Upper Threshold is met or exceeded. Publisher will never charge any part of the Calculated Total PAR Fee exceeding the Upper Threshold, irrespective of the actual Calculated Total PAR Fee and/or number of Published Articles.

If we now look at the other side of the RF, the roles are reversed: the minimum invoice is an insurance for Springer that, if for whatever reason German authors don’t use the agreement to publish in hybrid OA, it still gets some value back now that reading is free of direct charge. From DEAL’s point of view, there is a risk of “paying for nothing”, which could be an incentive to push researchers towards hybrid OA, as it is “already paid for”, rather than towards the full OA route, discounted but limitless as far as costs are concerned.

How transformative is the DEAL deal?

We can point to four potential or actual transformations stemming from the agreement, which runs until the end of 2022 with an option at DEAL’s discretion for 2023. The first, obviously, is the construction of a demanding workflow to regulate all the exchanges of authorship, institutional and financial information, not only between Springer and DEAL, but also between the DEAL operating entity and the Member institutions. Indeed, as with other Publish & Read type contracts, the sums actually paid by research-intensive institutions will be much higher than in the past and, conversely, more teaching- or practice-oriented institutions will pay less. What is the cost of such a workflow for both entities? Is it easily scalable for other publishers and consortia? How will some institutions react to their growing costs?

Second, this agreement raises the issue of researchers’ enrolment in open access publishing, even if the money does not seem to come from their own pockets or grants in this case9. Will they agree to publish in hybrid OA? Will they, on the other hand, remain insensitive to the total cost of APCs? Will they assume the position of corresponding author more often than their foreign colleagues? What will the associated institutional policies be: more obligations to publish in open access or, on the contrary, a logic of individual choice? Answering these questions will make it possible to observe whether open access is indeed becoming the norm for German researchers in their publications at Springer.

Third, in direct connection with the previous transformation, the parties took calculated risks by signing this agreement. Springer may see its sales fall by between 15% and 20% in 2022 (APC discount at constant volume, minimum Hybrid Publishing price) in the event of failure with researchers, workflow problems or major disagreements within DEAL. Symmetrically, DEAL members risk a significant increase in the total price, with a maximum rise of 20% on the Hybrid Publishing price and an explosion of full OA APCs if production moves to these journals. Transformative action at constant cost, because there is “enough money in the system” as OA2020 stated in 2015, is therefore not at all guaranteed.

Finally, there remains the question of the state of things at the end of the contract. If all goes well in their view, DEAL will exercise the 2023 option, but what happens beyond that? And if they don’t, what will their negotiating power be? Will Springer be happy if both OA deals don’t have enough success to maintain its current profits? Will the “flagship journal” listed in the Wiley agreement be used to put some competitive pressure on Springer? Will Springer journals still be predominantly hybrid journals? Will the Coalition S ultimatum on the lack of funding for APCs in this type of journal in 2024 be credible? There is nothing in the agreement to answer those questions, and in particular there is no commitment from Springer to flip its journals at that point. So, contrary to the recent ACM Open Model, this agreement does not constitute an irreversible transformation to open access. If things go south, subscriptions could be back at the very heart of the next agreement.

  1. The agreement is available on the Projekt Deal dedicated webpage with its own DOI. Announced at the beginning of the year by both parties, the full agreement was discreetly added in mid-February. Thanks to Quentin Dufour for flagging this document []
  2. According to this presentation by the European University Association, more than one billion euros a year for its members, including 700 millions for journals []
  3. Typically the section “business models” of the Scholarly Kitchen website. []
  4. On these two points, see the remarkable article by Shaun Yong-Seng Khoo, “Article processing charge hyperinflation and price insensitivity: An open access sequel to the serials crisis.” Liber Quarterly 29.1 (2019). []
  5. Technically, it is not an APC as stated in the FAQ page: “different from an Article Processing Charge (APC), the PAR fee, paid centrally by participating institutions for each article to appear under the DEAL agreement, covers the cost of the open access publishing services rendered and, to a lesser degree, reading access in Springer Nature subscription journals.” []
  6. In the Wiley deal, if I understood it correctly, the baseline payment is guaranteed, unless it is shown that Wiley technically limits the actual publication of Hybrid OA; but there is no maximum limit for the payment of €2,750 per article []
  7. I do not go into detail here about the type of article and in particular “Non Research Articles”, the price of which is €917 []
  8. Notably by the shift of corresponding author from a foreign researcher to a German one. []
  9. The actual source of money for APCs is not addressed at all in the contract, it is probably part of DEAL’s internal financial mechanics which are not public to my knowledge []

The Coming of Age of Open Access (I) or… Where are the alternative journals 18 years after the BOAI?

For most of us, February 14th is Valentine’s Day; for open access activists and lovers, it is also the celebration of the BOAI anniversary. It was 18 years ago: they were sixteen, meeting in Budapest in December 2001. Far from agreeing on everything, they nevertheless co-signed a landmark declaration published on February 14th, 2002. 18 years later, it is the coming of age for Open Access, a time to look at what has been changed, redefined, gained and missed. To start with, we have to remember that the BOAI really defined open access as a virtually unlimited re-use of academic documents:

By “open access” to this literature, we mean its free availability on the public internet, permitting any users to read, download, copy, distribute, print, search, or link to the full texts of these articles, crawl them for indexing, pass them as data to software, or use them for any other lawful purpose, without financial, legal, or technical barriers other than those inseparable from gaining access to the internet itself. The only constraint on reproduction and distribution, and the only role for copyright in this domain, should be to give authors control over the integrity of their work and the right to be properly acknowledged and cited.” (BOAI, 14th February 2002)

They also put together what was largely separated before, what would soon be named the “green road” and the “gold road”. Nevertheless, contrary to popular belief and to the supposed “original version” of the BOAI reproduced in many copies on the web, they did not call them “self-archiving” and “open access journals”. Indeed, the text quoted above is not the true original version, but one slightly changed in the summer of 2002. The former one, which can be seen on the Web Archive, stated: “Open access to peer-reviewed journal literature is the goal. Self-archiving (I.) and a new generation of open-access alternative journals (II.) are the ways to attain this goal.” So, following the BOAI, we will deal with this coming of age in two successive posts: this first one will focus on alternative journals, the second one on the triumph of organized archives over self-archiving.

A revolution with no defined business model

How are these alternative journals described? Why and how are they alternative, and to what? The main answer is given in a long paragraph of the BOAI: it is our actual starting point, from which the history of these journals shall be analyzed. To ease the reading, we have divided it into three parts.

Second, scholars need the means to launch a new generation of alternative journals committed to open access, and to help existing journals that elect to make the transition to open access.

The two ways for journals to commit were already being practised at the beginning of the century. On the one hand, very early electronic journals, often run without a publisher on what we would now call a platform, didn’t go the subscription way and were established as free for readers. On the other, the 2001 PLOS letter/petition pushed publishers to change their ways and open their content, with lots of signatories but few positive answers apart from BMC. So the BOAI reminded them that they could still “flip” to open access. But what does that mean exactly?

Because journal articles should be disseminated as widely as possible, these new journals will no longer invoke copyright to restrict access to and use of the material they publish. Instead they will use copyright and other tools to ensure permanent open access to all the articles they publish. Because price is a barrier to access, these new journals will not charge subscription or access fees, and will turn to other methods for covering their expenses.”

The alternativeness doesn’t come from the way journals should be run (editorial boards, scope, peer review) but from their economic model. Journals are qualified as “alternative” because they should no longer rely on the ownership of content and on subscriptions as the main route to pay for journal expenses (and profit). More than that, they would have extra costs, as they have to maintain open access through time. With the vanishing of current and future revenues, on what should the new business model rest?

There are many alternative sources of funds for this purpose, including the foundations and governments that fund research, the universities and laboratories that employ researchers, endowments set up by discipline or institution, friends of the cause of open access, profits from the sale of add-ons to the basic texts, funds freed up by the demise or cancellation of journals charging traditional subscription or access fees, or even contributions from the researchers themselves. There is no need to favor one of these solutions over the others for all disciplines or nations, and no need to stop looking for other, creative alternatives.

This is probably the most important part of the whole BOAI declaration, besides the open access definition. Three main points should be retained. Firstly, the idea of a diversity of sources of income, in an optimistic vision of the means of financing a journal, undoubtedly fuelled by the success of the open source software movement. Secondly, this diversity is reinforced by the final sentence, which supports the absence of a “one best way” even once the exploration of possibilities has fully taken place. Thirdly, the idea of recourse to payment by authors is conceived as a last resort (“or even”). In other words, not only did the alternative economic model remain unclear and uncertain, but the author-pays proposal was not considered a priority at all.

The stories of 11 pioneers

But with such vagueness, who then ventured down the alternative route, and how did they sustain their journals once launched? On the website of George Soros's Open Society Institute, which funded the BOAI meeting, a very short list of 11 entities was then available, even though it was supposed to give only examples1.

11 journals

So, a first way to fulfill the promise made by the title of this post is to investigate the trajectory of these 11 journals, or rather publication websites, so great is their diversity. We will treat them in groups, according to their destiny:

  1. Still free-of-charge mathematics & computer science journals: Algebraic & Geometric Topology and Geometry & Topology, founded in 1997 and 2001 respectively, have always published open access articles and are still community-based journals, published by MSP, which displays a very strong anti-APC statement on its website. Documenta Mathematica is the first journal of the Elibm platform, founded in 1996, which acts as a repository for maths proceedings and journals, free of charge for readers and authors. JMLR was created in 2001 in an independence move by 40 members of the editorial board of Machine Learning, then owned by Kluwer; it runs at a cost of about $10 per article, a figure that always seems hard to believe from the outside, thanks to huge volunteer work, LaTeX, open source software, no fancy website and outsourced micro-publishing of paper versions with no financial exchange.
  2. Still owned by societies, but switched to APCs: The New Journal of Physics, founded in 1998, is now published by IOP with an APC of €1,630. It was part of some "offset deals" (Austria, UK) and is still one of the journals in the SCOAP3 agreement. The Journal of Insect Science, supported by the University of Arizona and launched in 2001, changed course with the death of its editor-in-chief in 2014; it is still owned by a society but published by Oxford with an APC of €1,176.
  3. Bought by Springer: Living Reviews in Relativity was founded in 1998 by a Max Planck institute; it published only reviews, which were "living" because authors could update them as new literature appeared. It was sold to Springer in 2015, which kept the same formula with, remarkably, no APC. The trajectory of BioMedCentral is probably well known to readers; let us just recall that it was founded in 2000, cosigned the BOAI through Jan Velterop, its then director, was the first "big" publisher to bet on APCs, and was finally sold by its owner, Vitek Tracz, to Springer in 2008.
  4. Popularizers of the APC and inventors of the megajournal: PLOS did not really exist as a publishing venue at the time of the BOAI. Its call/letter for open access the year before had drawn many signatures but almost no positive publisher response apart from BMC. But PLOS was already able to secure funds, cosigned the BOAI through Michael Eisen, and soon launched PLOS Biology and then, in 2006, PLOS ONE, the first megajournal, which climbed to more than 30,000 articles a year, invented new forms of peer review and supported article-level metrics against journal-based metrics. It also established the APC as a standard way to provide open access for large communities.
  5. The platform that used to promote open access among publishers: Highwire has never been a journal or a platform-journal, but rather a hosting service that develops tools and software for publishers. Founded in 1995 and based at Stanford University, it used to be the largest archive of free full-text science on Earth, with more than 2.4 million articles. Bought by an equity fund in 2014 (a minority share is still owned by Stanford), its "free texts" webpage stopped its count on 25 March 2015 and was no longer maintained after 2018.
  6. Terminated by its learned society: Psycoloquy had been launched and supported by the American Psychological Association, with Stevan Harnad at its helm, who translated into electronic form some of the features he had developed in his previous journal, BBS, notably open peer commentary. It stopped publishing new articles by 2002.

Other journals or platforms could have been listed as examples in early 2002. One can notably think of Scielo, which was already working very well in South America, while Erudit was growing in Quebec, as was Revues.org in France. But the BOAI was rather focused on STM and English-language journals, and its alternative journals were also located within a world already dominated by an oligopoly of big publishers that was to be changed or at least challenged. Despite these limitations, the 11 stories nevertheless show the diversity of actual trajectories, the adoption of economic models that had yet to be defined and implemented, and the embrace of the alternative by some big publishers.

From Gold to Diamond:
when the alternative remains alternative

Above and beyond these examples, what trends can be drawn from the last 18 years? We have to consider a wide range of moves by public policies, learned societies, universities & libraries, research funders and, of course, publishers in order to give a second answer to the title of this post. The first evolution is the coining of a phrase, soon after the BOAI: the open access journal, which replaced the "alternative" one.

Then, as with 4 of the 11 listed, we observed a massive rise of the APC model, starting from the pioneers BMC and PLOS. The idea that authors would agree to pay to publish could not be taken for granted, whether in principle or in practice, with questions about the accounting circuit, the actual source of funding (authors, labs, departments, universities…), price levels, etc. And still today, in some disciplines, being forced to pay puts a low-quality stamp on the output. The Wellcome Trust in the UK and the ERC programmes in the European Union played a huge role in experimenting with the payment of APCs through grants, which made them a "normal cost", especially in well-funded disciplines (biomedicine, physics…). Official UK public policy, after the Finch Report in 2012, also injected money to pay for APCs.

This not only fueled the growth of relatively new publishers (BMC and PLOS, but also MDPI, Frontiers and Hindawi), but also pushed "traditional" big publishers to adopt APCs and make their journals "hybrid", with "basic funding" from subscriptions and "extra funding" through APCs. Springer began its "Open Choice" program in 2007, a name that deeply reflects the liberal-market vision of open access. These two evolutions led to very harsh critiques of the whole Gold OA project: on the one hand, it raised the question of the birth of predatory publishing through APCs; on the other hand, hybrids meant double dipping and a deepening of the serials crisis.

Hybrid journals were conceived as transition tools towards open access, as the then director of SPARC Europe theorized them2. So these private and public policies of APC funding were conceived as a way to reach a tipping point after which open access, now renamed the "full open access journal", would prevail. How naive, wrote Richard Poynder in a recent essay3! Some powerful actors came to the same conclusion, so they have recently tried to impose radical changes in the funding of journals, most notably the Max Planck Gesellschaft in 2015, then the now famous Coalition S, whose aim is to accelerate the transition to open access by, in effect, killing the subscription model and securing a CC-BY license for authors' content. Does this sound familiar?

So we seem to have come full circle: almost twenty years later, the aim is still to get rid of the traditional economic model for journals and, to do so, to talk with big publishers in order to sign "transformative agreements". Open access has gone mainstream; Elsevier now even presents open access as a standard. Where change happened, it was more in the way journals are run, most notably open peer review4. But wait a minute: if the alternative has gone mainstream, where is the new alternative? In fact, the support for and success of the APC model gave many commentators and actors the impression that the Gold road was now equivalent to an author-pays model. That led some activists to coin new names for "no-APC journals". Whether Diamond or Platinum, these labels meant that such journals were also free for authors, not only for readers.

Scielo, Erudit and OpenEdition were already mentioned, as were 5 of our 11 pioneers. But we could add the Open Library of Humanities or Redalyc as "big platforms" for journals5. They are the majority, as no-APC journals still represent more than 70% of the entries in the DOAJ, and their business models are diverse, from bricolage to strong institutional support, just as the BOAI predicted. So the alternative is still alternative, though it has vastly grown over the last 18 years. As open access reaches adulthood, we will see whether OA journals keep coexisting in two genres, non-APC and APC, or whether one of them proves unsustainable in the long run. Unless, of course, the other open access road takes us into a post-journal world through preprint servers and open archives. To be continued…

  1. This list did not evolve much over the following two years: Highwire and PLOS were removed, while two MDPI journals were added []
  2. Prosser, David C. “From here to there: a proposed mechanism for transforming journals from closed to open access.” Learned publishing 16.3 (2003): 163-166. []
  3. Poynder, Richard. “Open access: Could defeat be snatched from the jaws of victory?.” (2019). []
  4. Which means lots of different things, see Ross-Hellauer, Tony. "What is open peer review? A systematic review." F1000Research 6 (2017). []
  5. Not to mention the ones for books which are catalogued into the DOAB. []

From sharing to versioning to citing to retracting or… How preprints became quasi-articles

The forms of communication in academic communities are very diverse: articles, seminars, books, colloquia, mailing lists, posters, letters, workshops, proceedings… The reasons for choosing each of them are multiple, and the formats live their own lives, acquiring new uses far beyond the initial intentions of their creators. As we will see, preprints, though they have a relatively short history, have followed complex patterns and have become something more than shared documents.

It is first necessary to agree on the designation of these entities: working papers, discussion papers, e-prints and preprints will be considered equivalent in this post. All are written texts, produced by authors without any form of certification, and made available without any paywall at a perennial web address. Unlike Wikipedia, we do not distinguish them on the basis of their future publication in a journal. We will also ignore the issue of their licensing, for two reasons: historically, preprints existed long before the release of CC licenses, and many of them remain unlicensed; pragmatically, because our focus here is on the use of preprints, not their reuse.

Before scholarly communication went electronic, some disciplines had already experimented with preprints, notably psychology and the biomedical sciences. Paper manuscripts then circulated by mail, with fairly high material costs (reproduction, stamps). Cost was not, however, the primary reason why some of these practices ceased: in biomedicine, publishers were vigilant and their editor-in-chief allies declared a ban on publishing manuscripts that had already circulated. Physics, especially high-energy physics, on the other hand, pioneered these practices and continued to generate these mail flows before transferring them to the electronic world in the 1970s. Taking advantage of the compactness of the TeX format, these preprints started to be distributed by email, and then Paul Ginsparg had the idea of building an automated BBS, essentially inventing ArXiv.

E-print servers as competitors to journals?

Until then, in all disciplines, usage had essentially been the same: to facilitate the consideration and discussion of recent research and results by circumventing the obstacle of publication delays in journals. Admittedly, a large number of conferences had adopted the practice of proceedings, thus reducing this delay, but they remained very largely attached to the world of paper printing. Following the success of ArXiv, several e-print services were launched in the mid-1990s, and Stevan Harnad predicted their pre-eminence over journals as the central venue for distribution:

“the best people start putting stuff and readers start saying: ‘Why wait for the journal to come out? I have to teach this stuff, I have to know this stuff, I can get it to the archive’ and then the libraries come around and say ‘should we order this journal?’ and the scientist says ‘I don’t care, I no longer read in paper’.”

It seems obvious that this prediction did not come true, far from it, and the 2000s saw a world divided between a few disciplines massively practicing preprinting (physics, mathematics, computer science, economics) and the rest of the academic world blithely ignoring it. Nevertheless, the uses of preprints, on both the authors' and the readers' side, started to compete with those of journals. In a peer-review-like fashion, the "raw" circulation of a manuscript for discussion regularly produced new versions of a preprint. On ArXiv, more than a third of preprints exist in 2 versions, and more than 10% in 3 versions or even more, as Hirsch's famous manuscript inventing the h-index shows with its 5 versions, 4 of them prior to submission to PNAS. On the readers' side, researchers soon started to cite not only published papers but also preprints (then often called e-prints) on a massive scale.

These new reading and referencing practices have led to a vast literature on the citation advantage of open access articles over those available only through subscription and its paywalls1. Beyond this possible advantage (monetized by big publishers for their hybrid journals in a commercial version of open access), these practices shed light on a change in status, from simple "manuscripts" to texts integrated into the published literature. To get them completely out of their grey-literature status, Paul Ginsparg proposed as early as 1996 to add overlaid information to preprint servers, which led on the one hand to the creation of overlay journals proper, and on the other hand to various recommendation devices for preprints, among other texts.

The accelerated life cycle of preprints

The "standardization" of preprints through citation or certification is not the only notable development. Indeed, the recent disciplinary extension of preprint servers, in what is often described as a second wave2, is a significant development and has consequences for their uses. Let us take the example of the life sciences, with the development of bioRxiv, a platform launched in 2013 that published 30,000 preprints in 2019.

From this video, put online at the time of the platform's inauguration, we will retain two elements: speed and discussion. If high-energy physicists, because of the weight of their infrastructures, work organization and authorship practices, are used to living in a world with little publishing competition3, this is not the case for many computer scientists who already published on ArXiv, especially in the artificial intelligence branch. There, flag-planting to establish priority and (theoretically) gain scientific credit has been a common operation, the server's timestamp acting as a certification of the order of arrival. If this speed also matters in the life sciences to avoid getting scooped, it should equally be considered in contrast with the slowness of journals: speed of publication has often been an argument for different outputs, and the tension between rapid dissemination and quality of certification is at the heart of the history of peer review in journals4.

For life scientists, and especially early-career researchers on short-term contracts, speed is less a question of priority than of simply getting their results circulated in order to build some credit for their next position. Before preprints, no publications meant no credit. Now they have at least something, especially since some organizations have recognized preprints as legitimate outputs in CVs for grant applications. Of course, they still need publication in journals, which leads us to the role of discussions. As we have seen in the case of ArXiv, discussions often feed a release cycle in the form of new preprint versions. In the life sciences, this is apparently much less the case: a recent study by Kent Anderson5 shows that the majority of preprints were posted after they had been submitted to a journal, so the "discussion", rather than taking the form of feedback from readers of the preprint, takes the form of peer review within a given journal.

From speed to emergency:
Preprints can be retracted too

At this point, we need to address the question of the audiences targeted by preprint servers: if they initially served purely intra-academic exchanges, things have changed with the popularity of social networks. Indeed, the Knowledge Exchange report cited above highlighted the crucial role played by Twitter in the dissemination of preprints by their authors or by the platforms themselves. This dissemination to fringe and non-academic audiences has several consequences, such as the reuse of preprints by marginalised communities or communities with minority knowledge and beliefs. This is also the case for links to blogs included in ArXiv trackbacks, for which it is very difficult to reach a consensus on the "serious" or "eccentric" character of a website6. If Anderson concluded that the promise of discussion was not kept within the platform in the case of bioRxiv, it doesn't necessarily mean that discussion is limited to journal peer review, as an unexpected event has just shown us.

In fact, the 2019-nCoV coronavirus has been a test for bioRxiv, as the platform found itself at the forefront of scientific information. Since the 2003 SARS virus, the international health community, strongly pushed by the WHO, has seemed to favour data and information sharing over scientific credit or patents. In recent epidemics, even the paywalls of big publishers have been opened in order to maximise the sharing of existing knowledge. Now that bioRxiv is firmly established, it is the easiest legal way to combine sharing, speed and some credit derived from priority7. And indeed, the preprint server has been flooded with coronavirus papers.

This new disclaimer (which, in the current case, spells out a general policy stated at the top of each preprint) emphasizes a potential audience of preprints: the media. For a long time, most senior life scientists have feared that uncertified preprints would be taken at face value and that a flow of "bad science" would reach lay audiences. Their strongest fears apparently came true, as an article suggesting the artificial nature of the current virus quickly fed conspiracy sites and feeds, "proving" that the epidemic could only be, at the very least, the result of a failed experiment. But the publicity given to a preprint is more ambiguous: as its links spread, it was severely criticized, in a very well-argued way, by colleagues. Moreover, bioRxiv is one of the few preprint servers with a comment feature attached to the preprints it hosts. And this paper received a lot of comments! So much so that the preprint was retracted less than two days after its publication, or more exactly the authors withdrew it following all these comments, whereas previously the retraction of a preprint had been envisioned only in cases where its published heir had already endured that exact fate.

The interpretation of this ultra-fast life cycle is of course contested: the creators of Retraction Watch see it as a victory for science in preprint mode, while K. Anderson and others consider that such an article would never have appeared in a top-level journal. But the outcome of this debate on the quality of journals versus preprint servers should not obscure the profound transformation of preprints. Harnad's vision began to become reality more than 20 years later, but in a twisted way. While preprint servers did not replace journals, preprints have become quasi-articles: they are used to establish priority, carry a DOI, generate some scientific credit, are read and cited, change through at least informal discussion processes, appear on CVs, are archived, and attract media interest. And now, even though they are pre-publications in name, they are subjected to the most stringent post-publication peer review decision.


  1. This literature is so vast and contradictory that Ben Wagner has made an annotated bibliography of it []
  2. See the very good synthesis funded by Knowledge Exchange: Chiarelli, Andrea, et al. "Accelerating scholarly communication: the transformative role of preprints." (2019). []
  3. In her groundbreaking 1988 book, Sharon Traweek stated that publications were not important to them, as they were only archives, record-keeping of the things that really matter []
  4. See Pontille, David, and Didier Torny. "From manuscript evaluation to article valuation: the changing technologies of journal peer review." Human Studies 38.1 (2015): 57-79. []
  5. “bioRxiv: Trends and analysis of five years of preprints.” Learned Publishing (2019). []
  6. see Ritson, Sophie. “‘Crackpots’ and ‘active researchers’: The controversy over links between arXiv and the scientific blogosphere.” Social studies of science 46.4 (2016): 607-628. []
  7. On the illegal side, activists have built a specialized archive based on Scihub. []

The Political Economy of Academic Publications

This blog is part of a broad research program on the political economy of scientific publication, which has been profoundly transformed over the last twenty years by the electronic dissemination of journals. It considers publishers, editorial boards and journals as socio-political actors, to be studied through the three complementary aspects detailed below.

Firstly, they are analysed as economic actors defining publishing markets. The conditions under which these markets were created have been the subject of much criticism, and strong transnational mobilisations around open access have developed, influencing the construction of public policies that differ markedly from one country to another. New economic models have emerged, of which direct payment by authors (APCs) is only the most visible, though not the most frequent. The multiplication of coloured labels (Green, Gold, Platinum, Bronze, Diamond) to designate these models does not fully account for their subtle differences, nor for the sustainability of the associated business models, compared to the classic subscription model, which has led to a "serials crisis" over the last 20 years, with a massive increase in the cost of access to publications for libraries.


Secondly, journals and publishers are studied as places of production, including innovations in evaluation technologies (open peer review, technical-soundness-based review…). In particular, the growing debate on post-publication peer review policies, including the withdrawal of articles, will be examined, as well as the emergence of platforms for the public discussion of article validity, such as PubPeer. The question of the centrality of journals for peer review, or of their marginalization (overlay journals, recommendations…), will also be addressed.


Thirdly, journals are treated as places of valorisation, seeking to attract authors and promote their position through the use of different measures (citations, referencing, usage…), which they highlight or criticise. In addition to the recurring debates on the Journal Impact Factor, a measure currently much decried, there will be discussions of alternative metrics, or even of responsible metrics, which are supposed to better represent academic production and its uses.


These three aspects aim in particular at shedding light on new forms of self-regulation by academic actors (systematic publicising of article withdrawals, generalisation of post-publication peer review, stigmatisation of predatory publishers, use of Creative Commons licenses…), on the innovative and argumentative work of publishers and platforms, whether public, para-public or private, and on the redefinition of public policies in the field of academic publication.