Matilda is finally available… or how open academic search engines are a key part of open science

Matilda homepage, 6th October 2023.

There was a time, towards the end of the 20th century, when things were simple. If you just wanted to count the publications of an author, an institution or a country, you had to refer to the databases of the Institute for Scientific Information (ISI), created and directed by Eugene Garfield. The most famous of these, the Science Citation Index, was built on the idea of selecting the most relevant journals to capture the heart of science, in the already long tradition of library science. These core journals were deemed sufficient to draw a relevant picture of the whole of scientific content. Taking the part for the whole raised many questions about the representativeness of the selected journals, data and calculation errors, and biases in favour of certain disciplines, languages and countries, but as Margaret Thatcher said about her economic world: ‘There is no alternative’.

25 years later, commercial competition is fierce between Clarivate’s Web of Science, Elsevier’s Scopus and (almost) Springer’s Dimensions to capture as much as possible of the money available from Higher Education & Research institutions. In another world, Google Scholar (GS) has woven its web, the only corporate service without advertising or direct tracking of usage. But these systems still have their drawbacks: the commercial databases remain exclusion machines, deciding what is “searchable” within the whole literature, while GS is restricted in its uses (e.g. no massive downloads) and its sources are neither described nor open.

This is the landscape in which Matilda was created, thanks to Huma-Num and an ANR grant. If you want to know more about how it was envisioned in 2019, there is an “origins” paper1. For now, let’s get straight to the tutorial. The video below is all you need to use it: no API coding, no computer skills, just an idea of what you are searching for, whether as an academic or as someone eager to find academic sources.

Open citations at the heart, open data everywhere.

Matilda is one of the outcomes of the “open citations” movement. Originally, in 2010, it was a reference data corpus, the Open Citations Corpus (Peroni et al. 2015), before these remarkable precursors2 were joined by various organizations demanding that publishers release the citation data they deposit in Crossref. The I4OC collective consequently obtained that the whole body of Crossref citation data be open by default, under a CC0 license. But what to do with this pile of data? A number of tools use it, including VOSviewer, developed by Leiden University. The hope, however, was that other actors would seize the data and build services on this new shared resource. Like the OpenCitations databases3, the existing tools often presuppose professional users, either experts in API manipulation or people interested in very advanced bibliometric developments. Matilda took a different approach by making the simplest tool possible.
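To give a concrete idea of what that “API manipulation” entails, here is a minimal sketch in Python querying the OpenCitations COCI REST API for the citations received by a single DOI. The endpoint path follows the OpenCitations public documentation, but the example DOI and the field handling are illustrative assumptions, not a description of Matilda’s internals.

```python
import requests

# OpenCitations COCI REST API: list the citations received by a given DOI.
# The DOI below is only an example; replace it with one from your own field.
doi = "10.1007/s11192-019-03217-6"
url = f"https://opencitations.net/index/coci/api/v1/citations/{doi}"

response = requests.get(url, timeout=30)
response.raise_for_status()

for record in response.json():
    # Each record is a dict; per the COCI documentation, 'citing' holds the
    # DOI of the citing work and 'creation' its publication date.
    print(record.get("creation", "?"), record.get("citing"))
```

Workable, but hardly “a few clicks”: this is exactly the entry barrier Matilda tries to remove.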

Follow an author, track citations of a core text in your field, search for texts with a given expression in their title, download full metadata to Zotero, download a copy of the text if it is legally available, create an alert through an RSS feed that is publicly available, share it with your team through a Zotero group: all this and more with just a few clicks. It is free and reusable, and so are the results, because the metadata was liberated thanks to these activists and to the collective movement that followed, including publishers.

Almost real time, always get the freshest texts

Even if there is almost no literature on how academics practically search for their sources, we assume that when they know their field, they are searching for new information, that is, texts that weren’t there yesterday but are available today. That has been the promise of many information devices, from the first academic journals to ISI Current Contents, from abstracts/review journals to contemporary Scopus/WoS alerts.

Beyond openness, one of the promises of Matilda is to offer you this freshness by going to the sources, applying YOUR search keys and delivering the results to you in no time. In practice, that means that around two days after their creation in Crossref, RePEc, arXiv or PubMed, you will get the relevant metadata in your Zotero RSS feed. On average, around 40,000 new texts appear in Matilda, and some will probably interest you: you will discover the title, read the abstract and include it in your bibliography, while others will be rejected.
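For readers who would rather script than click, a saved Matilda search boils down to a standard RSS feed, so any feed library can poll it. Below is a minimal sketch using Python’s feedparser; the feed URL is a hypothetical placeholder, since the real one is generated by Matilda when you create an alert.

```python
import feedparser  # pip install feedparser

# Hypothetical placeholder: Matilda generates the real feed URL
# when you save a search as an alert.
FEED_URL = "https://matilda.example.org/rss/your-saved-search"

feed = feedparser.parse(FEED_URL)
for entry in feed.entries:
    # Standard RSS fields: title, link and, when the source provides one,
    # a summary holding the abstract.
    print(entry.get("title"), "->", entry.get("link"))
```

The same URL is what you paste into Zotero’s feed reader to get the results directly in your library.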

What’s next and how you can help Matilda

The current version of Matilda is V. 2.0.2 and we have money to build the V3 with plenty of new features, the most spectacular being full-text search: we will index every PDF we find so that you can add these results to those based on metadata. We will also add Boolean operators for search – currently the default is OR. In the long run, the code will be available – everything is open source software – and APIs will be open for direct reuse: for example, instead of an uncheckable “WoS citations” count, you will find a traceable “Matilda citations” one.

We are also thinking about adding new sources, such as aggregated online archives, as we wish to be as inclusive as possible, so that YOU choose what is relevant for your research, not US.

The ultimate aim is clear: offer an alternative to current WoS/Scopus users, so that their institutions stop paying millions for tools that were not made for lay researchers – the bibliometric uses of such platforms are debatable, though the rising Open Research Information movement could also push them into history. Show that we need to decolonize scholarly metadata, which was for a long time limited to 1/ journals 2/ with articles written in English 3/ from Global North scholars 4/ and especially those owned or disseminated by big publishers. Matilda also aims at providing an open alternative to Google Scholar, with open, traceable sources and enrichments, and no limitations on downloads and uses. As everybody knows, Google can decide to shut down services in a day, so there is no guarantee that GS will exist in the long run.

What can you do to help develop and sustain this open science platform? First, talk about it, create and share links, go to your institution’s head and show them that they could invest in open science rather than fund capitalistic villains. Second, use it, test it, send us some feedback, good or bad, ask for features, explain what you need and expect from such a tool. Third, your IP addresses are not traced, but we do have an aggregated picture of RSS feed usage, so even by just using it, you will help us.

  1. Didier Torny, Laurent Capelli, Lydie Danjean, Stéphane Pouyllau. Matilda: Building a bibliographic/metric tool for open citations and open science. ELPUB 2019 – 23rd edition of the International Conference on Electronic Publishing, Jun 2019, Marseille, France. ⟨10.4000/proceedings.elpub.2019.22⟩. ⟨hal-02141839⟩
  2. Disclaimer: I have been a member of the Advisory Board of OpenCitations on behalf of the French Open Science Committee since 2021.
  3. See Heibi, I., Peroni, S. & Shotton, D. Software review: COCI, the OpenCitations Index of Crossref open DOI-to-DOI citations. Scientometrics 121, 1213–1228 (2019).

The absurd race for university rankings… or how publications are transformed into bad data

Lockdown or not, COVID-19 first wave in progress or over, universities open, teleworking or closed, it will still be out when August 15 comes. The ARWU Shanghai Ranking, like its cousins the THE Rankings and the QS World University Rankings and others of the same genre, has its season, its inflexible communication, its teasers, its sports-style announcements of the winners of the year and of the emerging stars.

The recurrent criticisms made against them and the evisceration of their methods by scientometricians and rankings specialists1 will not change anything in their imperturbable march. So why write about it? Not to sum up their four-decade history2, but because publications play a minor, though not negligible, role in them, and their successive transformations into bad data are an interesting case study. Let’s turn this midsummer nightmare into an ironic and fun bedtime story for academics, mostly thanks to some incredible French stories!

The tale of a French “emerging university”.
How to become ranked

The French HER system is incredibly ill-suited to these rankings. Indeed, not only do we have universities on one side and powerful research organisations on the other, but both share “joint research units” bringing together academics working for 2, 3, 4… up to 8 different employers. As a result, the signatures of scholarly publications are very long, contain multiple institutions for each author, and are subject to wide variation for a given lab or department.
Moreover, from 2007 onwards, laws have defined the framework for “new universities”, regrouping institutions on a rather geographical basis. We will follow here the example of PSL3 (“Paris Sciences & Lettres”), whose name gives a foretaste of its diversity, with 11 establishments and 3 research organisations. Although there is only one university – Paris Dauphine – among all these institutions, some were already taken into account by rankers, such as the École Normale Supérieure, very famous for its mathematics department. So how can they ensure that “PSL” becomes the brand name and that all academic outputs are counted under it?

The only solution is to change the signature rules, homogenize them, and then measure their practical implementation by reluctant researchers. This is all the more difficult as the grouping of institutions did not make them disappear: each one retains its staff, budget, premises and laboratories. Thus, it took years of negotiations for an agreement to be signed by all the institutions. Three years after the “simplification” of the byline defined in 2015, about 70% of publications were signed in accordance with the following affiliation model:

Institution Name, PSL University, [institute or department], Research Organization, [joint research unit number], University co-chair, Laboratory, [Team], [Address], Postal Code, Town, France

But this is not over, as they then have to convince the rankers and their underlying data sources (mostly Scopus from Elsevier and WoS from Clarivate) to pick “PSL University”, in second place in this long list of possible affiliations in the byline. For that, they used the knowledge and work of a “ranking optimization policy officer”, Daniel Egret, a former astrophysicist. This is no surprise: we have known for more than a decade that HER institutions react to rankings in different ways, trying to optimize their place if they think the goal is worth it4. Finally, like any other university, they would cherry-pick among all relevant rankings and boast about their amazing results on their website:

It’s somewhat ironic that PSL should congratulate itself on its first place among “new universities” when most of its institutions are centuries old, but PSL is just playing a game, pushing to the limit the “optimization” allowed by changes in the signatures of its affiliated researchers. However, as in fiscal matters, the line between “optimization” and “fraud” is thin, and the management of the Web of Science “Highly Cited Researchers” gives us two examples of this.

It’s the authorship, stupid!
From private gaming to alternative facts

On this blog, we have already encountered this Web of Science tool, in the case of KC Chou, a serial peer review hacker who finally got caught after he became one of these HCRs. At that time, like many, he had acquired a secondary affiliation: as Yves Gingras already noted six years ago, these “secondary institutions” tend to be concentrated in a few countries, notably Saudi Arabia. Through the combination of indices used by the Shanghai ranking, 20% of an institution’s score derives directly from the number of HCR researchers it hosts. In other words, it is no longer a question of directing publications one by one to the computing centre of a ranker, as PSL did, but of reassigning the most prolific research producers to a given university. In exchange for a given lump sum, the researchers in question will therefore “sell” their authorship, possibly on the articles themselves, but especially in their response to the Web of Science’s query about their affiliations. When HCR data became available, scientometricians analyzed it and came to the conclusion that secondary affiliations should be left out to avoid massive ranking manipulation5.
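To see why reassigning a handful of prolific authors can move a university by dozens of places, here is a stylized sketch of the HiCi indicator. It assumes the square-root scaling that scientometric reconstructions of ARWU scores have documented (e.g. by Docampo and colleagues); the 20% weight comes from the ranking’s published methodology, and all counts are invented for illustration.

\[
s_{\mathrm{HiCi}}(i) \;=\; 100\sqrt{\frac{n_i}{n_{\max}}},
\qquad
S(i) \;=\; \sum_k w_k\, s_k(i), \quad w_{\mathrm{HiCi}} = 0.2,
\]

where \(n_i\) is the number of HCRs affiliated with institution \(i\) and \(n_{\max}\) the count of the best-endowed institution. With \(n_{\max} = 100\), “buying” five affiliations to move from \(n_i = 4\) to \(n_i = 9\) raises the HiCi score from 20 to 30, i.e. two full points on the overall score – enough, in the tightly packed 200–300 band, to leapfrog dozens of institutions.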

The gaming had become too visible, forcing the ranker to change its methodology, stating that “only the primary affiliations of new Highly Cited Researchers are considered in the calculation of an institution’s HiCi score for the new list”. While this new method limits the direct interest of contractualisation between authors and institutions, it raises specific problems for French universities. Indeed, because of the multiplicity of affiliations described above, many authors employed by research organizations indicate to the Thomson Reuters/Clarivate/Web of Science group that their main affiliation is CNRS, INSERM or INRA, which are not ranked. As a result, the research-intensive universities, united under the umbrella of the CURIF, lobbied extensively to get these full-time researchers to indicate that University X is indeed their first affiliation and not the secondary one. After a long struggle, they obtained satisfaction in 2019, as the Higher Education and Research Minister herself, Frédérique Vidal, signed a letter asking HCR researchers to pick their associated university rather than their own organization. This move has only one purpose, as detailed by Daniel Egret in the French economic press:

“The astrophysicist predicted a gain of 84 places for the University of Lorraine, 57 places for Toulouse-III and 26 places for Montpellier. The most “spectacular” effect of the new affiliation rules will mainly concern universities that are between the 200th and 300th place in Shanghai, according to Daniel Egret, because, at this level, “the scores are very close between universities, which can make them gain several dozen places”. The effect will be less visible for universities in the top 100, where the scores are “quite scattered”. Paris-Saclay would gain 3 places, and Sorbonne University, 5 places.” (Les Echos, 28th February 2019)

There are no hidden practices, secret arrangements or small manipulations here, as in the data on former students’ salaries or employment levels reported by US management schools, or even by UC Berkeley. It is simply an open admission of fudging the data in order to skew rankings deemed so unfavourable to French universities, and thus of producing alternative facts. Yes, these publications exist, they have been authored by Mr. X and Ms. Y, they just happen to work for a different organization, who cares?

Who are the users of these rankings?
Sold and actual markets

This example shows us how ambiguous university rankings are as a tool, not only because the measures they aggregate may be biased, false, or even falsified, but in their very objectives, whether on the side of their creators or of their multiple users. Indeed, it is common knowledge that the purpose of the mergers of French institutions was to move them up in the rankings, particularly that of Shanghai, even if the calculable effects were far from certain6. The reassignment of authorship is only the latest adjustment, after signature changes, to obtain good rankings through publications. Other strategies, such as lobbying voters in reputation surveys, follow the same logic. But why, how and to whom is it useful to be “top ranked”?

In theory, and in the rankers’ public discourse, rankings are supposed to inform students’ choice of a higher education institution. Supported by an audit society vision and by the idea that something like a global market for universities actually exists, rankers are supposed to be third-party certified information providers. This leads to two questions: on the one hand, is this information useful to students; on the other hand, to which other stakeholders could it be useful? The first question is not easy to answer, as Ellen Hazelkorn has recalled: the existing literature shows no clear sign of massive use by students to choose their institution, except in very specific cases like some US schools7. I would add the absence of testimonies and anecdotal evidence in rankers’ ads and communication: you would never see videos of Jim, Liu or Penelope explaining how they picked their wonderful university by examining rankings. Or, when rankers happen to make one such video, like THE below, the sound is so unprofessional that it is embarrassing and, of course, the students barely mention rankings as a factor in their decision. In fact, rankers probably don’t care that much about students; their target is elsewhere.

If students are not the focus of the rankers’ attention, who are the rankings for? At least three types of users and uses can be listed:

  1. content that is easy for the media to publish and comment on
  2. direct objectives for the universities themselves and for policymakers
  3. sources of revenue for the rankers themselves

Without dwelling on sites specialising in this form, such as Buzzfeed, Topito or WatchMojo, the general media have, since the beginning of the 21st century, integrated the dissemination of rankings produced by third-party “rating agencies” on objects as diverse as holiday resorts, hospitals, public personalities or television series. So why not universities? More than complex information, the description and commentary of a ranking are easy to produce and can be appropriated by large audiences. Conversely, the lack of popularisation of the modular European U-Multirank shows that a ranking must remain simple and “objective” in order to be widely disseminated.

We have already mentioned the study of the effects of rankings on the ranked organizations themselves. In addition to the question of optimisation and fraud, changes have been observed in the recruitment and evaluation of academics, in the self-presentation of universities, and in the construction of coordinated regional, national and supranational policies on the basis of indicators8. Whether these transformations are mere window dressing or have a profound impact on universities is a subject of debate in many articles. As far as publications are concerned, China has proved to be a particularly fertile ground for observing the effects of rankings9.

Finally, we should conclude on something that is both obvious and not often discussed. The first stakeholders interested in rankings are the rankers themselves. They not only organize conversations around their (free) productions and sell themselves as certified audit firms, but also pave the way for service markets. Indeed, rankers offer universities services such as detailed rankings, training and help to get better ranked, and communication or recruitment services. Global rankings are less a market for students or universities than one for service providers. You can mine for gold or you can sell pickaxes.

  1. As an example, see Billaut, Jean-Charles, Denis Bouyssou, and Philippe Vincke. “Should you believe in the Shanghai ranking? An MCDM view.” Scientometrics 84.1 (2010): 237-263.
  2. A short version can be read here: Kehm, Barbara M. “Global university rankings – impacts and applications.” Gaming the Metrics (2020): 93.
  3. Disclaimer: PSL is my “official affiliation” for publications.
  4. See this seminal article: Espeland, Wendy Nelson, and Michael Sauder. “Rankings and reactivity: How public measures recreate social worlds.” American Journal of Sociology 113.1 (2007): 1-40.
  5. Bornmann, Lutz, and Johann Bauer. “Which of the world’s institutions employ the most highly cited researchers? An analysis of the data from highlycited.com.” Journal of the Association for Information Science and Technology 66.10 (2015): 2146-2148.
  6. See Docampo, Domingo, Daniel Egret, and Lawrence Cram. “The effect of university mergers on the Shanghai ranking.” Scientometrics 104.1 (2015): 175-191.
  7. See Hazelkorn, Ellen. “The impact of league tables and ranking systems on higher education decision making.” Higher Education Management and Policy 19.2 (2007): 1-24.
  8. See for example Stack, Michelle. Global University Rankings and the Mediatization of Higher Education. Springer, 2016.
  9. Xu, Xin. “Performing under ‘the baton of administrative power’? Chinese academics’ responses to incentives for international publications.” Research Evaluation 29.1 (2020): 87-99.