
Matilda is finally available… or how open academic search engines are a key part of open science

Matilda homepage, 6th October 2023.

There was a time, towards the end of the 20th century, when things were simple. If you just wanted to count the publications of an author, an institution or a country, you had to refer to the databases of the Institute for Scientific Information (ISI), created and directed by Eugene Garfield. The most famous of these, the Science Citation Index, was built on the idea of selecting the most relevant journals to capture the heart of science, in the already long tradition of library science. And these core journals were deemed sufficient to draw a relevant picture of the whole of scientific content. Taking the part for the whole raised many questions about the representativeness of the journals included, data and calculation errors, and biases in favour of certain disciplines, languages and countries, but as Margaret Thatcher said about her economic world: ‘There is no alternative’.

25 years later, commercial competition is fierce between Clarivate’s Web of Science, Elsevier’s Scopus and (almost) Springer’s Dimensions to capture the most money available from Higher Education & Research institutions. In another world, Google Scholar (GS) has woven its web, the only corporate service without advertising or direct tracking of usage. But these systems still have their drawbacks: the commercial databases remain exclusion machines, deciding what is “searchable” among the whole literature, while GS is restricted in its uses (e.g. no massive downloads) and its sources are neither described nor open.

This is the landscape in which Matilda was created, thanks to Huma-Num and an ANR grant. If you want to know more about how it was envisioned in 2019, there is an “origins” paper1. For now, let’s get straight to the tutorial. The video below is all you need to use it: no API coding, no computer skills, just an idea of what you are searching for, whether as an academic or as someone eager to find academic sources.

Open citations at the heart, open data everywhere

Matilda is one of the outcomes of the “open citations” movement. Originally, in 2010, it was a reference data corpus, the Open Citations Corpus (Peroni et al. 2015), before these remarkable precursors2 were joined by various organizations demanding that publishers open the citation data they deposit in Crossref. The I4OC collective consequently obtained that this Crossref citation data be made available by default under a CC0 license. But what to do with this pile of data? A number of tools, including the VOSviewer developed at Leiden University, use it. The hope, however, was that other actors would take this new shared resource and build services on top of it. Like the Open Citations databases3, these tools often presuppose professional users, either experts in API manipulation or people interested in very advanced bibliometric developments. Matilda took a different approach by making the simplest tool possible.
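To give a concrete idea of what “experts in API manipulation” means in practice, here is a minimal, purely illustrative Python sketch that fetches one work’s open reference list and citation count from the public Crossref REST API (the DOI is a placeholder to be replaced); this is the kind of script Matilda spares its users from writing:

    import requests

    doi = "10.1234/example"  # placeholder DOI, replace with a real one
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=30)
    resp.raise_for_status()
    work = resp.json()["message"]

    # citation count and reference list, opened by default thanks to I4OC
    print("Times cited (per Crossref):", work.get("is-referenced-by-count", 0))
    for ref in work.get("reference", []):
        print(ref.get("DOI") or ref.get("unstructured", "unparsed reference"))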

Follow an author, track citations of a core text in your field, search for texts with a given expression in their title, download full metadata to Zotero, download a copy of the text if it is legally available, create an alert through an RSS feed that is publicly available, share it with your team through a Zotero group: all this and more with just a few clicks. It is free and reusable, and so are the results, because the metadata has been liberated thanks to these activists and to the collective movement that followed, including publishers.

Almost real time, always get the freshest texts

Even if there is almost no literature on how academics practically search for their sources, we assume that, when they know their field, they are searching for new information, that is, texts that weren’t there yesterday but are available today. That has been the promise of many information devices, from the first academic journals to ISI Current Contents, from abstract/review journals to contemporary Scopus/WoS alerts.

Beyond openness, one of the promises of Matilda is to offer you this freshness by going to the sources, applying YOUR search keys and delivering the results to you in no time. In practice, that means that around two days after their creation in Crossref, RePEc, arXiv or PubMed, you will get the relevant metadata in your Zotero RSS feed. On average, around 40,000 new texts appear in Matilda, and some will probably interest you: you will discover the title, read the abstract and include them in your bibliography, while others will be rejected.
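For readers who prefer a script to a feed reader, such a public RSS feed can also be polled programmatically; a minimal sketch using the Python feedparser library (the feed URL below is a made-up placeholder, the real one being the public feed Matilda gives you when you save a search):

    import feedparser

    FEED_URL = "https://example.org/matilda/my-search.rss"  # hypothetical placeholder

    feed = feedparser.parse(FEED_URL)
    for entry in feed.entries:
        # each entry carries the metadata of one freshly harvested text
        print(entry.get("published", "no date"), "-", entry.get("title", "untitled"))
        print("   ", entry.get("link", ""))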

What’s next and how you can help Matilda

The current version of Matilda is V. 2.0.2 and we have money to build the V3 with plenty of new features, the most spectacular being full-text search: we will index every PDF we find so that you can add these results to those based on metadata. We will also add Boolean operators for search; currently the default is OR. In the long run, the code will be available – everything is open source software – and the APIs will be open for direct reuse: for example, instead of an uncheckable “WoS citations” count, you will find a traceable “Matilda citations” count.
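To illustrate what the planned Boolean operators change, here is a toy sketch (not Matilda’s actual code) contrasting the current OR behaviour, where any search term may match, with AND, where all of them must:

    # toy title list and search terms, purely illustrative
    titles = [
        "Open citations and open science infrastructures",
        "Tracking citations in the humanities",
        "Open peer review experiments",
    ]
    terms = ["open", "citations"]

    or_hits = [t for t in titles if any(term in t.lower() for term in terms)]
    and_hits = [t for t in titles if all(term in t.lower() for term in terms)]

    print("OR  matches:", or_hits)   # any term present (current default)
    print("AND matches:", and_hits)  # every term present (planned option)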

We are also thinking about adding new sources, such as aggregated online archives, as we wish to be as inclusive as possible, so that YOU choose what is relevant for your research, not US.

The ultimate aim is clear: offer an alternative to current WoS/Scopus users, so that their institutions stop paying millions for tools that were not made for lay researchers – the bibliometric uses of such platforms are debatable, though the burgeoning Open Research Information movement could also push them into history. Show that we need to decolonize scholarly metadata, which was for a long time limited to 1/ journals 2/ with articles written in English 3/ from Global North scholars 4/ and especially those owned or disseminated by big publishers. Matilda also aims at providing an open alternative to Google Scholar, with open, traceable sources and enrichments, and no limitations on downloads and uses. As everybody knows, Google can decide to shut down services in a day, so there is no guarantee that GS will exist in the long run.

What can you do to help develop and sustain this open science platform? First, talk about it, create and share links, go to your institution’s head and show them that they could invest in open science rather than funding capitalistic villains. Second, use it, test it, send us some feedback, good or bad, ask for features, explain what you need and expect from such a tool. Third, your IP addresses are not traced, but we do have an aggregated picture of RSS feed usage, so even by just using it, you will help us.

  1. Didier Torny, Laurent Capelli, Lydie Danjean, Stéphane Pouyllau. Matilda: Building a bibliographic/metric tool for open citations and open science. ELPUB 2019 23rd edition of the International Conference on Electronic Publishing, Jun 2019, Marseille, France. ⟨10.4000/proceedings.elpub.2019.22⟩. ⟨hal-02141839⟩ []
  2. Disclaimer: I have been a member of the Advisory Board of Open Citations on behalf of the French Open Science Committee since 2021 []
  3. See Heibi, I., Peroni, S. & Shotton, D. Software review: COCI, the OpenCitations Index of Crossref open DOI-to-DOI citations. Scientometrics 121, 1213–1228 (2019). https://doi.org/10.1007/s11192-019-03217-6 []

The short history of the h-index or… being one click away from determining whether you are a successful scientist

Sometimes, newness really happens in the academic world. Take bibliometric indicators: for at least a century they have usually been proposed and discussed by specialized scientists from a scientometrics background in their field journals (currently JASIST, Scientometrics, …) and nobody else cared, at least for some time. But in 2005, something very peculiar happened: the h-index was coined by a total stranger to that field and enjoyed instant success, which endures to this day. How did that happen, and why such a success?1

Physicist J.E. Hirsch proposed the h-index in a working paper posted on August 3rd, 2005 on arXiv, the famous open archive developed by physicists in Los Alamos. In his manuscript, he discussed the issue of comprehensively evaluating a researcher and wished, in a radical way, to subsume their whole career into a simple and practical measurement: an integer. To do so, he considered that production and its uses had to be taken into account through a citation measurement. So the number h is the greatest number for which h articles by an author have each received at least h citations. For example, for 5 articles cited at least 5 times, h is 5; likewise, for 50 articles cited at least 50 times each, h is 50.
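Since the definition is entirely algorithmic, a few lines of code reproduce it exactly; a minimal sketch (not Hirsch’s own code) applied to the examples above:

    def h_index(citations):
        """Greatest h such that h papers have each received at least h citations."""
        ranked = sorted(citations, reverse=True)
        h = 0
        for rank, cites in enumerate(ranked, start=1):
            if cites >= rank:
                h = rank
            else:
                break
        return h

    print(h_index([5, 5, 5, 5, 5]))   # -> 5, Hirsch's first example
    print(h_index([50] * 50))         # -> 50, the second example
    print(h_index([10, 8, 5, 4, 3]))  # -> 4, a mixed record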

Yet the use of algorithms concerning authors was not new. Eugene Garfield, the founder of the Institute for Scientific Information (ISI), claimed to regularly predict Nobel prizes using the Science Citation Index and had then developed the “ISI Highly Cited”, presenting results for a tiny fraction of “top” researchers; similarly, a few disciplines like economics and management had a long tradition of ranking “top authors”. But, to our knowledge, no algorithm had to that date been specifically designed to evaluate authors. This novelty also comes from the lack of consideration Hirsch displayed for the existing literature: there were only four references in his manuscript, and only one in the field.

This departure from the scientometric tradition enabled Hirsch to make several shifts. Firstly, he did not take into account the journals in which articles are published, probably because in high-energy physics journals are used to archive knowledge more than to make discoveries public. Secondly, he sidestepped the pitfalls of the number and order of coauthors, which are of little relevance in physics but crucial in biomedical research for individual evaluation. Thirdly, his index combined two elements considered heterogeneous in the scientometric tradition: production on the one hand, and use on the other. Fourthly and finally, whereas scientometricians are always very cautious about individual analysis and save it for “outliers”, Hirsch proposed a measurement which applies to all researchers and, as a cherry on the cake, argued that it would be of some use for the allocation of research funds.

How did he make such a bold move? Based on numbers crunched in the case of high-energy physicists, Hirsch forged a model of the “successful scientist”. As the result of his algorithm very much depends on the duration of a researcher’s career, Hirsch considered h divided by the number of years of that career to be a good indicator.


“An h index of 20 after 20 years of scientific activity, characterizes a successful scientist. […] an h index of 40 after 20 years of scientific activity, characterizes outstanding scientists, likely to be found only at the top universities or major research laboratories. […] an h index of 60 after 20 years, or 90 after 30 years, characterizes truly unique individuals”.
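Read against these thresholds, the career-normalised indicator (h divided by the number of years of activity, often called the m quotient) works out to roughly 1 for the “successful” scientist, 2 for the “outstanding” one, and 3 for “truly unique individuals”; a quick check:

    # m quotient = h-index / years of scientific activity, per the thresholds quoted above
    for h, years in [(20, 20), (40, 20), (60, 20), (90, 30)]:
        print(f"h={h} after {years} years -> m = {h / years:.1f}")
    # prints m = 1.0, 2.0, 3.0, 3.0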


At that point, Hirsch would probably have been considered a bibliometrics crackpot, a talented physicist who happened to crunch citation numbers in his spare time and post his “personal views” on a website.

An instant success,
an impressive series of implementations

To the surprise (and probably horror) of the scientometrics community, the h-index was taken up at a staggering rate. As soon as his manuscript became available on arXiv, Hirsch received extensive feedback from his physicist colleagues and, in view of the shared enthusiasm, an open archive specialized in high-energy physics, SPIRES (Stanford Physics Information REtrieval System), implemented the algorithm on its dataset only two weeks later. The same day, August 17th, 2005, Nature presented Hirsch’s proposition and highlighted his colleagues’ enthusiasm, while an editorial entitled “Rating Games” discussed the respective roles of metrics and peer review. Shortly afterwards, in November 2005, two of Hirsch’s colleagues had the manuscript published as an article2 in the Proceedings of the National Academy of Sciences, thus confirming physicists’ keen interest in this new measurement.

The popularization of the h-index took a new turn a year later with a bibliometric tool developed by Anne-Wil Harzing, Publish or Perish (PoP). In October 2006, this management professor at the University of Melbourne put online a small piece of software operationalizing the h-index calculation. That way, for any author whose name is entered by the user, irrespective of the discipline, PoP calculates their h-index in a single click, based on the nascent Google Scholar dataset. This tool could be downloaded for free and has thus allowed the magic algorithm to reach users far beyond audiences specialized in scientometrics. Researchers and institutions adopted this bibliometric tool so fast that the British Medical Journal published a spoof article describing the different pathologies it generates. Meanwhile, in May 2007 Elsevier included the h-index in Scopus; Thomson Reuters likewise changed its “ISI Highly Cited” and integrated the index into the WoS in 2008. Thus, in just three years, individual measurement became a practical operation drawing on bibliometric tools easily accessible to each researcher.

This cycle of implementation was provisionally completed by the opening of Google Scholar Citations (GSC) in the summer of 2011. With this new service, every academic could create and control her/his profile page on Google Scholar, and could decide whether or not to make it public. Whatever the choice, GSC would then automatically compute three metrics: the widely used h-index; the i10-index, which is the number of articles with at least ten citations; and the total number of citations to your articles. At this point, the definition of the h-index no longer needed to be spelled out, and thousands of academics quickly made their profiles available online. As Paul Wouters and Rodrigo Costas soon noted in their 2012 manuscript, this was a typical example of what they named “technologies of narcissism”: a mirror through which you and others would look in order to measure your influence, evaluate your importance, and worry about your deficiencies.
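The two companion metrics are just as mechanical as the h-index; a toy sketch of how they follow from the same list of per-article citation counts:

    def i10_index(citations):
        """Number of articles with at least ten citations (Google Scholar's i10-index)."""
        return sum(1 for c in citations if c >= 10)

    citations = [120, 45, 12, 10, 9, 3, 0]  # toy citation counts for one profile
    print("i10-index      :", i10_index(citations))  # -> 4
    print("total citations:", sum(citations))        # -> 199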

From journals to articles to authors:
new policies for evaluation

Then, despite professional scientometricians harshly criticizing the h-index for its crudeness, pointing out its variations and limits, or taming it into a g-index or a v-index, Hirsch’s utopian/dystopian vision had become reality. Conversely, it is its very crudeness that made it easier to implement in databases and more readable for lay researchers. But its availability is not sufficient to explain the duration and intensity of its uses – the original paper will probably pass the 10,000-citation mark in 2020. There are two main and very different reasons for its popularity, which we must analytically distinguish. The first one is the application of the algorithm to new objects, such as research groups or even journals. Confronted with its unexpected success and eager to use the newly available data, scientometricians got into an h-index frenzy3. While the dominant popular bibliometric index, both within the community and outside of it, had for 30 years been the Journal Impact Factor (JIF), then produced and owned by Thomson ISI, the growing success of the h-index made it a challenger in the bibliometric index “market”. Many papers compared the pros and cons of each algorithm, based on different datasets.
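As an aside, the variants were just as easy to implement; a sketch of Egghe’s g-index (the largest g such that the g most cited papers have together received at least g² citations), to compare with the h-index code above:

    def g_index(citations):
        """Largest g such that the top g papers together have at least g*g citations."""
        ranked = sorted(citations, reverse=True)
        total, g = 0, 0
        for rank, cites in enumerate(ranked, start=1):
            total += cites
            if total >= rank * rank:
                g = rank
        return g

    print(g_index([10, 8, 5, 4, 3]))  # -> 5, whereas the h-index of the same record is 4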

Nevertheless, this “good index” competition shouldn’t hide a second reason for which the h-index became so popular and discussed. Its use was sustained by a political agenda in assessment and evaluation, best represented by the San Francisco Declaration on Research Assessment (DORA) published in 2013. This complex text is often subsumed as an anti-bibliometrics statement, in which signing institutions promise they won’t use the JIF as a way to evaluate research, be it for hiring, promoting, giving grants, etc. Beyond this simple vision, there are more nuanced recommendations that oppose journal-based metrics but don’t reject bibliometrics as a whole. The JIF is seen as a bad way to perform quantified assessment, whereas article-level metrics, whatever they are (citations, downloads, views, social media mentions…), are seen as more representative of the “impact” of a given piece of research.

The problem with these new metrics is that nobody really knows what they are used for and what they really measure4. Consequently, the most popular article-level metric remains the number of citations in a given database (Web of Science, Scopus, Crossref, Google Scholar). Rather than summing these numbers for a given journal, the aggregation is made for a given author. It is so simple to perform on the same databases that what was absurd a few years ago has become natural. The “h revolution” therefore went beyond Hirsch’s own vision on two points. Firstly, whereas his h-index was meant for senior scientists, it is now also being applied to/by early and mid-career researchers as a more “ethical” way to judge their impact. Secondly, its extension goes hand in hand with a potential transformation of the model of scientific communication, towards a post-journal world in which any kind of text could be cited and counted. Rather than highlighting the fact that you actually passed the test of supposedly prestigious journals, you now just give your h number and academic age, so everybody can check whether you really are the successful scientist you pretend to be. ((Please cite selected papers of the author of this post so he may finally become one.))

  1. This post is partially adapted from Pontille David, Torny Didier, « La manufacture de l’évaluation scientifique. Algorithmes, jeux de données et outils bibliométriques », Réseaux, 2013/1 (n° 177), p. 23-61. DOI : 10.3917/res.177.0023 []
  2. There are almost no differences between the ArXiv manuscript from mid-August (V3) and the published PNAS paper []
  3. See, for an early literature review, Bornmann, Lutz, and Hans‐Dieter Daniel. “The state of h index research.” EMBO reports 10.1 (2009): 2-6. []
  4. See Haustein, Stefanie, Timothy D. Bowman, and Rodrigo Costas. “Interpreting ‘altmetrics’: Viewing acts on social media through the lens of citation and social theories.” On arXiv []