Lockdown or not, COVID-19 first wave in progress or over, universities open, teleworking, or closed, it will still be out when August 15 comes. The ARWU Shanghai Ranking, like its cousins the THE and QS World University Rankings and others of the same genre, has its season, its inflexible communication, its teasers, its sports-style announcements of the year's winners and emerging stars.
The recurrent criticisms made against them and the evisceration of their methods by scientometricians and ranking specialists1 will not change anything in their imperturbable march. So why write about them? Not to sum up their four-decade history2, but because publications play a minor but not negligible role in them, and their successive transformations into bad data are an interesting case study. Let's turn this midsummer's nightmare into an ironic and fun bedtime story for academics, mostly thanks to incredible French stories!
The tale of a French “emerging university”
How to become ranked
The French higher education and research (HER) system is incredibly ill-suited to these rankings. Indeed, not only do we have universities on one side and powerful research organisations on the other, but both share “joint research units” bringing together academics working for 2, 3, 4… up to 8 different employers. As a result, the signatures of scholarly publications are very long, contain multiple institutions for each author, and are subject to wide variation for a given lab or department.
Moreover, from 2007 onwards, laws have defined the framework for “new universities”, grouping institutions together on a largely geographical basis. We will follow here the example of PSL3 (“Paris Sciences & Lettres”), whose name gives a foretaste of its diversity, with 11 establishments and 3 research organisations. Although there is only one university – Paris Dauphine – among all these institutions, some were already taken into account by rankers, such as the Ecole Normale Supérieure, very famous for its mathematics department. So how can they ensure that “PSL” becomes the brand name and that all academic outputs are counted under its name?
The only solution is to change the signature rules, homogenize them, and then measure their practical implementation by reluctant researchers. This is all the more difficult as the grouping of institutions did not make them disappear: each one retains its staff, budget, premises and laboratories. Thus, it took years of negotiation for an agreement to be signed by all the institutions; three years after the model was defined in 2015, about 70% of publications were signed in accordance with it. A “simplification” of the byline was carried out, fashioning the following model of affiliation:
Institution Name, PSL University, [institute or department], Research Organization, [joint research unit number], University co-chair, Laboratory, [Team], [Address], Postal Code, Town, France
But this is not over, as they then have to convince rankers and their underlying data sources (mostly Scopus from Elsevier and WoS from Clarivate) to pick “PSL University”, in second place in this long list of possible affiliations in the byline. For that, they used the knowledge and work of a “ranking optimization policy officer”, Daniel Egret, a former astrophysicist. This is no surprise: we have known for more than a decade that HER institutions react to rankings in different ways, trying to optimize their place if they think the goal is worth it4. Finally, like any other university, they would cherry-pick among all relevant rankings and boast about their amazing results on their website.
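To see concretely what is at stake in that second position, here is a minimal, purely hypothetical sketch of how a ranker's pipeline might extract candidate institutions from such a byline. The mapping table and the "first match wins" rule are assumptions made for illustration only; Scopus and WoS run far richer affiliation-disambiguation systems.

```python
# Purely illustrative: how a ranker's pipeline might extract candidate
# institutions from a byline, and which one gets the credit. The mapping
# table and the "first match wins" rule are hypothetical, not actual
# Scopus/WoS internals.

KNOWN = {
    "Ecole Normale Superieure": "Ecole Normale Supérieure",
    "PSL University": "Université PSL",
    "CNRS": "CNRS",
}

def candidates(byline: str) -> list[str]:
    """Return all recognised institutions, in byline order."""
    return [KNOWN[s.strip()] for s in byline.split(",") if s.strip() in KNOWN]

byline = ("Ecole Normale Superieure, PSL University, Departement de mathematiques, "
          "CNRS, UMR 8553, 75005 Paris, France")

found = candidates(byline)
print(found)      # ['Ecole Normale Supérieure', 'Université PSL', 'CNRS']
print(found[0])   # a naive "first wins" rule credits the historic school
# What PSL negotiated is precisely that "Université PSL" be preferred
# among the recognised candidates, despite its second position.
```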
It’s somewhat ironic that PSL should congratulate itself on its first place among “new universities” when most of its institutions are centuries old, but PSL is just playing the game by pushing to the limit the “optimization” allowed by changes in its affiliated researchers’ signatures. However, as in fiscal matters, the line between “optimization” and “fraud” is tenuous, and the management of the Web of Science’s “Highly Cited Researchers” (HCR) list shows us two examples of this.
It’s the authorship, stupid!
From private gaming to alternative facts
On this blog, we have already encountered this Web of Science tool in the case of KC Chou, a serial peer-review hacker who finally got caught after he became one of these HCR. At that time, like many, he had acquired a secondary affiliation: as Yves Gingras noted six years ago, these “secondary institutions” tend to be concentrated in a few countries, notably Saudi Arabia. Indeed, through its combination of indices, 20% of the Shanghai ranking derives directly from the number of HCR researchers at the university in question. In other words, it is no longer a question of directing publications one by one to a ranker’s computing centre, as PSL did, but of reassigning the most prolific research producers to a given university. In exchange for a lump sum, the researchers in question would therefore “sell” their authorship, possibly on the articles themselves, but especially in their response to the Web of Science’s query as to their affiliations. When HCR data became available, scientometricians analyzed it and concluded that secondary affiliations should be left out to avoid massive ranking manipulation5.
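To get a feel for the arithmetic behind that 20%, here is a back-of-the-envelope sketch. The indicator weights are ARWU's published ones; the normalisation (a linear scaling to each indicator's top scorer) is a simplification of ARWU's actual method, and every raw number below is invented for illustration.

```python
# Back-of-the-envelope sketch of an ARWU-style composite score. The weights
# are ARWU's published ones; the normalisation (linear scaling to the top
# scorer) simplifies ARWU's actual method, and all raw numbers are invented.

WEIGHTS = {"Alumni": 0.10, "Award": 0.20, "HiCi": 0.20,
           "NS": 0.20, "PUB": 0.20, "PCP": 0.10}

def composite(raw: dict, best: dict) -> float:
    """Weighted sum of indicators, each scaled so the top scorer gets 100."""
    return sum(w * 100 * raw[k] / best[k] for k, w in WEIGHTS.items())

# Invented top scores per indicator, and one mid-table university.
best = {"Alumni": 30, "Award": 30, "HiCi": 200, "NS": 150, "PUB": 100, "PCP": 100}
uni  = {"Alumni": 3,  "Award": 0,  "HiCi": 4,   "NS": 15,  "PUB": 40,  "PCP": 30}

print(round(composite(uni, best), 2))   # 14.4

# Reassigning four HCRs doubles the HiCi indicator and adds
# 0.20 * 100 * 4/200 = 0.4 composite points: marginal at the very top,
# but worth dozens of places in a crowded band where scores are close.
uni["HiCi"] += 4
print(round(composite(uni, best), 2))   # 14.8
```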
The gaming had become too visible, forcing the ranker to change its methodology, stating that “only the primary affiliations of new Highly Cited Researchers are considered in the calculation of an institution’s HiCi score for the new list”. While this new method limits the direct interest of contractual arrangements between authors and institutions, it raises specific problems for French universities. Indeed, because of the multiplicity of affiliations described above, many authors employed by research organizations indicate to the Thomson Reuters/Clarivate/Web of Science group that their main affiliation is CNRS, INSERM or INRA, which are not ranked. As a result, the research-intensive universities, united under the umbrella of CURIF, lobbied extensively to get these full-time researchers to indicate that University X is indeed their first affiliation and not the secondary one. After a long struggle, they obtained satisfaction in 2019, when the Higher Education and Research Minister herself, Frédérique Vidal, signed a letter asking HCR researchers to pick their associated university rather than their own organization. This move has only one purpose, as detailed by Daniel Egret in the French economic press:
“The astrophysicist predicted a gain of 84 places for the University of Lorraine, 57 places for Toulouse-III and 26 places for Montpellier. The most “spectacular” effect of the new affiliation rules will mainly concern universities that are between the 200th and 300th place in Shanghai, according to Daniel Egret, because, at this level, “the scores are very close between universities, which can make them gain several dozen places”. The effect will be less visible for universities in the top 100, where the scores are “quite scattered”. Paris-Saclay would gain 3 places, and Sorbonne University, 5 places”. (Les Echos, 28 February 2019)
There are no hidden practices, secret arrangements or small manipulations here, as in the data reported on former students’ salaries or employment levels by US management schools, even at UC Berkeley. It is an outright admission of fudging the data in order to skew rankings deemed so unfavourable to French universities, and thus of producing alternative facts. Yes, these publications exist, they have been authored by Mr. X and Ms. Y; they just happen to work for a different organization, who cares?
Who are the users of these rankings?
Sold and actual markets
This example shows us how ambiguous university rankings are as a tool, not only because the measures they aggregate may be biased, false, or even falsified, but in their very objectives, whether on the side of their creators or of their multiple users. Indeed, it is common knowledge that the purpose of the mergers of French institutions was to move them up in the rankings, particularly the Shanghai one, even if the calculable effects were far from certain6. The reassignment of authorship is only the latest adjustment, after signature changes, to obtain good rankings through publications. Other strategies, such as lobbying voters in reputation surveys, follow the same logic. But why, how and for whom is it useful to be “top ranked”?
In theory, and in the rankers’ public display, rankings are supposed to inform students’ choice of a higher education institution. Supported by an audit-society vision and the idea that something like a global market for universities actually exists, rankers are supposed to be third-party certified information providers. This leads to two questions: on the one hand, is this useful information for students; on the other, to which other stakeholders could it be useful? The first question is not easy to answer, as Ellen Hazelkorn has recalled: the existing literature shows no clear sign of massive use by students to choose their institutions, except in very specific cases like some US schools7. I would add the absence of testimonies and anecdotal evidence in rankers’ ads and communication: you would never see videos of Jim, Liu or Penelope explaining how they picked their wonderful university by examining the rankings. In fact, rankers probably don’t care so much about students; their target is elsewhere. And when they do happen to make such a video, like THE below, the sound is so unprofessional that it is embarrassing and, of course, the students barely mention rankings as a factor in their decision.
If students are not the focus of the rankers’ attention, who are the rankings for? At least three types of users and uses can be listed:
- content that is easy to publish and comment on by the media
- direct objectives for the universities themselves and policymakers
- sources of revenue for ranking organizations
Without dwelling on sites specialising in this form, such as Buzzfeed, Topito or WatchMojo, the general media have, since the beginning of the 21st century, integrated the dissemination of rankings produced by third-party “rating agencies” on objects as diverse as holiday resorts, hospitals, public personalities or television series. So why not universities? Unlike complex information, the description and commentary of a ranking are easy to produce and can be appropriated by large audiences. Conversely, the lack of popularisation of the modular European U-Multirank shows that a ranking must remain simple and “objective” in order to be widely disseminated.
We have already mentioned the study of the effects of rankings on the ranked organizations themselves. Beyond optimisation and fraud, observers have documented changes in the recruitment and evaluation of academics, in how universities present themselves, and in the construction of coordinated regional, national and supra-national policies on the basis of indicators8. Whether these transformations are mere window dressing or have a profound impact on universities is a subject of debate in many articles. As far as publications are concerned, China has proved a particularly fertile ground for observing the effects of rankings9.
Finally, we should conclude on something that is both obvious and rarely discussed. The first stakeholders interested in rankings are the rankers themselves. They not only organize conversations around their (free) productions and sell themselves as certified audit firms, but also pave the way for service markets. Indeed, rankers sell universities services such as detailed rankings, training and help to get better ranked, and communication or recruitment services. Global rankings are less a market for students or universities than one for service providers. You can mine for gold, or you can sell pickaxes.
- as an example, see Billaut, Jean-Charles, Denis Bouyssou, and Philippe Vincke. “Should you believe in the Shanghai ranking? An MCDM view.” Scientometrics 84.1 (2010): 237-263. [↩]
- A short version can be read here, Kehm, Barbara M. “Global University rankings–impacts and applications.” Gaming the Metrics (2020): 93. [↩]
- Disclaimer: PSL is my “official affiliation” for publications [↩]
- see this seminal article, Espeland, Wendy Nelson, and Michael Sauder. “Rankings and reactivity: How public measures recreate social worlds.” American journal of sociology 113.1 (2007): 1-40. [↩]
- Bornmann, Lutz, and Johann Bauer. “Which of the world’s institutions employ the most highly cited researchers? An analysis of the data from highlycited.com.” Journal of the Association for Information Science and Technology 66.10 (2015): 2146-2148. [↩]
- see Docampo, Domingo, Daniel Egret, and Lawrence Cram. “The effect of university mergers on the Shanghai ranking.” Scientometrics 104.1 (2015): 175-191. [↩]
- see Hazelkorn, Ellen. “The impact of league tables and ranking systems on higher education decision making.” Higher education management and policy 19.2 (2007): 1-24. [↩]
- See for example Stack, Michelle. Global university rankings and the mediatization of higher education. Springer, 2016. [↩]
- Xu, Xin. “Performing under ‘the baton of administrative power’? Chinese academics’ responses to incentives for international publications.” Research Evaluation 29.1 (2020): 87-99. [↩]