Introduction

“…when you can measure what you are speaking about and express it in numbers you know something about it; but when you cannot measure it, when you cannot express it in numbers, your knowledge is of a meagre and unsatisfactory kind”. (Kelvin as cited in Merton, Sills, and Stigler (1984)).

Kelvin's dictum has been the guiding principle for many generations of scientists, not least for economists. Measurement is science. It is somewhat of an irony that this dictum has been inverted and has trickled down into the everyday practice of many scholars in valuing their contribution (Moosa 2018): a scientific contribution counts as 'science' only if its impact can be expressed in numbers. And to paraphrase Kelvin: if you cannot express the impact in numbers, your contribution is of an unsatisfactory kind. Deans, department heads, science foundations, accreditation bodies and grant reviewers all rely tacitly or explicitly on science metrics, as the number of publications has become excessively large and the different fields within economics too specialized to appraise. This so-called 'metric tide' in science, as described and weighed by Wilsdon (2016), has progressed. In economics especially, the appetite for measuring 'productivity', competition and 'ranking' is noted to be stronger than in other disciplines (Fourcade et al. 2015). However, the metric tide seems to have reached its limits. For instance, Heckman and Moktan (2020) argue that the excessive focus on top journals in economics has become dysfunctional. The increased competition among scientists is a reality for most universities and has implications for research assessments, accreditation rounds, and individual funding for research. The latter in particular increases the pressure on individuals, as the competition for grants has become fierce and, especially for starting academics, a grant is often their only ticket for staying on in academia. If one wants to earn a livelihood as a researcher it is either 'funding or famine' (Stephan 2012), and this drive for funds is generally felt to be strongly connected to a publication record: reviewers are often asked to assess the scientific merits of a researcher based on his or her publication record. Others also note how engaging in ranking games or the grabbing of attention (Klamer and van Dalen 2002) can potentially harm the way scholars practice science: disregarding promising methods and topics (Akerlof 2020), neglecting key tasks such as teaching or academic citizenship (Miller et al. 2011; Osterloh and Frey 2015), and setting aside one's own ideas to publish what the 'market' demands (Frey 2003).

The central research question in this paper revolves around how publish-or-perish pressure affects the views and perceptions of economists about their practice of science. This aim is split into three questions. First, how high is the publication pressure on economists and what factors can explain this pressure at the individual level? Second, is there a widespread consensus on the pros and cons of the publish-or-perish principle among economists or can one detect differences? And third, how does this assessment of the publish-or-perish principle affect the view of economists on their practice of economic science?

To shed light on how publication pressure permeates academic life, we will use an extensive survey held in 2015–2016 among economists affiliated with Dutch universities. To put the position of Dutch economics faculties in context, these institutions have achieved a top position within the economics hierarchy in Europe (Kalaitzidakis et al. 2003; Lubrano et al. 2003), and the Dutch case could serve as an appropriate case study for other European countries as well, because most universities outside the Ivy League have similar ambitions in moving up the various rankings. Furthermore, one should take note of the fact that economics at Dutch universities is rapidly internationalizing and is certainly no longer a Dutch affair: 43% of the Dutch economics faculty consists of foreign-born members (Rathenau Institute 2018), most classes at economics departments are taught in English, and, like most US faculties, Dutch departments actively use international job markets at European and American venues to attract foreign talent.

The setup of this paper is as follows. First, we will offer a brief overview of the pros and cons of the publish-or-perish principle and how it can possibly affect academic work and science in general ("Publish-or-perish principle in context" section). In the third section, we will introduce the data and methods used in this paper. The fourth section covers the measurement of work pressure, in which publication pressure figures prominently. Subsequently, in the fifth section we will perform a latent class analysis to see whether economists differ in their assessment of the pros and cons of the publish-or-perish principle, as well as examine how different classes of economists perceive the circumstances under which they work. The final section concludes and puts the findings in perspective.

Publish-or-perish principle in context

The publish-or-perish principle is not a novel idea. The eminent science scholar Garfield (1996) pointed to the first printed usage of this term in the work of the sociologist Wilson (1942), who wrote: "The prevailing pragmatism forced upon the academic group is that one must write something and get it into print. Situational imperatives dictate a 'publish or perish' credo within the ranks" (p. 197). Garfield guessed that Wilson, being a student of the renowned sociologist of science Robert Merton, was expressing a feeling that must have been present among American faculty. For ambitious scholars, 'publish-or-perish' was initially seen as a sound principle. As Beard (1965) expressed it: "advancement and academic recognition shall depend in part upon one's contribution to the published literature of his academic field." It was seen as a good and non-controversial step, although Beard was not blind to the downsides of this policy and to how it could jeopardize academic obligations such as teaching. As he notes: "the road to institutional distinction is also strewn with tragedies, tragedies that have resulted when an institution's ambitions have far exceeded its resources." (p. 458).

Within the early economics and sociology of science literature, stressing publication as an academic requirement was also perceived as a sound principle. Getting your work into print is closely aligned with the priority principle stressed by Merton (1973): the goal of scientists is to be the first to communicate an advance in science. Today this communication is done primarily in journals managed by scientists who consult their peers to review a contribution. A journal publication can hence be seen as the recognition awarded by the scientific community for being first. This 'race to priority' is very similar to what economists call patent races or winner-takes-all contests. Being first in claiming a discovery can be rewarded by citations, by eponymy, or by more formal prizes like the Nobel Prize. However, as Stephan (1996) remarks, this economic focus neglects the idea that puzzle solving may be an equally important motivating force that explains why people participate in science and why winning races is not everyone's goal in life. However, with the emergence of research universities it became necessary to pay close attention to the composition of staff that has a taste for advancing science and that is not only interested in the satisfaction of solving puzzles. Universities had to create a work environment in which the forces of competition and selection play a major role. The tenure system, also known as up-or-out contracts (Kahn and Huberman 1988), is nowadays a common element in most universities, although at European universities this system remained a 'foreign' idea until around the turn of the century. Being able to publish articles that gain wide recognition by one's peers is seen as a precondition for being awarded tenure. Publications and citations could support this decision making. Initially, scholars and bibliometricians were quite optimistic that citations measured quality. For instance, Cole and Cole (1973) claim that "the data available indicate that straight citation counts are highly correlated with virtually every refined measure of quality." And in economics, Stigler and Friedland (1979) make the explicit assumption that "The quality of a scholar's work is properly related to the frequency of its citations by his colleagues." (p. 1).

However, when metrics became the most common measuring rod in characterizing the pecking order in science, bibliometricians warned time and again: impact is not the same as quality (Hicks et al. 2015; Martin and Irvine 1983; Moed 2006), and Adler and Harzing (2009) state their concern about the current ranking systems used by universities: "[these] systems are dysfunctional and potentially cause more harm than good." The optimism that surrounded the use of these indicators may have given economists the idea that selection is greatly improved by relying on metrics. In practice, such decisions turn out not to be that simple. This type of disappointment is also illustrated in the paper by Brogaard et al. (2018), who produce evidence that the tenure system does not seem to deliver on the promise of selecting those scholars who continue producing groundbreaking research. As they formulate their conclusion: "It does not appear that academic economists respond to the greater professional and intellectual freedom that tenure should provide by sustaining earlier research effort or by taking chances that lead to more home run research." Part of the answer as to why we see a decline after tenure is in a sense logical, as path-breaking work is generally done in the very early stages of a career (Jones 2010; Van Dalen 1999), although, as Weinberg and Galenson (2019) show, this may differ within economics by type of research, with 'conceptual economists' peaking far earlier than what they call 'experimental economists'. An alternative explanation that Brogaard et al. do not consider is the possibility that the amount of work pressure increases over a career. The implicit assumption is that tenure is the moment in a career when the 'trial period' is over and one can tackle any idea one wants. The sample period that Brogaard et al. consider is, after all, also a period in which the publish-or-perish culture has become more widespread and more intense. And this could have the implication that the rat race in academia never stops, even after one has obtained tenure.

The publish-or-perish culture also resounds in the work by Niles et al. (2020), who show how young scholars at academic institutions in the US and Canada value the impact factor of journals, the number of publications and other metrics far more highly than older and tenured scholars do. For those scholars who are involved in review processes concerning promotion and tenure these factors are virtually the only ones that count, but, as Niles et al. make clear, deep down they only care about their work being read by colleagues who work in similar niches in their discipline. Niles et al. interpret this as a disconnectedness among scientists: people who still have to strive for tenure or promotion have to believe in the value of impact factors and Hirsch indexes, because that is what counts and that is what reviewers of grant proposals will take on board in their evaluation. Contrary to the younger faculty, the older and tenured faculty care less about the conventional metrics; they choose topics and areas irrespective of whether they attract a lot of citations and hence disconnect from what they perceive their peers might value.

This divide noted by Niles et al. is intriguing. Not only may their research explain the findings of Brogaard et al. (2018) on why tenure does not seem to work as envisioned, it also suggests that one can benefit from taking a look at how actual scientists perceive their working conditions. The debate about the publish-or-perish principle is broader than simply incentives and productivity. This paper tries to enrich this debate by taking a closer look at how academic economists of different ranks evaluate the work pressure in the modern-day university.

Method and data

To assess the impact of the publish-or-perish principle on the perceived work pressure of economists and on their view of how this principle affects their scientific practice, data were collected by means of a survey (in English) distributed among faculty members of all economics departments at Dutch universities. In line with privacy regulations, the survey was distributed among faculty by the deans of the separate economics departments, accompanied by a supporting email letter from the dean. The group of respondents included not only tenured faculty but also non-tenured personnel, such as PhD students, tenure-track assistant professors, post-docs and teaching faculty with short-term contracts. The fieldwork was carried out between November 2015 and January 2016 and the overall response was 453, giving a response rate of 24%. This is a low percentage compared to population-wide surveys or surveys that rely on incentives, but this response is comparable to similar surveys among experts or professionals (Bertrand 2019; Klein and Stern 2005; May et al. 2014; Ricketts and Shoesmith 1992; Van Dalen and Henkens 2012b). The survey contained a substantial number of questions shedding light on the different tasks that faculty perform within their universities, as well as their opinion on how performance is evaluated and perceived within their university, their perception of the pros and cons of using publication and citation metrics, and how personal values impact scientific practice (Van Dalen 2019). These attitude and opinion questions will be introduced later on, but at this point we want to introduce the variables that are important for seeing whether the position one has in academia might affect one's perception of the work pressure.

For now, it suffices to sum up the most salient characteristics (see Table 1) of our sample of economists. The average age in our sample is 41.6 years, 34% of the sample has a foreign nationality and 20% of the sample is female. The positions held by respondents vary but adequately reflect the range of positions in Dutch academia. The average respondent reported having published 1.8 articles in international refereed journals (with a Web of Science impact factor) in the past year, which is more or less in line with the norm that some universities use to grant tenure. Assistant professorships can cover fixed-term contracts (tenure track) or permanent contracts. Associate and full professors are always tenured. Special endowed chairs at Dutch universities ('extraordinary professor') can be funded through external funds, i.e. private companies or foundations. These 'professors by special appointment' are often appointed on a fixed-term and part-time basis, and often hold a full-time position in a firm, government agency or another university or research institute.

Table 1 Descriptive statistics of explanatory variables

Measuring and explaining the work pressure

How high is the publication pressure among economists? This question may seem trivial given the amount of attention that is paid to publication pressure, but it is not often explicitly measured. And how does this pressure compare to other academic responsibilities? The work pressure measurements listed in Table 1 show unequivocally that, of all the regular academic tasks, the pressure to publish is perceived to be the highest, with a value of 7.8 on a 10-point scale (1 = no pressure at all; 10 = extremely high pressure). Publications are frequently used in national research assessments, rankings, the internal allocation of funds across departments within universities and, of course, internal performance reviews. The pressure to teach (6.4), to acquire research funds (6.2) and to carry out administrative duties (5.5) is substantially lower. The fact that on average these tasks generate less pressure than the task of publishing is plausible because certain ranks within the universities (e.g., PhD students in their start-up years) are not heavily involved in acquiring funds, teaching or administration.

Figure 1 shows how the pressure mounts across career positions within economics; to focus solely on the extremes, the percentage reporting high pressure (graded 8–10) has been included in the figure. High work pressure is felt not only by junior faculty, but by virtually all academics who want to pursue a career in science. As one can see in Fig. 1, the publication pressure is highest among those who want to attain tenure or are set on becoming a full professor. What makes things complicated is that the various pressures are felt jointly: all tasks are positively correlated. The fact that the pressure to publish and the pressure to acquire grants are interrelated (Waaijer et al. 2018) is perhaps self-evident, because obtaining tenure depends on having obtained grants, and reviewers of grants (at the time of measurement) are always asked to look at the track record of applicants. Teaching and administrative duties are often left out of the equation, but they are tasks that are inherently tied to being an academic. Leaving out these elements would give an incomplete picture, because in today's universities in Western countries mass education has become the rule and faculty have to deal with rising student numbers. Hence, when the pressure goes up in, for example, teaching, this is positively associated with a higher pressure to publish or to acquire research funds. Table 2 presents a set of simultaneously estimated equations that offer some insight into which characteristics of an economist are important in explaining the work pressure.

Fig. 1 The perceived high work pressure in Dutch economics departments for a number of academic positions, 2015–2016. Note: Very high pressure is here defined as respondents reporting an 8 or higher on the 10-point scale of pressure in teaching, publication, acquiring funds, and administration
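As an illustration of how the pressure measures reported in Table 1 and Fig. 1 could be computed from the raw survey responses, the following minimal Python sketch may help; the file name and column names are hypothetical placeholders rather than those of the actual dataset.

import pandas as pd

# Hypothetical survey extract: each pressure item is scored on a 1-10 scale
# (1 = no pressure at all, 10 = extremely high pressure).
pressure_items = ["publishing", "teaching", "funding", "administration"]
df = pd.read_csv("survey_economists_2015.csv")  # placeholder file name

# Mean pressure per task (cf. Table 1: publishing 7.8, teaching 6.4, ...).
print(df[pressure_items].mean().round(1))

# Share of respondents reporting very high pressure (8 or higher),
# broken down by academic position (cf. Fig. 1).
high_pressure = df[pressure_items].ge(8)
print(high_pressure.groupby(df["position"]).mean().round(2))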

Table 2 Explaining the pressure to publish, acquire funds, teach and administer (based on 1–10 scale)
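The joint estimation reported in Table 2 could, for instance, be set up as a system of seemingly unrelated regressions. The sketch below uses the linearmodels package and illustrative variable names; it is not meant to reproduce the authors' exact specification.

import pandas as pd
import statsmodels.api as sm
from linearmodels.system import SUR

df = pd.read_csv("survey_economists_2015.csv")  # placeholder file name

# Explanatory variables: rank dummies, productivity, age, gender and the
# university's ranking position (an illustrative subset of Table 1).
exog = sm.add_constant(df[["assistant_prof", "associate_prof", "full_prof",
                           "articles_last_year", "age", "female",
                           "university_rank"]])

# One equation per pressure item, estimated jointly so that the error terms
# of the four equations are allowed to be correlated.
equations = {task: {"dependent": df["pressure_" + task], "exog": exog}
             for task in ["publishing", "teaching", "funding", "administration"]}

results = SUR(equations).fit(cov_type="robust")
print(results.summary)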

The publication pressure is perceived to be the highest among assistant and associate professors. This accords well with a study by Haven et al. (2019), who focus on different disciplines at four academic institutions in Amsterdam. These are indeed the crucial periods in an academic career, when tenure and promotion depend to a large extent on one's publication record. What is perhaps more noteworthy is that actual publication productivity, as a proxy for publication skills, does not soften the pressure: whether you are able to publish a lot or just one or two articles in internationally refereed journals, the pressure does not subside. Although the publication pressure coefficients differ from assistant to full professor, equality tests show that the differences between these coefficients are not statistically significant. This is an indication that the publication pressure does not subside substantially once one becomes an insider in academia. The same applies to the task of teaching: there are no clear differences in pressure between the insiders of academia. This equality of pressure across ranks is no longer visible when one turns to the tasks of administration and the acquisition of research funds. Here one can see that the academic position of full professor is of crucial importance: compared to PhD students the funding pressure is higher among assistant and associate professors, but once one becomes a full professor the funding pressure increases again substantially. The same may be said of administrative duties, where the pressure increases with every step that one rises within the hierarchy of the university.

Some differences are also to be noted with respect to the university of employment, as measured in this setting by the worldwide ranking position in economics (see note in Table 1). As is perhaps to be expected, economists working at universities that rank relatively high on the worldwide list of universities feel more pressure than economists working at universities with a relatively low ranking. Working at a highly ranked university comes with higher expectations, and this is apparently reflected in a higher publication pressure. The ranking position of the university has, however, no effect on the other tasks.

Finally, with respect to gender we cannot detect any clear pressure differences between male and female academics. This finding may be counterintuitive for close observers of the position of women in economics. In the Netherlands there are mounting complaints about the barriers that female academics experience in becoming full professor or getting tenure, in particular in economics. It is hard to give a reason why these complaints are not revealed by the self-reported work pressure variables, but it could very well be that this dissatisfaction has to do with work cultures and practices that are not gender neutral (Lundberg and Stearns 2019) and that were not directly measured in the current survey.

The consequences of the publish-or-perish principle

Are economists divided on the pros and cons of publish-or-perish?

To gauge how the publish-or-perish principle affects academic life, we first want to discover how economists perceive the consequences of the pressure to publish in international refereed journals in general. In short, do they see only the merits of this pressure, or are they skeptical and do they also see the downsides of this principle? Table 3 gives an impression based on five key elements of the pressure to publish in international refereed journals.

Table 3 The pros and cons of pressure to publish in peer-reviewed journals

The publish-or-perish principle can have benefits, such as letting meritocratic principles do their work, becoming less dependent on old boys' networks, giving everyone the chance to move upward in the hierarchy, and improving the quality of research through peer review. However, each of these building blocks of science can be assessed differently in practice. Think of the excessive number of publications that are not cited and hardly read as a reflection of the competition for attention (Laband and Tollison 2003; Nicolaisen and Frandsen 2019; Van Dalen and Henkens 2004). This lack of attention becomes a different matter when one's promotion or grant application depends on it: it may change the choice of topics or create a tendency among scholars working in non-English-speaking countries to neglect national issues (Van Dalen and Henkens 2012a), or, more directly, because such work is not seen or 'counted' by university management as a scientific activity. The strong increase in the number of scientists has led to an increasing number of people wanting to get published, leading to congestion in the review process: difficulties in finding suitable reviewers, long waiting times before articles are printed or published, and the rise of fake and low-quality journals (Altbach and Rapple 2012; Huisman and Smits 2017). And of course, one can have fundamental concerns about how reviewers can err in rejecting classic ideas of scholars (Shepherd 1995), or take these mistakes for granted and remain optimistic about the benefits of peer review (Card and DellaVigna 2020; Szenberg and Ramrattan 2014). But the most worrisome side effect of publication pressure can be traced to the increase in scientific misconduct or unethical publication behavior such as data manipulation, plagiarism or fraud (Fanelli 2010; Fang et al. 2012; Martin 2013; Petersen 2019; Seeber et al. 2019).

The impression based on Table 3 is that most economists perceive both the positive sides of publication pressure (upward mobility and improvement of the quality of research) and the negative sides (turning one's back on national issues, excessive publication and unethical behavior). The percentage of respondents (fully) agreeing varies between 60 and 70% across the items. This suggestion of a consensus among economists could be a false impression, as not every respondent weights each item equally. To explore this issue in more depth, a latent class analysis (LCA) is performed to test whether we can detect a divide into different groups among economists. Table 4 shows that there are two clear types of economists. The first type is skeptical of the publish-or-perish principle: the positive sides receive lower weights than the negative sides. This is quite different among the supporters, or 'true believers', of the publish-or-perish principle: the positive sides are clearly perceived by this group, whereas the downsides are given short shrift. Besides the existence of two clear classes, the distribution should of course also be noted: 66% belongs to the class of skeptics and 34% to those who are supportive of the principle.

Table 4 Latent class marginal means for a two-class model of economists
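A two-class latent class model of this kind can be estimated with an EM algorithm. The sketch below is a self-contained illustration that assumes the five items of Table 3 have been dichotomized into agree (1) versus disagree (0); it is not the estimation routine used for Table 4.

import numpy as np

def fit_lca(X, n_classes=2, n_iter=200, seed=0):
    """Basic EM for a latent class model with binary items.
    X: array of shape (n_respondents, n_items) holding 0/1 responses."""
    rng = np.random.default_rng(seed)
    n, k = X.shape
    pi = np.full(n_classes, 1.0 / n_classes)            # class shares
    theta = rng.uniform(0.3, 0.7, size=(n_classes, k))  # P(agree | item, class)
    for _ in range(n_iter):
        # E-step: posterior probability of class membership per respondent.
        log_lik = (X[:, None, :] * np.log(theta)[None] +
                   (1 - X[:, None, :]) * np.log(1 - theta)[None]).sum(axis=2)
        log_post = np.log(pi)[None] + log_lik
        log_post -= log_post.max(axis=1, keepdims=True)
        post = np.exp(log_post)
        post /= post.sum(axis=1, keepdims=True)
        # M-step: update class shares and item endorsement probabilities.
        pi = post.mean(axis=0)
        theta = (post.T @ X) / post.sum(axis=0)[:, None]
        theta = theta.clip(1e-6, 1 - 1e-6)
    return pi, theta, post

# pi approximates the class shares (66% versus 34% in Table 4) and theta the
# class-specific probabilities of agreeing with each item; item_matrix is a
# hypothetical (n_respondents x 5) array of dichotomized responses.
# pi, theta, post = fit_lca(item_matrix)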

Close inspection of some of the characteristics of economists shows that, in particular, one's position in the university hierarchy matters for how one views the principle. Full professors are more supportive of the incentive mechanisms behind the publish-or-perish principle: 47% of the full professors are supportive of this principle, as against 31% among PhD students, 34% among assistant professors, and 31% among associate professors. Of course, part of this outcome could be the result of survivorship bias, as the sample naturally reflects the fact that only those who have crossed the hurdles of academia and feel at ease with publishing regularly are still in the sample, whereas those who did not make the mark have left academia. Still, the finding that full professors are more in favor of the publish-or-perish principle remains robust, and this is also revealed more clearly by studying the individual items (see Appendix Table 6): full professors are more convinced than other faculty members that this principle improves upward mobility and the quality of research, and they are far less likely than lower-ranked faculty members to see it as leading to an excessive number of unread papers or to unethical behavior. This finding is in line with the answer given by Osterloh and Frey (2020) to the question of why science metrics such as impact factors are still so influential despite strong criticism by scholars and institutions. Vested interests are part of the answer, and this may be one of the reasons why full professors support the publish-or-perish principle as an important selection mechanism in science.
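Continuing the sketch above, the breakdown of supporters by academic rank could be tabulated as follows; as before, the column names are hypothetical, and which of the two fitted classes corresponds to the 'supporters' has to be checked against theta.

import numpy as np

# Assign each respondent to the class with the highest posterior probability
# (post comes from the fit_lca sketch above); label classes after inspecting theta.
df["lca_class"] = np.where(post.argmax(axis=1) == 1, "supporter", "skeptic")

# Share of supporters within each academic position (cf. 47% of full
# professors versus roughly a third of the other ranks).
print(df["lca_class"].eq("supporter").groupby(df["position"]).mean().round(2))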

The effects on the work environment of economists

The previous results show that the majority of economists see both the pros and the cons tied to stressing publication in internationally refereed journals. But how do economists, skeptics and supporters alike, perceive the effects of publication pressure on their own work environment? Table 5 gives an overview of the levels of agreement and disagreement with each statement for the two classes of economists as well as for the total group of economists.

Table 5 Perceptions about work environment related to the views on publishing pressure (latent classes of economists)

Although there are statements on which both classes of economists more or less agree, the most interesting statements involve strong differences or even conflicting positions. To give an example of the latter, skeptics are not strongly motivated (40% disagrees) by citations or the respect of other scholars, whereas supporters are in large part (46% agrees) motivated by these forms of recognition. Clear differences in work practice can be noticed in the degree to which universities are perceived as appreciating the content of respondents' work: 72% of the skeptics agree that universities do not care about the content they write about, whereas supporters leave more room for doubt: 45% agrees with this statement. In that respect one can also understand why skeptics lean more toward the position that universities are managed as if they were firms (see statement 4) than do the supporters, who on balance disagree with this statement. Strong dissatisfaction can also be traced to the way public funds are allocated in Dutch science (in which the national science foundation takes a dominant position): 66% of the skeptics disagree with the statement that these public funds flow to the most original researchers. But even among supporters one can see dissatisfaction, as 48% disagrees with the statement.

Given the strongly divergent opinions of skeptics and supporters, it may come as no surprise that skeptics show a far stronger inclination to exit academia: 40% have thought about leaving academia, against 21% of the supporters. Part of this can be explained by the fact that full professors are relatively more supportive of the current system. Furthermore, the professors have survived all the hurdles during their career and are likely to be more at ease with getting their work published than those starting their career, such as assistant professors or post-docs.

Conclusions and discussion

The economist and Nobel laureate Paul Samuelson (1962) once summarized what intrinsically motivates scientists: "In the long run, the economic scholar only works for the only coin worth having – our own applause." This idealized version of how science works and of the underlying motivations of scholars can be traced in the early literature on the economics of science (see for a summary Stephan (1996)). The race to solve the great puzzles of a science, as well as gaining recognition from one's peers, was highly prized; money or employment was of secondary importance or at most a spinoff. However, with the increasing importance of bibliometrics in driving rewards, promotion and tenure in everyday university life (Stephan 2012), "the applause" of peers has become instrumental in securing lifetime income and employment. To act in accordance with these metrics has become a dominant strategy for academics (Casadevall and Fang 2014). Competition for funding, prestige and positions within academia is so strong (Anderson et al. 2007) that the pressure to publish is always present. In the process of writing grant proposals, it has become more or less standard practice to include the impact factors of one's published articles to inform and persuade reviewers.

The current paper has focused on whether this instrumental use of science indicators, summarized in this paper as the publish-or-perish principle, has left its mark on how academic economists perceive their work environment and the scientific integrity of their discipline. First of all, the pressure to publish is considered high by the majority of faculty. And contrary to common wisdom, which holds that this pressure only affects the young and precarious such as PhD students and post-docs, this study shows that in particular assistant and associate professors experience high pressure, and significantly more so than PhD students. This pressure also colors one's outlook on the academic environment. Although most academics agree that the pressure to publish in international refereed journals has its intended merits, it is also perceived to have clear unintended negative consequences. Among economists we discover a clear divide between the skeptics and the true believers of the publish-or-perish principle, with the skeptics representing two thirds of the respondents. In particular, the perception of skeptics that their employer, the university, only cares about how much one publishes and in which journals, and not about the content of one's publications, is a tell-tale sign of disconnectedness. Finally, the prospect of leaving academia is to a large degree inspired by dissatisfaction with the publish-or-perish principle as well as by one's (lack of) ability to publish.

These findings have, of course, their limitations, as the data are restricted to a cross-section of economists working at universities in a European country, i.e. the Netherlands, and not in the United States, the country that still dominates the face of economics and where the publish-or-perish principle and the concomitant up-or-out tenure contracts were more or less 'invented'. Furthermore, statistical analysis of cross-sectional data naturally cannot settle issues of causality or trace how careers and attitudes develop over time. Still, the attitudes and opinions stated by these economists cannot be easily dismissed, and some findings may trigger further research and offer food for thought for economists, but also for scientists and managers of science in general.

The unintended consequences of the publish-or-perish principle can be detrimental to the way a social science like economics is practiced. Economics is both a science and an art, and it takes all sorts of scholars to solve grand puzzles and transfer knowledge. An excessive focus on science indicators may lead management to overvalue certain types of scientists and undervalue other types. The making and education of economists may lead to a monoculture in which the Academic Professional dominates and has lost touch with the Political Economist (Colander 2011). The different tasks of an academic economist encompass many dimensions that are not easily measured or weighted, and common metrics as a management tool may only give non-specialists the illusion that they have made an informed decision. Misrecognition of qualities is a serious impediment to economics as a science. For instance, when institutions governing promotion and tenure are heavily influenced by tenured scientists who display homophily (they favor tenure candidates who adhere to their own paradigm), sciences lacking experimental evidence can become dominated by people adhering to what Akerlof and Michaillat (2018) call 'false paradigms'. It is a matter of judgement whether economics can be described as this type of science, but scholars like Fourcade et al. (2015) and Colander (2015) have noted that economics has all the traits of being trapped in the bubble of an elite set of universities. Furthermore, institutions and social norms within a science may push scientists into roles that do not match their qualities or that fail to take advantage of their comparative advantages. The critique of Akerlof (2020) is relevant in that respect. He points out that the current institutions of publication and promotion offer biased incentives that lead to what he calls 'the sins of omission': economics as a discipline tends to ignore important topics and problems that are difficult to measure in a 'hard' way. Qualitative research is, for instance, more difficult to publish than quantitative research. And scholars who like to offer interdisciplinary insights often gain recognition more slowly, as it appears harder to obtain appreciation for their contributions, as Leahey et al. (2017) show.

What, then, are the implications for scientists in general? This paper shows that most academics are skeptical, if not outright negative, about the publish-or-perish principle. The logical question would then be: why are changes so difficult to enact? Some piecemeal change is under way, as the San Francisco Declaration on Research Assessment (DORA), initiated in 2012, has been signed by numerous academic organizations. In the meantime, this has led to the proposal of 'good practices', with one overarching recommendation:

“Do not use journal-based metrics, such as Journal Impact Factors, as a surrogate measure of the quality of individual research articles, to assess an individual scientist’s contributions, or in hiring, promotion, or funding decisions.”

The main difficulty with denouncing metrics is that "the genie is out of the bottle" and putting it back inside is fraught with pitfalls. First of all, science metrics have become part of the business model of universities, and scholars may have become addicted to these indicators. To refrain from using science metrics is like asking Facebook or Twitter to delete their 'like' or 'share/retweet' button. Second, it may lead to the use of more refined metrics covering more desired dimensions, which in turn will also lead to some form of goal displacement and counterstrategies or 'gaming the system' (Biagioli and Lippman 2020; Frey 2003; Haley 2017). Third, accountability practices in science rely to a large extent on metrics to demonstrate to the public that public money is well spent. Rankings are in that respect easy to understand for politicians, managers and lay people in general. Given that science has become so highly specialized and fragmented, replacing the story told by metrics with an extensive 'narrative' requires more effort from the university and from the receiver of the reports. The temptation to resort to the old metrics and measures will be hard to get rid of.

The main policy question, for now and for the years to come, is how the modern-day university is best governed without resorting to science metrics. What is the alternative? It may start with getting away from the ranking games at the individual and institutional levels (Adler and Harzing 2009; Biagioli and Lippman 2020; Osterloh and Frey 2015). A real appreciation of scholars cannot be reduced to looking up someone's H-index or field-weighted impact factor in the Web of Science, Google Scholar, Scopus or any other citation database. To return to the advice of Samuelson: implicit incentives (applause as the only coin worth having) are at the heart of practicing economic science. A real appreciation of a scholarly achievement starts with having intimate knowledge of the field and the patience to see ideas tested and tried. And in designing 'incentive' structures in science there is perhaps only one good piece of advice: be aware that scientific knowledge is not a private good and science is not a market. Embracing competition based on imperfect science metrics is basically a recipe for the management folly that Kerr (1975) once described so vividly: the folly of rewarding A (publications), while hoping for B (scientific ideas). The phenomenon of 'goal displacement' has evolved, and universities have managed to select and educate members with a 'taste for publication' and not necessarily those with a 'taste for science'. Rewarding output in the form of publications was initially a way to get rid of the academic oligarchy of the old boys' network. The alternative to this form of governance by output control would be governance by input control: select, educate, and socialize members with a 'taste for science'. Needless to say, this model of governance has its flaws, as it may regenerate the problems of the old days. This is well acknowledged by supporters of this route (Osterloh and Frey 2015). But when universities want to strive for scientific innovation, the route of input control may close the gap between reward (A) and hope (B) better than the playing of ranking games with imperfect metrics.