Much of the evidence in modern health economics is built on experiments, which have been critical to our understanding of everything from the behaviour of the insured and preferences for health and healthcare to health behaviours and lifestyle choices [1]. A problem for the discipline is that the publication process creates a persistent bias: novel positive findings are more likely to be published [2]. Researchers can manufacture positive findings by reanalysing data until statistical significance is achieved, or by conducting multiple analyses and focusing on the most striking results. Well-planned and well-executed experiments often produce “negative” findings that are more robust but harder to publish [3]. The Centre for Open Science has recently followed the initiative of the journal Cortex in championing a method of providing more reliable evidence: registered reports [4, 5]. Here we set out the case for registered reports to be taken up by health economic journals.

Probably the most famous experiment in health economics is the RAND Health Insurance Experiment [6], which began in the late 1960s and was designed to test how people’s behaviour responded to different types of insurance. Since then, randomised controlled trials have continued to be applied in health economics (for example, the Oregon Experiment examining the effects of Medicaid on clinical outcomes [7]), alongside the proliferation of other experimental approaches such as laboratory and online experiments (commonly, but not exclusively, discrete choice experiments [8]).

An inherent problem with journal publication is that it filters which research gets published, and that filter is biased. Generally, humans find the unexpected interesting, and unexpected results are more publishable; as a result, researchers sometimes select which results to include in manuscripts submitted for publication. Often this takes the form of p-hacking, which occurs when researchers continue to collect data or undertake alternative analyses until they find a statistically significant result [9]. Editors and reviewers also favour results that reject the researcher’s null hypothesis [2]. All of this means that studies producing “positive” results with statistically significant p-values are more likely to be published than “negative” ones.

There have been efforts to reduce publication bias through the pre-publication of protocols or the registration of trials, but these practices are not standard in health economics (though there have been calls for registers, e.g. [10]). As a result, researchers are free to retrofit study results with glossier research questions, cherry-pick favourable findings, perform unlimited numbers of analyses and exclude inconvenient data in order to generate (what are perceived to be) more interesting findings. In the absence of pre-publication protocols, then, any given finding in the health economics literature may just be a chance outcome that is not useful for evidence-based policy.

Recently, the editors of several health economic journals published a statement encouraging the publication of negative results [11]. This is a positive step, and evidence is emerging that the statement is having some impact [12], but a statement alone cannot fix the problem. Importantly, reviewers and journal editors are not blinded to the results when assessing a manuscript for publication. So, while negative results may not be used as explicit grounds for rejecting a manuscript, the direction and statistical significance of the reported results can still play an unconscious role. The statement also carries a possible unintended consequence: poorer-quality research may be favoured in an attempt to actively respond to it.

Registered reports are a new approach to journal publishing [13] in which peer reviewers are blinded to the results when considering the merits of an experiment. The authors submit a report to a journal covering the background, methods and proposed analyses before the experiment is conducted. The protocol can be modified in response to the peer reviewers’ recommendations, after which it can be provisionally accepted by the journal. The authors then perform the experiment and expand the article to include the results and discussion, which are again reviewed; however, provided the authors have implemented the agreed protocol, publication should be guaranteed. Post hoc analyses are permitted but are clearly labelled as such, allowing reviewers and readers alike to judge these findings accordingly.

As noted by Chambers [13], registered reports are not appropriate for all studies. However, many studies within health economics could have been, or indeed would be, eligible. To gauge the scope for registered reports, we examined the latest three issues of five prominent health economic journals and calculated the proportion of studies that would have been eligible under two scenarios: a strict criterion under which only randomised controlled trials (RCTs) were eligible, and a broader criterion under which eligibility was extended to include quasi-experimental (QE) approaches. On average, 3% of the studies in these journals could have been published as registered reports under the strict criterion, while 29% were eligible under the broader criterion. The journal-specific results are presented in Fig. 1.

Fig. 1

Proportion of published studies in health economic journals that would have been eligible for publication as registered reports. Journals: HE = Health Economics; JHE = Journal of Health Economics; MDM = Medical Decision Making; Pharmacoecon = PharmacoEconomics; VIH = Value in Health. RCT = randomised controlled trial; QE = quasi-experimental

Natural experiments were excluded from this analysis because they involve no primary data collection, which makes the prospective process of registered reporting difficult to apply. Even so, registered reports could be applied prospectively in some cases, for example where the analysis is specified before a policy change takes effect.

The take-up of registered reports has varied considerably across fields. As of June 2019, over 200 journals offered registered reports [13]. Psychology leads the way, perhaps in part because the discipline has recognised that it faces a reproducibility crisis [14]. This recognition followed the Reproducibility Project, a collaboration of 270 contributing authors who attempted to reproduce 100 published psychology studies. The results, published in the journal Science, showed that only 36% could be reproduced [15]. Psychology’s response has been swift, with the rapid adoption of registered reports across a range of journals.

While economists recognise the importance of experiments, as demonstrated by the awarding of the 2019 Nobel Prize in economics for experimental work [16], it is highly likely that economics also suffers from a reproducibility crisis [17], yet the adoption of registered reports in the discipline has been slow. Further, there has been little discussion among health economists of the reproducibility crisis and what steps might be taken to address it; for instance, are there any plans for a collaboration to replicate key empirical studies [18]? The Centre for Open Science currently lists only one economics journal (Journal of Development Economics [5]) and no health economic journals that accept registered reports.

Beyond the scientific benefits of registered reports in minimising the potential for publication bias, there is potentially an important efficiency gain [19]. While registered reports may mean more administrative work for journals, they are likely to save a considerable amount of research funding overall. This is nicely illustrated by a recent controversy over a new value set for the five-level version of the EQ-5D, a health status instrument [20]. The study has been criticised by a group at the University of Sheffield [21], partly on methodological grounds and partly because the new algorithm produces results that differ from the existing ones. Surely a better and more efficient approach would have been for this review to take place before the fieldwork component of the study had been conducted.

Registered reports could also be extended to tackle the reproducibility crisis, with replication studies submitted as registered reports and peer reviewed by the original authors. This would need support and encouragement from journals. Guaranteeing the publication of replication experiments would significantly change the incentives, rewarding researchers for undertaking the important job of validation.

A more radical approach would be to allow authors to make registered reports available on a journal website to encourage empirical researchers to implement them. This would shift health economics closer to a discipline such as physics, where there is a clear separation between theoretical and applied researchers, with no expectation that theorists conduct the experiments that test their hypotheses. It is hard to know whether Peter Higgs ever envisaged that the particle he predicted in 1964 would be found 49 years later [22]. Similarly, health economists developing new theories could propose hypotheses to be tested by others.

Experiments are a powerful tool, but the publication process offers no guarantee that their results will be reported without bias. We believe that registered reports, in which the protocol is peer reviewed before the experiment is conducted, will greatly mitigate this bias and reduce research waste. Rather than following the crowd, it is time for health economics as a discipline to adopt registered reports and lead the way for economics as a whole.