
Mechanisms and impact of public reporting on physicians and hospitals’ performance: A systematic review (2000–2020)

  • Khic-Houy Prang ,

    Roles Data curation, Formal analysis, Writing – original draft

    khic-houy.prang@unimelb.edu.au

    Affiliation Centre for Health Policy, Melbourne School of Population and Global Health, The University of Melbourne, Carlton, Australia

  • Roxanne Maritz,

    Roles Formal analysis, Writing – original draft

    Affiliations Centre for Health Policy, Melbourne School of Population and Global Health, The University of Melbourne, Carlton, Australia, Rehabilitation Services and Care Unit, Swiss Paraplegic Research, Nottwil, Switzerland, Department of Health Sciences and Health Policy, University of Lucerne, Lucerne, Switzerland

  • Hana Sabanovic,

    Roles Data curation, Writing – review & editing

    Affiliation Centre for Health Policy, Melbourne School of Population and Global Health, The University of Melbourne, Carlton, Australia

  • David Dunt,

    Roles Conceptualization, Funding acquisition, Writing – review & editing

    Affiliation Centre for Health Policy, Melbourne School of Population and Global Health, The University of Melbourne, Carlton, Australia

  • Margaret Kelaher

    Roles Conceptualization, Funding acquisition, Supervision, Writing – review & editing

    Affiliation Centre for Health Policy, Melbourne School of Population and Global Health, The University of Melbourne, Carlton, Australia

Abstract

Background

Public performance reporting (PPR) of physician and hospital data aims to improve health outcomes by promoting quality improvement and informing consumer choice. However, previous studies have demonstrated inconsistent effects of PPR, potentially due to the various PPR characteristics examined. The aim of this study was to undertake a systematic review of the impact of PPR and the mechanisms (selection and change) by which it exerts its influence.

Methods

Studies published between 2000 and 2020 were retrieved from five databases and eight reviews. Data extraction, quality assessment and synthesis were conducted. Studies were categorised into: user and provider responses to PPR and impact of PPR on quality of care.

Results

Forty-five studies were identified: 24 on user and provider responses to PPR, 14 on impact of PPR on quality of care, and seven on both. Most of the studies reported positive effects of PPR on the selection of providers by patients, purchasers and providers, quality improvement activities in primary care clinics and hospitals, clinical outcomes and patient experiences.

Conclusions

The findings provide a moderate level of evidence to support the role of PPR in stimulating quality improvement activities, informing consumer choice and improving clinical outcomes. There was some evidence of a relationship between PPR and patient experience. The effects of PPR varied across clinical areas, which may be related to the type of indicators, the level of data reported and the mode of dissemination. It is important that the design and implementation of PPR consider the perspectives of different users and the health system in which PPR operates. There is also a need to account for factors, such as the structural characteristics and culture of hospitals, that could influence the uptake of PPR.

Introduction

It is becoming increasingly common for healthcare systems internationally to measure, monitor and publicly release information about healthcare providers (i.e. hospitals and physicians) for greater transparency, to increase accountability, to inform consumers’ choice, and to drive quality improvement in clinical practice [1–3]. In theory, public performance reporting (PPR) is hypothesised to improve quality of care via three pathways: selection, change and reputation.

  • In the selection pathway, consumers compare PPR data and choose high-quality providers over low-quality providers, thereby motivating the latter to improve their performance.
  • In the change pathway, organisations identify underperforming areas, leading to performance improvement. These pathways are interconnected by providers’ motivation to maintain or increase market share [4].
  • In the reputation pathway, PPR can negatively affect the public image of a provider or an organisation. Reputational concerns will therefore motivate providers or organisations to protect or improve their public image by engaging in quality improvement activities [5].

Given these different pathways, it is not surprising that the measurement of PPR is complex. The quality indicators used (e.g. healthcare structure, processes, and patient outcomes), the mode of data publication (e.g. report cards) and the level of reporting (e.g. physician, unit or hospital level) vary widely across different healthcare systems and countries [6,7]. For example, in the United States (US) and the United Kingdom (UK), quality indicators such as mortality, infection rates, waiting times and patient experience are reported in the form of star ratings, report cards and patient narratives at the hospital and individual physician levels [8,9]. In Australia, the performance of all public hospitals is publicly reported on the MyHospitals website [10]. Quality indicators reported include infection rates, emergency department waiting times, cancer surgery waiting times and financial performance of public hospitals. Reporting to MyHospitals is mandatory for Australian public hospitals but voluntary for private hospitals. Australia does not currently report at the individual physician level [11,12].

Research on the impact of PPR is growing, as reflected in the large number of reviews published [7,13–22]. Previous reviews suggest that PPR has limited impact on consumers’ healthcare decision-making and patients’ health outcomes [16,22]. In contrast, there is evidence that PPR exerts the greatest effect among healthcare providers by stimulating quality improvement activities [13,15,23].

Yet the effects of PPR on healthcare processes, consumers’ healthcare choices and patients’ outcomes remain uncertain or inconsistent. For example, PPR affects consumers’ selection of health plans but not their selection of individual physicians or hospitals [13,15,20]. This may be because consumers do not always perceive differences in the quality of healthcare providers, and they do not trust or understand PPR data [23,24]. Furthermore, it is often not clear how consumers’ healthcare choices are constrained by systems-level barriers (e.g. lack of choice due to geographical distance) and socio-cultural barriers (e.g. poor consumer health literacy). This uncertainty reflects the complexity surrounding PPR, including the different healthcare choices consumers are asked to make and how these can ultimately influence various health outcomes.

Further, regarding healthcare providers’ behaviours and quality improvement, there is some discrepancy in the literature on this position [16,22]. The discrepancy among the reviews likely reflects the various characteristics of PPR examined. For example, some reviews focused on the mechanisms by which PPR exerts influence [7,15] without differentiating between the healthcare choices consumers are asked to make, while others focused on impact [18,19] and included a variety of patient outcomes across a range of healthcare settings or conditions. Furthermore, differences in the design and implementation of PPR (e.g. level of reporting, indicators and dissemination), type of audience (e.g. consumers, providers, and purchasers) and primary purpose (e.g. selection of a physician or hospital, and change in clinical processes) are likely to lead to different effects (Table 1).

Table 1. Classification of public performance reporting by mechanisms and audiences.

https://doi.org/10.1371/journal.pone.0247297.t001

As a point of departure from previous reviews, the goal of this systematic review was to address these discrepancies. It does so by differentiating the effects of PPR by users and providers across various healthcare settings and conditions to provide greater conceptual clarity surrounding the impacts and utility of PPR. Therefore, the aim of this systematic review was to provide an updated evidence summary of the impact of PPR on physicians and hospitals’ performance, focusing on the mechanisms (selection and change pathways) by which PPR exerts its influence.

Methods

The study was conducted as part of a wider review of the impacts of PPR on outcomes among healthcare purchasers (public and private), providers (organisations and individual physicians) and consumers. The results of the other parts of the wider review are reported elsewhere [20,21]. The review was performed in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (S1 Checklist) guidelines [25].

Search strategy

Five databases were searched from their dates of inception to 16th April 2015: Medline; Embase; PsycINFO; the Cumulative Index to Nursing and Allied Health Literature (CINAHL); and Evidence-Based Medicine Reviews (EBMR). The search strategy was based on Ketelaar et al. [16] (limited to experimental study designs) and extended to include observational study designs if they conformed to the Meta-analysis of Observational Studies in Epidemiology guidelines (MOOSE) [26]. Search terms were amended with the assistance of a librarian (see S1 Appendix for Medline search strategy). Results of searches were downloaded into Endnote X9.

A second search of the above databases was conducted on 14th November 2016 to include non-standard epidemiological descriptors (e.g. from the health economics literature) that the previous search did not capture: experimental studies; non-randomised studies; observational cohort; time trend; and comparative studies. Articles from previous systematic reviews of PPR were also screened [6,13,15–17,27,28]. A third search of the above databases was conducted on 3rd April 2020 to include additional studies published from 2016 to 2020.

Inclusion and exclusion criteria

Articles were included if: 1) they examined the effect of PPR on outcomes among purchasers, providers or consumers; and 2) the study design was observational or experimental. Articles were excluded if: 1) performance reporting was not publicly disclosed; 2) they reported hypothetical choices; 3) the study design was qualitative; 4) they were published in a language other than English; 5) they were published prior to the year 2000, as the practice of PPR has changed significantly since then due to the widespread use of online PPR; 6) pay-for-performance effects were not disaggregated from PPR; 7) they involved long-term care (e.g. nursing homes); or 8) they were perceived to be of low methodological quality following risk of bias assessment.

Two authors independently screened titles and abstracts for relevance and then assessed the eligibility of the full-text articles using a screening guide adapted from a previous meta-analysis [29] (see S2 Appendix). The methodological quality assessment was then conducted on the final selection of eligible full-text articles by two authors. Discrepancies between authors were discussed and, if they remained unresolved, a third author made the final decision.

Methodological quality assessment

The methodological quality of observational studies was assessed with the Newcastle-Ottawa Scale (NOS) [30] and that of RCTs with the Cochrane Collaboration’s tool for assessing risk of bias [31]. The NOS uses a star system based on three domains: the selection of the study groups; the comparability of the groups; and the ascertainment of the exposure/outcome of interest. The Cochrane Collaboration’s tool uses six domains to evaluate the methodological quality of RCTs: selection bias; performance bias; detection bias; attrition bias; selective reporting; and other sources of bias. The methodological quality of each study was graded as low, moderate or high (see S3 Appendix). For cohort and quasi-experimental studies, a maximum of nine stars can be awarded: nine stars was graded as high methodological quality; six to eight stars as moderate; and five or fewer stars as low. For cross-sectional studies, a maximum of 10 stars can be awarded: nine to 10 stars was graded as high methodological quality; five to eight stars as moderate; and four or fewer stars as low.
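The star-to-grade mapping above can be expressed as a small function. This is an illustrative sketch only (the function and design labels are ours, not part of the review protocol), assuming the contiguous thresholds described in the text:

```python
def grade_nos(stars: int, design: str) -> str:
    """Map a Newcastle-Ottawa Scale star count to a quality grade.

    Thresholds follow the scheme in the text: cohort and
    quasi-experimental designs can receive up to 9 stars,
    cross-sectional studies up to 10.
    """
    if design in ("cohort", "quasi-experimental"):
        if stars >= 9:
            return "high"       # nine stars
        if stars >= 6:
            return "moderate"   # six to eight stars
        return "low"            # five or fewer stars
    if design == "cross-sectional":
        if stars >= 9:
            return "high"       # nine to 10 stars
        if stars >= 5:
            return "moderate"   # five to eight stars
        return "low"            # four or fewer stars
    raise ValueError(f"unknown study design: {design}")
```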

Data extraction and synthesis

The following information was extracted from the articles: authors; year of publication; country; study design; study population; sample size; type of PPR data; outcome measures; statistical analysis; and findings, including estimates. Studies considered to be of low methodological quality were excluded from the synthesis; however, the characteristics and main findings of these studies are available in S4 Appendix. Given the high level of methodological heterogeneity and the heterogeneity of outcomes between the studies, no meta-analysis was performed. Instead, a systematic critical synthesis of the moderate and high methodological quality studies, based on the S1 Checklist guidelines, was conducted. The strength of the evidence was determined using a rating system similar to that used in previous systematic reviews [7,19]. A positive effect was defined as one in favour of PPR. We considered the evidence strong if all studies showed significant positive effects, moderate if more than half of the studies showed significant positive effects, low if a minority of studies showed significant positive effects, and inconclusive if findings were inconsistent across the studies (i.e. half of the studies showed significant positive effects and the other half significant negative effects) or insufficient (i.e. fewer than two studies).
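The evidence-rating rule can be made concrete with a short sketch. The names are illustrative and the review applied this rule as a qualitative judgement, not as code; the sketch simply restates the thresholds given above:

```python
def rate_evidence(n_positive: int, n_total: int) -> str:
    """Classify strength of evidence from the number of studies
    showing significant positive effects, per the rating rule above."""
    if n_total < 2:
        return "inconclusive"   # insufficient findings
    if n_positive == n_total:
        return "strong"         # all studies positive
    if n_positive > n_total / 2:
        return "moderate"       # more than half positive
    if n_positive < n_total / 2:
        return "low"            # a minority positive
    return "inconclusive"       # exactly half: inconsistent findings
```

For example, under this rule ten positive studies out of nineteen would be rated as moderate evidence.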

Results

Inclusion of studies and quality assessment

In the first and second searches, 8,627 articles were identified from five databases and eight previous reviews, resulting in 5,961 articles following removal of duplicates and those published prior to 2000 (Fig 1). In the third search, an additional 12,087 articles were identified from the five databases, resulting in 9,603 articles following removal of duplicates. A total of 15,564 titles and abstracts were screened, with 15,447 articles excluded, leaving 117 articles for full-text screening. Following full-text screening, a total of 74 articles were included in the synthesis (59 articles from the first two searches and 15 from the third). Articles were categorised into three groups: 1) health plans; 2) coronary artery bypass graft (CABG) and percutaneous coronary intervention (PCI); and 3) physicians and hospitals’ performance. In this paper, results for physicians and hospitals’ performance (n = 45) are presented. Nine studies were rated as high methodological quality and the rest as moderate. The results are presented by mechanisms and impact of PPR:

  • user and provider responses to PPR (selection of patients, physicians and hospitals including adverse selection, and organisational quality improvement) and
  • impact of PPR on quality of care (improvement in clinical outcomes and patient experiences).

Seven studies examined both the mechanisms and impact of PPR and are therefore included in both sections [32–38].

Description of studies

Characteristics of the 45 studies are described in Table 2. Of these, nine studies examined the selection of patients, physicians and hospitals [39–47], 15 examined organisational quality improvement [48–62], and 14 examined the impact of PPR [63–76]. Seven studies investigated both user and provider responses to PPR and the impact of PPR [32–38]. All studies were published between 2002 and 2020. All studies were published in academic journals, except for three PhD dissertations [51,71,75]. Studies were predominantly conducted in the US (n = 26), followed by five from China, two each from Canada, Japan, the Netherlands and the UK, and one each from Australia, Germany, India, Italy, Korea and Taiwan. Study designs included quasi-experimental (n = 26), cohort (n = 8), experimental (n = 9) and cross-sectional (n = 2) studies. Quasi-experimental studies involved interrupted time series with/without comparison (n = 9) and controlled/non-controlled before-after designs (n = 17). The study populations comprised patients in primary care clinics (n = 7), in outpatient medical care (n = 2), in units within hospitals or in hospitals (n = 29), consumers (n = 2), providers (n = 4) and purchasers (n = 1). The most common types of PPR were report cards (e.g. CABG report cards) (n = 12), reports (n = 13) and hospital comparison websites (e.g. the Centers for Medicare & Medicaid Services (CMS) website) (n = 13). PPR quality indicators were predominantly reported at the hospital level (n = 30), followed by the individual physician/primary care clinic level (n = 14) and the village level (n = 1). Nineteen studies examined mandatory PPR, 10 voluntary PPR, 15 compared PPR with no PPR and one compared mandatory with voluntary PPR.

Table 2. Characteristics and main findings of included studies.

https://doi.org/10.1371/journal.pone.0247297.t002

User and provider responses.

Selection of patients, physicians and hospitals. Eight studies examined the effects of PPR on the selection of physicians and hospitals by patients, consumers, healthcare purchasers and providers [39–42,44–47]. Two studies examined whether PPR had detrimental effects via adverse selection of patients by physicians [36,43]. Yu et al. [45], Mukamel et al. [40], and Martino et al. [41] reported positive effects of PPR on the selection of hospitals, cardiac surgeons, and primary healthcare physicians by patients/consumers. Gouveritch et al. [46] reported no effects of PPR on the selection of hospitals with lower caesarean delivery rates by pregnant women. Similarly, Fabbri et al. [47] reported no effects of PPR on the proportion of women who received maternal and neonatal healthcare services. Epstein et al. [44] reported no effect of PPR on the selection of cardiac surgeons by physicians when referring patients. In contrast, Ikkersheim and Kohlmann [42] reported that publicly reporting quality indicators and patient experiences positively influenced general practitioners’ choice of hospital when referring patients. Mukamel et al. [39] reported that cardiac surgeons with low risk-adjusted mortality rates (RAMR) were more likely to be contracted by managed care organisations than those with high RAMR. Werner et al. [43] reported that publicly reporting individual surgeons’ performance resulted in an increase in racial and ethnic disparities in CABG use in New York compared with other States without PPR; surgeons avoided operating on high-risk patients. In contrast, Vallance et al. [36] found no evidence that publicly reporting individual surgeons’ 90-day postoperative mortality in elective colorectal cancer surgery led to risk-averse behaviours in England: the proportion of patients undergoing elective colorectal cancer surgery before and after the introduction of PPR remained the same.
In summary, half of the studies reported positive effects of PPR, one reported a negative effect and the rest reported no effect. These findings suggest a moderate level of evidence for PPR and the selection of patients, physicians and hospitals.

Organisational quality improvement. Twenty-one studies examined the effects of PPR on quality improvement activities in primary care clinics (n = 7), outpatient medical care (n = 2) and hospitals (n = 12). Among primary care clinics, Smith et al. [53] found that publicly reporting diabetes care performance led to an increase in the number of diabetes quality improvement interventions implemented. Interventions included patient-directed (e.g. education), provider-directed (e.g. performance feedback) or system-directed (e.g. guidelines) interventions. Similarly, Leerapan [51] found that publicly reporting the rankings of primary care clinics improved the quality of diabetes care provided, in particular among lower-ranked clinics. Wang et al. [56], Yang et al. [57], and Lui et al. [60] found that publicly reporting both primary care clinics’ and individual physicians’ prescription rates reduced their prescription rates of antibiotics and injections, thereby potentially reducing medication overuse. Using the same data derived from Lui et al.’s study [60], Tang et al. stratified the analysis by health condition [61] and by physicians’ prescribing performance level [62]. The effect of PPR varied by health condition, with a reduction in antibiotic and injection prescriptions for patients with gastritis compared with patients with bronchitis or hypertension [61]. There was a decrease in the rate of antibiotic prescriptions following PPR across all physician prescribing performance levels, with the effect largely attributed to average and high antibiotic prescribers [62].

Among outpatient medical care, Lind et al. [59] found that publicly reporting an imaging efficiency indicator resulted in an improvement in the appropriate use of conservative therapy and imaging among patients with low back pain. In contrast, Bishop et al. [52] found no associations between PPR of practice measures and 12 quality indicators related to preventative care, diabetes mellitus, heart failure and coronary artery disease, except for one preventative care measure—weight reduction counselling for overweight patients (see S5 Appendix for the full list of measures).

Among hospitals, Besley et al. [48] reported that mandatory PPR with targets and sanctions (naming and shaming) in England reduced waiting times for elective care, compared with Wales, which did not implement these initiatives. However, there was some evidence of moving patients around to meet targets in England. Similarly, Reinecke et al. [35] found that PPR in California reduced post-acute care use but increased acute care hospital transfer rates among intensive care unit (ICU) patients compared with other States without PPR. Werner et al. [33] reported an improvement in all process measures for acute myocardial infarction, heart failure and pneumonia following PPR, particularly in hospitals with low baseline performance (see S5 Appendix). Similarly, both Kraska et al. [58] and Selvaratnam et al. [37] found an improvement in care delivery processes following PPR of clinical quality indicators (see S5 Appendix). Renzi et al. [54] and Ukawa et al. [55] reported that hospitals that participated in PPR performed better on several process measures than hospitals that did not (see S5 Appendix). Specifically, Renzi et al. [54] found that PPR resulted in an increase in PCI and hip fracture operations within 48 hours but had minimal impact on caesarean section rates. Jang et al. [50] also reported no impact of PPR on caesarean section rates beyond the first release of PPR. Werner et al. [49], Tu et al. [32], Dahlke et al. [34], and Yamana et al. [38] reported limited or no impact of PPR on a number of process measures related to heart attack and failure, pneumonia and surgical care (see S5 Appendix). In particular, Werner et al. [49] noted that hospitals with high percentages of Medicaid patients had smaller improvements in hospital performance than those with low percentages of Medicaid patients.
In summary, all studies reported positive effects of PPR for primary care (although the findings of three studies appeared to be derived from one RCT [60–62]), and half of the studies reported positive effects of PPR for outpatients and hospitals. These findings suggest strong and moderate levels of evidence for PPR and quality improvement activities in primary care and hospitals, respectively, but inconclusive evidence for outpatients given the low number of studies.

Impact of PPR on quality of care.

Improvement in clinical outcomes. Nineteen studies examined the impact of PPR on clinical outcomes. The most common clinical outcome indicator was mortality (n = 16) [32–38,63–68,73–75]. Seven studies reported no effects of PPR on mortality in general inpatient care [34,38,64,65,73,74] and intensive care [35]. In contrast, six studies reported that PPR reduced mortality in general inpatient care [32,33,36,66,67] and perinatal care [37]. Three studies showed mixed effects of PPR on mortality depending on the health condition [63,68] and the level of reporting (State or Federal) [75].

Other clinical outcome indicators included readmission rates, infection rates and falls. Werner et al. [33] reported that PPR was associated with a decline in 30-day readmission rates among patients with AMI, heart failure or pneumonia, whilst both Dahlke et al. [34] and DeVore et al. [73] reported no PPR effects. The conflicting results may be due to the different time periods investigated. Both Danemann et al. [69] and Marsteller et al. [70] reported that mandatory PPR of hospital-acquired infection rates reduced infection rates in hospitals. Similarly, Noga et al. [71] found that hospitals that volunteered to publicly report their patients’ falls with and without injuries had a decrease in patients’ falls.

Improvement in patient experience. Three studies examined the impact of PPR on non-clinical outcomes such as patient experience. Mann et al. [76] reported that patient satisfaction with physician communication increased following mandatory public reporting of a survey of patients’ perceptions of hospital care, with the largest improvement occurring among hospitals in the lowest quartile of satisfaction scores. Ikkersheim et al. [72] found that hospitals that were ‘forced’ by health plan insurers to publicly publish their Consumer Quality Index results had better patient experiences than those that did not. In contrast, Dahlke et al. [34] reported mostly no effects of PPR on patient experience (with the exception of “definitely recommending the hospital”) between hospitals that volunteered to publicly publish their performance and those that did not. In summary, the majority of the studies reported positive effects of PPR on clinical outcomes, including mortality (six of 16), readmission rates, infection rates and falls (four of six), and patient experience (two of three). These findings suggest a moderate level of evidence for PPR and clinical outcomes and some evidence for PPR and patient experience, albeit from a low number of studies.

Discussion

This systematic review summarises the evidence on the mechanisms and impacts of PPR on physicians and hospitals’ performance. Among user and provider behavioural responses studies, five of 10 studies reported a positive effect of PPR on the selection of healthcare providers by patients, physicians and purchasers; 15 of 21 studies reported positive effects of PPR on quality improvement activities in primary care clinics and hospitals. Among impacts of PPR studies, 10 of 19 studies reported positive effects of PPR on clinical outcomes and two of three studies on patient experience. Only one study reported a negative effect of PPR on the selection of patients by healthcare providers.

Previous PPR reviews have yielded conflicting results; early reviews demonstrated associations between PPR and improvements in processes of care and clinical outcomes [13,15,23], although follow-up reviews showed limited associations [16,22]. There were also inconsistent associations between PPR and the selection of healthcare providers [13,15,16,19,22]. Given that PPR may exert different effects across healthcare settings and health conditions, our reviews extend these results by considering the effects of PPR on procedures for a specific condition [21], consumer choice pertaining to health plans [20], and physicians and hospitals’ performance focusing on the mechanisms and impacts of PPR, the findings of which are reported here. Consistent with previous reviews [13,15,18,19,23], we found that PPR stimulates quality improvement activities and improves clinical outcomes, including mortality.

The majority of studies showed that PPR positively influenced the selection of healthcare providers (i.e. individual physicians, hospitals) by patients, providers and purchasers. This is consistent with the findings of reviews conducted by Chen et al. [15] and Vukovic et al. [19] but not others [13,22]. The discrepancy between reviews likely reflects the healthcare choices consumers and healthcare providers are asked to make, as some reviews incorporated selection of healthcare providers, health plans and nursing homes together, and used hospitals’ surgical volume and market share as measures of selection. All studies included in our review focused on actual consumer choice behaviour in the hospital and physician sector of health services. The findings related to the selection of health plans [20] and market share associated with CABG/PCI [21] are reported separately. Although the findings suggest that consumers are aware of PPR data, understand it and use it to make an informed choice, the results warrant cautious interpretation given the small number of studies across consumer types. Across the studies, quality indicators in the report cards included a mix of process and outcome measures for a specific health condition or procedure, reported at the individual physician or hospital level. Previous studies have demonstrated that patients are interested in indicators of the interpersonal aspects of care (e.g. patient experience and satisfaction) reported at the individual physician level [77–79], whereas providers and purchasers considered process and outcome measures (e.g. surgical complications and mortality) to be important indicators that should be publicly reported [80,81]. Consumer-focused frameworks and best practice guidelines have also been developed for presenting, promoting and disseminating PPR data to improve their comprehensibility and usability [24,82].

The effects of PPR on quality improvement activities appeared to depend on the healthcare setting, the type of process indicators publicly reported and the clinical areas for which they are reported. Among primary care clinics, publicly reporting individual physician and clinic care performance and ranking their performance resulted in positive behavioural changes [51,53,56,57,60–62]. This suggests that PPR improves performance via a feedback loop. Similar positive effects of PPR on quality improvement activities were observed in hospitals; however, the effects varied across clinical areas [33,35,37,48,54,55,58]. The differential effects of PPR across clinical areas may be related to the type of process indicators reported, as some may be more amenable to behavioural change. For example, the cardiac and orthopaedic process measures focus on the proportion of patients treated with a surgical procedure within a recommended time, or given medication at admission to or discharge from hospital, which may allow for timely, targeted behavioural change [33,54,55]. In comparison, obstetric and respiratory process measures, such as the proportion of women with a primary caesarean and pneumococcal vaccination rates, quantify performance but provide no guidance on how to improve caesarean or pneumococcal vaccination rates [34,46,49,50,54]. Given that there can be substantial variation in quality of care across the different departments of a hospital, implementing and tracking relevant evidence-based process metrics for individual clinical areas is necessary to drive quality improvement and reduce variation in care delivery.

Although process measures may drive quality improvement activities, it remains unclear whether they lead to better clinical outcomes. This is likely to depend on whether the process measures are evidence-based. Evidence-based process measures generally reflect accepted recommendations for clinical practice [83]. Furthermore, strict adherence to process measures, in the form of ‘targets’, may be detrimental to clinical outcomes and lead to unintended consequences such as ‘gaming’ (i.e. shuffling of patients to meet targets), ‘cream skimming’ (i.e. admitting healthier patients) and risk aversion. Two of the three studies in our review that examined such behaviours found evidence of gaming associated with targets and sanctions [48] and of risk-averse behaviour by surgeons [43]. In support, previous reviews have reported similar unintended and negative consequences of PPR on patients and healthcare providers [84–86]. To mitigate the unintended consequences of PPR, Marshall et al. [87] suggested a broader assessment of performance beyond process measures, encompassing measures that reflect the effectiveness and quality of care, such as clinical outcomes, patient experience and satisfaction. Custers et al. [88] proposed using incentive structures (e.g. payments for meeting targets or penalties for gaming) alongside PPR to influence healthcare providers’ attitudes. In support, a previous US study found that hospitals subject to both PPR and financial incentives improved quality more than hospitals engaged only in PPR [89].

The majority of studies showed a positive impact of PPR on clinical outcomes, in particular mortality. Mortality is considered an objective endpoint that is easily measured and understood by the public [90]. Despite this, it is unclear what quality improvement activities individual physicians and hospitals undertook to improve their mortality rates, as clinical outcome measures alone can make it difficult to identify specific gaps in care. As such, measurement of processes rather than outcomes of clinical care has been proposed as a more reliable and useful approach for quality improvement purposes [91]. However, as discussed above, relying solely on process measures may be more susceptible to unintended consequences. A balance of relevant process and outcome measures is preferable to minimise negative consequences [87].

Other clinical outcomes such as functioning (i.e. the lived experience of health) [92], health-related quality of life, patient-reported outcomes and patient experiences were rarely investigated. In our review, only three studies [34,72,76] examined patient experience, and two found positive effects of PPR on patient experience [72,76]. Previous reviews reported positive effects of PPR on patient experience, but this was limited to one or two studies involving hospital reimbursements linked to patient experience scores [19,27]. We did not include pay-for-performance studies in our review as their effects could not be disaggregated from those of PPR. Given the growth of patient-centred care, many healthcare systems, such as those in the US and UK, now publish inpatient hospital experience data [3]. The impact of publishing these data appears positive to date, but further empirical studies are warranted given the low number of studies.

Additional factors that could influence the impact of PPR on quality improvement activities and clinical outcomes include the structural characteristics and culture of hospitals. Two studies in our review examined hospital structural characteristics [34,55]. Both Ukawa et al. [55] and Dahlke et al. [34] found that hospitals that voluntarily participated in PPR had higher baseline performance. Aside from this, there were few differences in structural characteristics between hospitals that voluntarily participated in PPR and those that did not. This suggests that a hospital’s past performance may influence the initial decision to voluntarily participate in PPR but may not be the sole driver. Previous studies have shown that hospitals with a strong quality and safety culture were more likely to engage in quality improvement activities and tended to have higher publicly reported hospital rating scores [93,94]. A qualitative study of hospital medical directors’ views identified strong leadership and organisational cultures that encourage continuous quality improvement and learning as important for open and transparent reporting of performance data [95].

Implications

Public reporting of hospital performance data has become a common health policy tool to inform consumer healthcare choice and to stimulate and maintain quality improvement in clinical practice. When devising a PPR strategy, health policy makers must identify the intended audience (i.e. consumers, providers, purchasers) and the objectives (i.e. selection, quality improvement, transparency/accountability) of PPR to increase its effectiveness [96].

For consumers, PPR can facilitate choice in selecting a physician or a hospital that appears to have better outcomes if 1) the indicators are disseminated through appropriate channels to increase reach and awareness and 2) the indicators reported meet consumers’ decision-making needs. Whether these prerequisites are met depends on consumer characteristics that influence information-seeking and decision-making behaviours, such as health condition (urgency of care), level of education and health literacy. As such, health policy makers responsible for the development and dissemination of PPR must ensure that the indicators publicly reported are relevant and meaningful, publicised and published in accessible formats, easily understood and made readily available [97].

For providers, PPR data can be used to assess the performance of their organisation or individual staff members when implementing quality improvement initiatives. PPR is a complex improvement intervention, and the actual ‘change’ mechanism that translates PPR into quality improvement initiatives is not yet well understood. Understanding this mechanism is key to identifying which quality improvement initiatives work under what conditions, and will ensure that lessons are transferred and adopted across healthcare settings. However, PPR is only one strategy for the continuous improvement of hospital quality and safety. The US and several European countries are increasingly moving toward pay-for-performance as a quality improvement strategy [98,99].

Finally, an assessment of whether PPR will be successful needs to consider the healthcare delivery system in which PPR operates. Most of the literature included in this review was derived from the experience of PPR in the US, which may not be applicable to other countries. The US healthcare system is a private insurance system that promotes healthcare choice and market competition. In contrast, the UK and Australia have universal healthcare systems with dual public and private healthcare sectors, where voluntary private insurance reduces access fees. Although citizens have free access to the universal public system, they may have less choice of medical specialist and place of care than in the private system. Furthermore, in these countries and other European countries, general practitioners (GPs) generally act as gatekeepers to secondary care, with patients requiring their referral for access [100]. There have been few studies examining whether PPR of hospital data influences GPs’ referral behaviour [80,101,102]. Given the growth of PPR outside of the US, health policy makers must consider potential users of PPR beyond patients, such as GPs in their intermediary role connecting patients with hospitals.

Strengths and limitations

Whilst the search was extensive and included a wide range of relevant electronic databases, it did not include studies in languages other than English, grey literature or qualitative studies. Studies that did not explicitly describe their research design may also have been missed. To minimise this risk, the search strategy was developed with the assistance of a librarian, and a second search was conducted to include non-standard epidemiological terminology. Although some indication of risk of bias can be drawn from the methodological quality summary scores, they are a subjective judgement and have previously been criticised for ascribing equal weight to each of the nominated criteria [103]. Given the lack of consensus on the best tool to assess the methodological quality of observational studies, the NOS was considered appropriate. We acknowledge that the methodological quality of the included studies should be interpreted with caution. We attempted to disentangle the effects of PPR by reporting the results by mechanisms and impacts across a range of users, healthcare settings and clinical areas. However, the small number of studies across users and clinical areas limits the strength of the evidence, and the results warrant cautious interpretation. Due to the high level of heterogeneity in settings and outcomes between studies, it was not possible to pool the results and conduct a meta-analysis. Finally, the literature has overwhelmingly been derived from one country and one health system (the US).

In summary, we found moderate evidence that PPR informed choice of healthcare providers, increased quality improvement activities and improved clinical outcomes and patient experience (albeit based on a low number of studies), with some variation across healthcare settings and conditions. Ultimately, for PPR to be effective, its design and implementation must consider the perspectives and needs of different users, as well as the values and goals of the healthcare system in which PPR operates. There is also a need to account for systems-level barriers, such as the structural characteristics and culture of hospitals, that could influence the uptake of PPR. Accounting for these contextual elements has the potential to substantially increase the impact of PPR in meeting its objectives of increasing transparency and accountability within the healthcare system, informing healthcare decision-making and improving the quality of healthcare services.

Supporting information

S4 Appendix. Data extraction for studies considered to be of low methodological quality following risk of bias assessment.

https://doi.org/10.1371/journal.pone.0247297.s005

(DOCX)

S5 Appendix. Quality indicators reported in the studies.

https://doi.org/10.1371/journal.pone.0247297.s006

(DOCX)

Acknowledgments

The authors thank Dr Stuart McLennan who conducted the first search, Dr Angela Nicholas and Andrea Timothy for screening the titles and abstracts from the first search, Angela Zhang for conducting risk of bias assessment and data extraction of studies from the third search as a second assessor, and Jim Berryman for assisting in the search strategies.

References

1. Smith PC, Mossialos E, Papanicolas I. Performance measurement for health system improvement: experiences, challenges and prospects. Cambridge University Press; 2008.
2. Cacace M, Ettelt S, Brereton L, Pedersen JS, Nolte E. How health systems make available information on service providers: Experience in seven countries. Rand Health Quarterly. 2011;1(1):11. pmid:28083167
3. Rechel B, McKee M, Haas M, Marchildon GP, Bousquet F, Blümel M, et al. Public reporting on quality, waiting times and patient experience in 11 high-income countries. Health Policy. 2016;120(4):377–83. pmid:26964783
4. Berwick DM, James B, Coye MJ. Connections between quality measurement and improvement. Medical Care. 2003;41(1):I-30–I-8. pmid:12544814
5. Hibbard JH, Stockard J, Tusler M. Hospital performance reports: Impact on quality, market share, and reputation. Health Affairs. 2005;24(4):1150–60. pmid:16012155
6. Faber M, Bosch M, Wollersheim H, Leatherman S, Grol R. Public reporting in health care: How do consumers use quality-of-care information? A systematic review. Medical Care. 2009;47(1):1–8. pmid:19106724
7. Totten AM, Wagner J, Tiwari A, O’Haire C, Griffin J, Walker M. Closing the quality gap: Revisiting the state of the science (vol. 5: public reporting as a quality improvement strategy). Evidence Report/Technology Assessment. 2012(2085):1. pmid:24422977
8. Marshall MN, Shekelle PG, Davies HT, Smith PC. Public reporting on quality in the United States and the United Kingdom. Health Affairs. 2003;22(3):134–48. pmid:12757278
9. Chatterjee P, Maddox KJ. Patterns of performance and improvement in US Medicare’s Hospital Star Ratings, 2016–2017. BMJ Quality & Safety. 2019;28(6):486–94.
10. AIHW. MyHospitals 2017 [Available from: http://www.myhospitals.gov.au/].
11. Canaway R, Bismark MM, Dunt D, Kelaher MA. Public reporting of clinician-level data. The Medical Journal of Australia. 2017;207(6):231–2. pmid:28899319
12. Ahern S, Hopper I, Evans SM. Clinical quality registries for clinician-level reporting: strengths and limitations. Medical Journal of Australia. 2017;206(10):427–9. pmid:28566065
13. Fung CH, Lim Y-W, Mattke S, Damberg C, Shekelle PG. Systematic review: the evidence that publishing patient care performance data improves quality of care. Annals of Internal Medicine. 2008;148(2):111–23. pmid:18195336
14. Schauffler HH, Mordavsky JK. Consumer reports in health care: Do they make a difference? Annual Review of Public Health. 2001;22(1):69–89.
15. Chen J. Public reporting of health system performance: A rapid review of evidence on impact on patients, providers and healthcare organisations. Evidence Check. 2010.
16. Ketelaar NA, Faber MJ, Flottorp S, Rygh LH, Deane KH, Eccles MP. Public release of performance data in changing the behaviour of healthcare consumers, professionals or organisations. The Cochrane Library. 2011. https://doi.org/10.1002/14651858.CD004538.pub2 pmid:22071813
17. Mukamel DB, Haeder SF, Weimer DL. Top-down and bottom-up approaches to health care quality: The impacts of regulation and report cards. Annual Review of Public Health. 2014;35:477–97. pmid:24159921
18. Campanella P, Vukovic V, Parente P, Sulejmani A, Ricciardi W, Specchia ML. The impact of public reporting on clinical outcomes: A systematic review and meta-analysis. BMC Health Services Research. 2016;16(1):296. pmid:27448999
19. Vukovic V, Parente P, Campanella P, Sulejmani A, Ricciardi W, Specchia ML. Does public reporting influence quality, patient and provider’s perspective, market share and disparities? A review. The European Journal of Public Health. 2017;27(6):972–8. pmid:29186463
20. Kelaher M, Prang K-H, Sabanovic H, Dunt D. The impact of public performance reporting on health plan selection and switching: A systematic review and meta-analysis. Health Policy. 2019;123(1):62–70. pmid:30340906
21. Dunt D, Prang K-H, Sabanovic H, Kelaher M. The impact of public performance reporting on market share, mortality, and patient mix outcomes associated with coronary artery bypass grafts and percutaneous coronary interventions (2000–2016): A systematic review and meta-analysis. Medical Care. 2018;56(11):956–66. pmid:30234769
22. Metcalfe D, Rios Diaz A, Olufajo O, Massa M, Ketelaar N, Flottorp S, et al. Can the public release of performance data in health care influence the behaviour of consumers, healthcare providers, and organisations? Cochrane Database of Systematic Reviews. 2018(9).
23. Marshall MN, Shekelle P, Leatherman S, Brook R. The public release of performance data: What do we expect to gain? A review of the evidence. JAMA. 2000;283(14):1866–74. pmid:10770149
24. Hibbard JH, Greene J, Daniel D. What is quality anyway? Performance reports that clearly communicate to consumers the meaning of quality of care. Medical Care Research and Review. 2010;67(3):275–93. pmid:20093399
25. Moher D, Liberati A, Tetzlaff J, Altman DG. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. Annals of Internal Medicine. 2009;151(4):264–9. pmid:19622511
26. Stroup DF, Berlin JA, Morton SC, Olkin I, Williamson GD, Rennie D, et al. Meta-analysis of observational studies in epidemiology: A proposal for reporting. JAMA. 2000;283(15):2008–12. pmid:10789670
27. Berger ZD, Joy SM, Hutfless S, Bridges JF. Can public reporting impact patient outcomes and disparities? A systematic review. Patient Education and Counseling. 2013;93(3):480–7. pmid:23579038
28. Pearse J, Mazevska D. The impact of public disclosure of health performance data: A rapid review. Sydney: Sax Institute. 2010.
29. Paradies Y, Ben J, Denson N, Elias A, Priest N, Pieterse A, et al. Racism as a determinant of health: A systematic review and meta-analysis. PLoS ONE. 2015;10(9):e0138511. pmid:26398658
30. Wells G, Shea B, O’Connell D, Peterson J, Welch V, Losos M, et al. The Newcastle-Ottawa Scale (NOS) for assessing the quality of nonrandomised studies in meta-analyses. The Ottawa Hospital Research Institute [Available from: http://www.ohri.ca/programs/clinical_epidemiology/oxford.asp].
31. Higgins JP, Altman DG, Gøtzsche PC, Jüni P, Moher D, Oxman AD, et al. The Cochrane Collaboration’s tool for assessing risk of bias in randomised trials. BMJ. 2011;343:d5928. pmid:22008217
32. Tu JV, Donovan LR, Lee DS, Wang JT, Austin PC, Alter DA, et al. Effectiveness of public report cards for improving the quality of cardiac care: the EFFECT study: a randomized trial. JAMA. 2009;302(21):2330–7. pmid:19923205
33. Werner RM, Bradlow ET. Public reporting on hospital process improvements is linked to better patient outcomes. Health Affairs. 2010;29(7):1319–24. pmid:20606180
34. Dahlke AR, Chung JW, Holl JL, Ko CY, Rajaram R, Modla L, et al. Evaluation of initial participation in public reporting of American College of Surgeons NSQIP surgical outcomes on Medicare’s Hospital Compare website. Journal of the American College of Surgeons. 2014;218(3):374–80.e5. pmid:24468223
35. Reineck LA, Le TQ, Seymour CW, Barnato AE, Angus DC, Kahn JM. Effect of public reporting on intensive care unit discharge destination and outcomes. Annals of the American Thoracic Society. 2015;12(1):57–63. pmid:25521696
36. Vallance AE, Fearnhead NS, Kuryba A, Hill J, Maxwell-Armstrong C, Braun M, et al. Effect of public reporting of surgeons’ outcomes on patient selection, “gaming,” and mortality in colorectal cancer surgery in England: population based cohort study. BMJ. 2018;361.
37. Selvaratnam R, Davey MA, Anil S, McDonald S, Farrell T, Wallace E. Does public reporting of the detection of fetal growth restriction improve clinical outcomes: A retrospective cohort study. BJOG: An International Journal of Obstetrics and Gynaecology. 2020;127(5):581–9. pmid:31802587
38. Yamana H, Kodan M, Ono S, Morita K, Matsui H, Fushimi K, et al. Hospital quality reporting and improvement in quality of care for patients with acute myocardial infarction. BMC Health Services Research. 2018;18(1):523. pmid:29973281
39. Mukamel DB, Weimer DL, Zwanziger J, Mushlin AI. Quality of cardiac surgeons and managed care contracting practices. Health Services Research. 2002;37(5):1129–44. pmid:12479489
40. Mukamel DB, Weimer DL, Zwanziger J, Gorthy S-FH, Mushlin AI. Quality report cards, selection of cardiac surgeons, and racial disparities: a study of the publication of the New York State Cardiac Surgery Reports. Inquiry: The Journal of Health Care Organization, Provision, and Financing. 2004;41(4):435–46. pmid:15835601
41. Martino SC, Kanouse DE, Elliott MN, Teleki SS, Hays RD. A field experiment on the impact of physician-level performance data on consumers’ choice of physician. Medical Care. 2012;50(Suppl):S65. pmid:23064279
42. Ikkersheim D, Koolman X. The use of quality information by general practitioners: does it alter choices? A randomized clustered study. BMC Family Practice. 2013;14(1):95. pmid:23834745
43. Werner RM, Asch DA, Polsky D. Racial profiling: the unintended consequences of coronary artery bypass graft report cards. Circulation. 2005;111(10):1257–63. pmid:15769766
44. Epstein AJ. Effects of report cards on referral patterns to cardiac surgeons. Journal of Health Economics. 2010;29(5):718–31. pmid:20599284
45. Yu T-H, Matthes N, Wei C-J. Can urban-rural patterns of hospital selection be changed using a report card program? A nationwide observational study. International Journal of Environmental Research and Public Health. 2018;15(9):1827.
46. Gourevitch RA, Mehrotra A, Galvin G, Plough AC, Shah NT. Does comparing cesarean delivery rates influence women’s choice of obstetric hospital? The American Journal of Managed Care. 2019;25(2):e33. pmid:30763041
47. Fabbri C, Dutt V, Shukla V, Singh K, Shah N, Powell-Jackson T. The effect of report cards on the coverage of maternal and neonatal health care: A factorial, cluster-randomised controlled trial in Uttar Pradesh, India. The Lancet Global Health. 2019;7(8):e1097–e108. pmid:31303297
48. Besley TJ, Bevan G, Burchardi K. Naming & shaming: The impacts of different regimes on hospital waiting times in England and Wales. 2009.
49. Werner RM, Goldman LE, Dudley RA. Comparison of change in quality of care between safety-net and non-safety-net hospitals. JAMA. 2008;299(18):2180–7. pmid:18477785
50. Jang WM, Eun SJ, Lee CE, Kim Y. Effect of repeated public releases on cesarean section rates. Journal of Preventive Medicine and Public Health. 2011;44(1):2. pmid:21483217
51. Leerapan B. The roles of reputation in organizational response to public disclosure of health care quality. University of Minnesota; 2011.
52. Bishop TF, Federman AD, Ross JS. Physician incentives to improve quality and the delivery of high quality ambulatory medical care. The American Journal of Managed Care. 2012;18(4):e126. pmid:22554038
53. Smith MA, Wright A, Queram C, Lamb GC. Public reporting helped drive quality improvement in outpatient diabetes care among Wisconsin physician groups. Health Affairs. 2012;31(3):570–7. pmid:22392668
54. Renzi C, Sorge C, Fusco D, Agabiti N, Davoli M, Perucci CA. Reporting of quality indicators and improvement in hospital performance: the P.Re.Val.E. Regional Outcome Evaluation Program. Health Services Research. 2012;47(5):1880–901.
55. Ukawa N, Ikai H, Imanaka Y. Trends in hospital performance in acute myocardial infarction care: a retrospective longitudinal study in Japan. International Journal for Quality in Health Care. 2014;26(5):516–23. pmid:25107593
56. Wang X, Tang Y, Zhang X, Yin X, Du X, Zhang X. Effect of publicly reporting performance data of medicine use on injection use: a quasi-experimental study. PLoS ONE. 2014;9(10):e109594. pmid:25313853
57. Yang L, Liu C, Wang L, Yin X, Zhang X. Public reporting improves antibiotic prescribing for upper respiratory tract infections in primary care: a matched-pair cluster-randomized trial in China. Health Research Policy and Systems. 2014;12(1):61. pmid:25304996
58. Kraska RA, Krummenauer F, Geraedts M. Impact of public reporting on the quality of hospital care in Germany: A controlled before–after analysis based on secondary data. Health Policy. 2016;120(7):770–9. pmid:27220517
59. Lind KE, Flug JA. Sociodemographic variation in the use of conservative therapy before MRI of the lumbar spine for low back pain in the era of public reporting. Journal of the American College of Radiology. 2019;16(4):560–9. pmid:30947888
60. Liu C, Zhang X, Wang X, Zhang X, Wan J, Zhong F. Does public reporting influence antibiotic and injection prescribing to all patients? A cluster-randomized matched-pair trial in China. Medicine. 2016;95(26). pmid:27367995
61. Tang Y, Liu C, Zhang X. Public reporting as a prescriptions quality improvement measure in primary care settings in China: variations in effects associated with diagnoses. Scientific Reports. 2016;6(1):1–8. pmid:28442746
62. Tang Y, Liu C, Zhang X. Performance associated effect variations of public reporting in promoting antibiotic prescribing practice: A cluster randomized-controlled trial in primary healthcare settings. Primary Health Care Research & Development. 2017;18(5):482–91. pmid:28606190
63. Baker DW, Einstadter D, Thomas CL, Husak SS, Gordon NH, Cebul RD. Mortality trends during a program that publicly reported hospital performance. Medical Care. 2002;40(10):879–90. pmid:12395022
64. Clough JD, Engler D, Snow R, Canuto PE. Lack of relationship between the Cleveland Health Quality Choice project and decreased inpatient mortality in Cleveland. American Journal of Medical Quality. 2002;17(2):47–55. pmid:11941994
65. Baker DW, Einstadter D, Thomas C, Husak S, Gordon NH, Cebul RD. The effect of publicly reporting hospital performance on market share and risk-adjusted mortality at high-mortality hospitals. Medical Care. 2003;41(6):729–40. pmid:12773839
66. Caron A, Jones P, Neuhauser D, Aron DC. Measuring performance improvement: total organizational commitment or clinical specialization. Quality Management in Healthcare. 2004;13(4):210–5. pmid:15532514
67. Hollenbeak CS, Gorton CP, Tabak YP, Jones JL, Milstein A, Johannes RS. Reductions in mortality associated with intensive public reporting of hospital outcomes. American Journal of Medical Quality. 2008;23(4):279–86. pmid:18658101
68. Ryan AM, Nallamothu BK, Dimick JB. Medicare’s public reporting initiative on hospital quality had modest or no impact on mortality from three key conditions. Health Affairs. 2012;31(3):585–92. pmid:22392670
69. Daneman N, Stukel TA, Ma X, Vermeulen M, Guttmann A. Reduction in Clostridium difficile infection rates after mandatory hospital public reporting: findings from a longitudinal cohort study in Canada. PLoS Medicine. 2012;9(7):e1001268. pmid:22815656
70. Marsteller JA, Hsu Y-J, Weeks K. Evaluating the impact of mandatory public reporting on participation and performance in a program to reduce central line–associated bloodstream infections: Evidence from a national patient safety collaborative. American Journal of Infection Control. 2014;42(10):S209–S15.
71. Noga P. Effects of voluntary public reporting on the nurse-sensitive measure of falls and falls with injury in hospitals: A Massachusetts perspective. University of Massachusetts Boston; 2011.
72. Ikkersheim DE, Koolman X. Dutch healthcare reform: did it result in better patient experiences in hospitals? A comparison of the consumer quality index over time. BMC Health Services Research. 2012;12(1):76. pmid:22443174
73. DeVore AD, Hammill BG, Hardy NC, Eapen ZJ, Peterson ED, Hernandez AF. Has public reporting of hospital readmission rates affected patient outcomes? Analysis of Medicare claims data. Journal of the American College of Cardiology. 2016;67(8):963–72. pmid:26916487
74. Joynt KE, Orav EJ, Zheng J, Jha AK. Public reporting of mortality rates for hospitalized Medicare patients and trends in mortality for reported conditions. Annals of Internal Medicine. 2016;165(3):153–60. pmid:27239794
75. Martin JE. Performance improvement in medical care: Do mandated reporting requirements work? New Brunswick, New Jersey: The State University of New Jersey; 2019.
76. Mann RK, Siddiqui Z, Kurbanova N, Qayyum R. Effect of HCAHPS reporting on patient satisfaction with physician communication. Journal of Hospital Medicine. 2016;11(2):105–10. pmid:26404621
77. Prang K-H, Canaway R, Bismark M, Dunt D, Miller JA, Kelaher M. Public performance reporting and hospital choice: a cross-sectional study of patients undergoing cancer surgery in the Australian private healthcare sector. BMJ Open. 2018;8(4). pmid:29703855
78. De Groot I, Otten W, Dijs-Elsinga J, Smeets H, Kievit J, Marang-van de Mheen P, et al. Choosing between hospitals: the influence of the experiences of other patients. Medical Decision Making. 2012;32(6):764–78. pmid:22546750
79. Sofaer S, Crofton C, Goldstein E, Hoy E, Crabb J. What do consumers want to know about the quality of care in hospitals? Health Services Research. 2005;40(6p2):2018–36. pmid:16316436
80. Prang K-H, Canaway R, Bismark M, Dunt D, Kelaher M. The use of public performance reporting by general practitioners: a study of perceptions and referral behaviours. BMC Family Practice. 2018;19(1):29. pmid:29433449
81. Canaway R, Bismark M, Dunt D, Kelaher M. Public reporting of hospital performance data: views of senior medical directors in Victoria, Australia. Australian Health Review. 2018;42(5):591–9. pmid:28988569
82. Bhandari N, Scanlon DP, Shi Y, Smith RA. Why do so few consumers use health care quality report cards? A framework for understanding the limited consumer impact of comparative quality information. Medical Care Research and Review. 2018:1077558718774945. pmid:29745305
83. Kahn JM, Gould MK, Krishnan JA, Wilson KC, Au DH, Cooke CR, et al. An official American Thoracic Society workshop report: Developing performance measures from clinical practice guidelines. Annals of the American Thoracic Society. 2014;11(4):S186–S95. pmid:24828810
84. Mannion R, Braithwaite J. Unintended consequences of performance measurement in healthcare: 20 salutary lessons from the English National Health Service. Internal Medicine Journal. 2012;42(5):569–74. pmid:22616961
85. Behrendt K, Groene O. Mechanisms and effects of public reporting of surgeon outcomes: a systematic review of the literature. Health Policy. 2016;120(10):1151–61. pmid:27638232
86. Werner RM, Asch DA. The unintended consequences of publicly reporting quality information. JAMA. 2005;293(10):1239–44. pmid:15755946
87. Marshall MN, Romano PS, Davies HT. How do we maximize the impact of the public reporting of quality of care? International Journal for Quality in Health Care. 2004;16(suppl_1):i57–i63. pmid:15059988
88. Custers T, Hurley J, Klazinga NS, Brown AD. Selecting effective incentive structures in health care: A decision framework to support health care purchasers in finding the right incentives to drive performance. BMC Health Services Research. 2008;8(1):66. pmid:18371198
89. Lindenauer PK, Remus D, Roman S, Rothberg MB, Benjamin EM, Ma A, et al. Public reporting and pay for performance in hospital quality improvement. New England Journal of Medicine. 2007;356(5):486–96. pmid:17259444
90. Lilford R, Pronovost P. Using hospital mortality rates to judge hospital performance: a bad idea that just won’t go away. BMJ. 2010;340:c2016. pmid:20406861
91. Brook RH, McGlynn EA, Shekelle PG. Defining and measuring quality of care: a perspective from US researchers. International Journal for Quality in Health Care. 2000;12(4):281–95. pmid:10985266
92. Stucki G, Bickenbach J. Functioning: the third health indicator in the health system and the key indicator for rehabilitation. European Journal of Physical and Rehabilitation Medicine. 2017;53(1):134–8. pmid:28118696
93. Smith SA, Yount N, Sorra J. Exploring relationships between hospital patient safety culture and Consumer Reports safety scores. BMC Health Services Research. 2017;17(1):143. pmid:28209151
94. Pham H, Coughlan J, O’Malley A. The impact of quality-reporting programs on hospital operations. Health Affairs. 2006;25(5):1412–22. pmid:16966741
95. Canaway R, Bismark M, Dunt D, Kelaher M. Medical directors’ perspectives on strengthening hospital quality and safety. Journal of Health Organization and Management. 2017;31(7–8):696–712. pmid:29187081
96. Canaway R, Bismark M, Dunt D, Prang K-H, Kelaher M. “What is meant by public?”: Stakeholder views on strengthening impacts of public reporting of hospital performance data. Social Science & Medicine. 2018;202:143–50. pmid:29524870
97. Hibbard J, Sofaer S. Best practices in public reporting no. 1: How to effectively present health care performance data to consumers. Rockville, MD: Agency for Healthcare Research and Quality; 2010.
98. Eijkenaar F. Pay for performance in health care: an international overview of initiatives. Medical Care Research and Review. 2012;69(3):251–76. pmid:22311954
99. Milstein R, Schreyoegg J. Pay for performance in the inpatient sector: A review of 34 P4P programs in 14 OECD countries. Health Policy. 2016;120(10):1125–40. pmid:27745916
100. Schoen C, Osborn R, Huynh PT, Doty M, Peugh J, Zapert K. On the front lines of care: Primary care doctors’ office systems, experiences, and views in seven countries: Country variations in primary care practices indicate opportunities to learn to improve outcomes and efficiency. Health Affairs. 2006;25(Suppl1):W555–W71.
101. Doering N, Maarse H. The use of publicly available quality information when choosing a hospital or health-care provider: The role of the GP. Health Expectations. 2015;18(6):2174–82. pmid:24673801
102. Ketelaar NA, Faber MJ, Elwyn G, Westert GP, Braspenning JC. Comparative performance information plays no role in the referral behaviour of GPs. BMC Family Practice. 2014;15(1):146. pmid:25160715
103. Higgins J, Altman D. Assessing risk of bias in included studies. In: Higgins J, Green S, editors. Cochrane handbook for systematic reviews of interventions. London: Wiley; 2008.