
A cross-country study of mis-implementation in public health practice

Abstract

Background

Mis-implementation (i.e., the premature termination or inappropriate continuation of public health programs) contributes to the misallocation of limited public health resources and the sub-optimal response to the growing global burden of chronic disease. This study seeks to describe the occurrence of mis-implementation in four countries of differing sizes, wealth, and experience with evidence-based chronic disease prevention (EBCDP).

Methods

A cross-sectional study of 400 local public health practitioners in Australia, Brazil, China, and the United States was conducted from November 2015 to April 2016. Online survey questions focused on how often mis-termination and mis-continuation occur and the most common reasons programs end and continue.

Results

We found significant differences in knowledge of EBCDP across countries, with upwards of 75% of participants from Australia (n = 91/121) and the United States (n = 83/101) reporting being moderately to extremely knowledgeable compared with roughly 60% (n = 47/76) from Brazil and 20% (n = 21/102) from China (p < 0.05). Far greater proportions of participants from China thought effective programs were never mis-terminated (12.2% (n = 12/102) vs. 1.7% (n = 2/121) in Australia, 2.6% (n = 2/76) in Brazil, and 1.0% (n = 1/101) in the United States; p < 0.05) or were unable to estimate how frequently this happened (45.9% (n = 47/102) vs. 7.1% (n = 7/101) in the United States, 10.5% (n = 8/76) in Brazil, and 1.7% (n = 2/121) in Australia; p < 0.05). A majority of participants from Australia (58.0%, n = 70/121) and a plurality from the United States (36.8%, n = 37/101) reported that programs were often mis-continued, whereas most participants from Brazil (60.5%, n = 46/76) and one third (n = 37/102) of participants from China believed this happened only sometimes (p < 0.05). The availability of funding and support from political authorities, agency leadership, and the general public were common reasons programs continued and ended across all countries. A program's effectiveness or evidence base, or lack thereof, was rarely a reason for program continuation or termination.

Conclusions

Decisions about continuing or ending a program were often seen as a function of program popularity and funding availability as opposed to effectiveness. Policies and practices pertaining to programmatic decision-making should be improved in light of these findings. Future studies are needed to understand and minimize the individual, organizational, and political-level drivers of mis-implementation.


Background

Chronic diseases like diabetes, cancer, and heart disease are the leading causes of morbidity and mortality worldwide [1, 2]. The field of evidence-based public health [3,4,5,6], namely evidence-based chronic disease prevention (EBCDP), seeks to address the challenge of chronic disease prevention by using the best available scientific evidence, applying program-planning frameworks, engaging the community in decision making, using data and information systems systematically, conducting sound evaluation, and disseminating what is learned [7, 8]. An evidence-based approach to prevention and control can substantially reduce the chronic disease burden [9,10,11].

However, despite its enhanced ability to address chronic disease, EBCDP is not as widely used as it should be [7, 8, 12]. A considerable amount of the breakdown in the pipeline between evidence production and its application by public health practitioners takes place at the state and local public health levels, which, in the United States and other countries, have substantial authority over protecting the public’s health [13]. Studies have identified barriers impeding evidence-based public health practice at the individual (e.g., lack of EBCDP knowledge), agency/organizational (e.g., absence of leadership support for EBCDP), community (e.g., absence of critical community-based partnerships), sociocultural (e.g., lack of societal demand for evidence-based programs), and political (e.g., lack of buy-in from policymakers) levels in the United States as well as in other developed and developing countries [14,15,16,17].

Mis-implementation is defined as the state in which effective interventions are prematurely ended (mis-termination) or, alternatively, ineffective interventions remain in place (mis-continuation). While some literature has examined the overuse of clinical interventions in medical settings [18,19,20,21], few studies have examined mis-implementation in public health [22]. Mis-implementation is likely an important factor in understanding the lag in EBCDP, as it points to the misallocation of resources, and inadequate funding is a commonly cited barrier to EBCDP [23,24,25]. Mis-implementation may also be evidence of a culture that does not value or prioritize evidence when making programmatic decisions [26].

This study examined the perceived occurrence of EBCDP program mis-implementation and the most common reasons for program termination and continuation in four countries: Australia, Brazil, China, and the United States. These countries were selected because they represent an array of public health structures and systems, which makes them rich sources of insight into mis-implementation around the world. They also account for a large portion of the world's chronic disease burden and population [27]. Lastly, the four countries are likely to represent different degrees of experience with EBCDP, based on the greater volume of empirical literature on the topic produced in Australia and the United States relative to Brazil and China [28,29,30,31,32,33,34,35,36,37]. We used a quantitative approach in the vein of O'Loughlin et al. [38], who used a survey design to extend the insights of the generally case-study-based literature on health promotion program sustainability.

Methods

Survey development

A 22-question, cross-sectional survey was developed based on a literature review of existing measures in EBCDP [23, 39,40,41], a guiding framework based on previous work of the research team [16, 41], and information gathered from 50 qualitative interviews of local public health practitioners across the four countries [24, 42]. The resulting instrument contained questions across seven domains derived from previous research on disseminating evidence-based interventions, such as awareness of evidence-based public health, adoption of approaches for learning about evidence-based interventions, barriers to and facilitators of implementing evidence-based interventions, and mis-implementation (Additional file 1: Table S1). Where possible (e.g., for the domains of awareness of EBCDP interventions and barriers to and facilitators of EBCDP implementation), questions were adapted from the existing literature. Four of the 22 questions addressed mis-implementation and were novel operationalizations of the mis-termination and mis-continuation constructs, covering the frequency of and reasons for each. New operationalizations were deemed necessary because no existing instrument interrogated the constructs of mis-termination and mis-continuation in a small number of questions, and because no gold standard exists against which to validate concepts of mis-implementation. Instruments examining facets of mis-implementation such as sustainability and de-adoption, which have traditionally been studied in isolation, tend to be longer than was deemed advisable for our instrument, which contained several other domains in addition to mis-implementation [43,44,45,46,47]. For example, the validated Program Sustainability Assessment Tool contains 40 items spread across eight sustainability domains [44]. The response options for the two reason questions were derived from the qualitative interviews as well as from the literature on common reasons programs are terminated and sustained.

Prior to deployment, the survey was reviewed by 13 chronic disease prevention researchers: one male co-investigator, one female coordinator, and three graduate student research assistants from the United States; two female co-investigators and one female research assistant from Australia; one male co-investigator and one male research assistant from Brazil; and two male and one female co-investigator along with one female research assistant from China. All of the authors were included among the reviewers. The survey was also forward- and backward-translated from English into Mandarin and Portuguese by members of the research team and pilot tested in each country to ensure contextual appropriateness. As a result, seven response items were found to be inapplicable to participants from China and were excluded from that version of the survey, but included in the versions used in Australia, Brazil, and the United States.

Study sample

Between November 2015 and April 2016, investigators in each country recruited convenience samples of chronic disease prevention practitioners working primarily at the local and regional levels. Sampling was largely carried out through national databases of chronic disease practitioners, which helped ensure that the geographic diversity of the invited participants reflected the distribution of public health infrastructure in each country. Investigators deployed the survey to practitioners through a link embedded in an email, and all practitioners provided informed consent. Response rates differed considerably across countries, with 18% (n = 121/672) of those emailed completing the survey in Australia, 46% (n = 76/165) in Brazil, 58% (n = 101/174) in the United States, and 87% (n = 102/117) in China. Practitioners in Australia and the United States had the option of accepting a $20 USD gift card for completing the survey; investigators deemed such financial incentives to be culturally inappropriate in Brazil and China. The ethics review boards of The University of Melbourne, Pontifica Universidade Catolica do Parana, The Hong Kong Polytechnic University, and Washington University in St. Louis approved this study.

Measures

Participants were first asked a series of sociodemographic and employment history questions (e.g., age category, gender, tenure with their organization, educational credentials). They were then asked to rate their knowledge of EBCDP on a 5-point Likert scale. Two questions operationalized mis-implementation in both its forms (i.e., mis-termination and mis-continuation). These questions asked how often mis-termination and mis-continuation occurred with response options “never,” “sometimes,” “often,” “I do not know,” and “not applicable”. Two more questions then asked for the three most common reasons programs ended and continued with roughly a dozen different response options for each as well as an open-ended “other” option.

Statistical analysis

To assess bivariate differences by country in our key outcomes of interest (how often mis-termination and mis-continuation occurred, and the reasons for program continuation and termination), as well as in individual and agency characteristics, we used chi-square tests and Fisher's exact tests. Fisher's exact test was used for contingency tables with expected cell counts of fewer than five. All analyses were conducted using SPSS version 23. Missing data were minimal and were excluded from the analyses.
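
As a rough illustration of the decision rule described above, the sketch below applies a chi-square test to a 2 × 2 table and falls back to Fisher's exact test when any expected cell count is below five. This is our own minimal Python sketch, not the authors' code (the study's analyses were run in SPSS 23), and the counts are illustrative, loosely echoing the China-versus-Australia comparison of "never" responses reported in the Results.

```python
# Illustrative sketch only; the study itself used SPSS version 23.
import numpy as np
from scipy.stats import chi2_contingency, fisher_exact

# Rows: response ("never" vs. all other responses); columns: China, Australia.
# Counts loosely echo the Results (12/102 in China, 2/121 in Australia).
table = np.array([[12, 2],
                  [90, 119]])

chi2, p, dof, expected = chi2_contingency(table)

if (expected < 5).any():
    # Expected counts too small for the chi-square approximation:
    # switch to Fisher's exact test, as the authors did for sparse tables.
    _, p = fisher_exact(table)
    print(f"Fisher's exact test: p = {p:.4f}")
else:
    print(f"Chi-square test: chi2 = {chi2:.2f}, df = {dof}, p = {p:.4f}")
```

In practice each survey item yields a larger country-by-response contingency table, but the logic is the same: compute the expected counts first, then choose the test accordingly.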

Results

Sample characteristics by country (Table 1)

The distribution of respondents differed significantly across countries by gender, age, and education (Table 1). Brazil's sample was more evenly split between female and male participants (65.8% female, n = 50/76) compared with Australia (88.4%, n = 107/121), China (71.7%, n = 71/102), and the United States (87.1%, n = 88/101), whose participants skewed female. Practitioners from Australia, the United States, and Brazil were concentrated, and fairly evenly distributed, between the ages of 30 and 59; practitioners from China tended to be younger. Practitioners from Australia and the United States more commonly had advanced graduate degrees. The survey may have been inadequately customized to the educational credentials in Brazil, given the high rate of "other" responses; most of those who endorsed this option reported working in a public health specialist role. Positions varied widely by country, reflecting the diversity of ways in which each country staffs public health.

Table 1 Differences in Participant and Agency Characteristics by Country

Evidence-based knowledge and mis-implementation frequency by country (Table 2)

We found significant differences in knowledge of EBCDP across countries, with upwards of 75% of participants from Australia (n = 91/121) and the United States (n = 83/101) reporting being moderately to extremely knowledgeable compared with roughly 60% (n = 47/76) from Brazil and 20% (n = 21/102) from China (Table 2). Significant differences in perceptions of mis-termination and mis-continuation frequency also existed. Far greater proportions of participants from China thought effective programs were never mis-terminated (12.2% (n = 12/102) vs. 1.7% (n = 2/121) in Australia, 2.6% (n = 2/76) in Brazil, and 1.0% (n = 1/101) in the United States) or were unable to estimate how frequently this happened (45.9% (n = 47/102) vs. 7.1% (n = 7/101) in the United States, 10.5% (n = 8/76) in Brazil, and 1.7% (n = 2/121) in Australia). The majority of participants from Australia (56.4%, n = 68/121) thought mis-termination occurred often, compared to 36.8% (n = 28/76) in Brazil and 40.4% (n = 41/101) in the United States. Participants from all countries found it more challenging to estimate how frequently programs were mis-continued, with 37.8% (n = 46/121) in Australia, 14.5% (n = 11/76) in Brazil, 52.0% (n = 53/102) in China, and 34.5% (n = 35/101) in the United States reporting they did not know. A majority of participants from Australia (58.0%, n = 70/121) and a plurality from the United States (36.8%, n = 37/101) reported that programs were often mis-continued, whereas most participants from Brazil (60.5%, n = 46/76) and one third (n = 37/102) of participants from China believed this happened only sometimes.

Table 2 Differences in Knowledge of EBCDP, Mis-implementation, and Reasons Programs End and Continue by Country

Reasons programs end and continue by country

To provide context to our examination of mis-implementation, we asked participants to select from a list (or suggest an alternative) the three most common reasons why programs ended and continued (Table 2). We documented a handful of nearly “universal” (i.e., commonly-cited across all countries) reasons for program termination including funding ending or being diverted and a lack of support from key stakeholders. In addition to these reasons, practitioners from Australia and Brazil reported that changes in political leadership often led to program termination (50.4%, n = 61/121 and 47.4%, n = 36/76 respectively). Among participants from Brazil, lack of support from agency leadership was also one of the most frequently cited reasons for programs ending (35.5%, n = 27/76). China’s top reasons differed significantly from the other countries’ and included that programs were difficult to maintain (48.0%, n = 49/102), programs were not demonstrating impact (42.2%, n = 43/102), and lack of support from the public (38.2%, n = 39/102). In the United States the prevailing issue was by far funding ending (84.2%, n = 85/101) or being diverted (36.6%, n = 37/101).

We observed less within-country consensus on why programs continued, as indicated by the fact that no single reason was endorsed by the majority of participants in any country. However, some of the same reasons did rise to the top across countries including sustained funding, the absence of alternative options, sustained support from agency leadership, and programs that were easy to maintain. Sustained support from policymakers seemed to be particularly influential for keeping programs running in Brazil, with 43.4% (n = 33/76) of participants citing this reason. Sustained support from the general public was a top reason for continuing programs in China (37.3%, n = 38/102) but not in Australia (15.7%, n = 19/121), Brazil (21.1%, n = 16/76), or the United States (15.8%, n = 16/101).

Discussion

Mis-implementation is an under-studied barrier to evidence-based practice. While de-adoption is being studied in the clinical space, where it goes by some four dozen names [20, 21], less attention has been paid to it in the public health arena. In the field of public health, sustainability, or the continuation or discontinuation of a program or intervention once implemented and after the initial funding has ended [48, 49], aligns with one half of mis-implementation. The dual nature of mis-implementation seems to be unexplored even in the domain of evidence-based medicine, where the focus is on disinvestment in low-value clinical practices [18,19,20,21].

We assert that mis-implementation is a two-sided practice that refers both to the de-adoption of effective programs, policies, or interventions (i.e., “mis-termination”) and to the continuation of ineffective programs, policies, or interventions that should end (i.e., “mis-continuation”). This exploratory study is likely the first to examine mis-implementation in both of its forms in an applied public health setting in multiple countries.

Our results suggest that mis-implementation occurs quite often and that mis-termination is more common, or at least more visible, than mis-continuation. Over 70% of practitioners surveyed in Australia, Brazil, and the United States reported that mis-termination happened sometimes or often. Among American practitioners, 40.4% (n = 41/101) thought mis-termination occurred often and 36.8% (n = 37/101) thought mis-continuation happened often. These findings generally support the only other published study to the authors' knowledge that has examined mis-implementation in public health [22]. That cross-sectional study of over 900 public health practitioners at state and local public health departments found similar rates of mis-termination and mis-continuation, with the reasons for each differing somewhat at the state versus local level.

Interestingly, mis-continuation seemed to happen less often across all countries, with 33–68% of participants (n = 70/121 in Australia, n = 52/76 in Brazil, n = 34/102 in China, and n = 57/101 in the United States) reporting that it happened often or sometimes. This could point to a particular struggle with sustainment in the delivery of public health at the local level [50, 51]. However, the difference could also reflect a greater difficulty identifying mis-continuation relative to mis-termination. Indeed, a greater proportion of practitioners across all countries did not know how often mis-continuation occurred compared to mis-termination. Mis-termination involves recalling instances when things came to an end, which is likely inherently more memorable than the absence of such an ending (i.e., mis-continuation). This potential recall bias should be considered as research in the area of mis-implementation progresses and measures are optimized.

Practitioners from China were both more optimistic and more uncertain about the occurrence of mis-implementation relative to their colleagues in other countries. A greater proportion of them than in any other country thought mis-termination and mis-continuation never happened. However, the plurality of Chinese participants were unable to gauge how often either type of mis-implementation occurred. The top-down culture in China’s public health system may make observing mis-implementation more difficult. The participants from China predominantly worked for government-run hospitals. Because of the centralized health planning model used in China, wherein the central government has overall responsibility for national health policy and administration, local practitioners may be less involved in determining whether and why programs continue or end. Officials working in such an environment might not know how often mis-implementation occurs or might assume that programs are continuing or ending for good reasons (i.e., that mis-implementation does not occur often).

It is also worth noting that practitioners from China self-reported significantly lower knowledge of EBCDP, and that lack of knowledge might impede their ability to identify mis-continuation and mis-termination. The lower ratings may also reflect cultural differences in willingness to claim expertise. In Australia and the United States, where the large majority of participants rated their knowledge as moderate to extreme, mis-implementation was perceived as occurring far more often. This aligns with literature reporting that a country's development status can predict structural differences in the provision of public health measures and clinical healthcare that influence program implementation outcomes and awareness of evidence-based practices [52,53,54]. Further research should investigate whether the positive correlation between knowledge and perceived rate of mis-implementation persists at the individual level and when controlling for other factors.

Consideration of the reasons participants gave for programs continuing and ending brings the phenomenon of mis-implementation into greater focus. "Grant funding ending" was the most commonly cited reason for programs ending in Australia and the United States and the second most common reason in Brazil. This reflects the growing concern around sustainment, or the continuation of a program once implemented and generally after initial funding from federal or state agencies has been exhausted [17]. In addition to funding, changes in political leadership and changes in priorities (which are often dictated by political authorities) were also common reasons programs end, consistent with the literature [19, 22]. Reviews of the phenomenon of sustainment similarly find that organizational capacity, in addition to context, processes, and other factors, influences whether a program is maintained [48, 55]. Scheirer [49] discusses three categories of factors that affect sustainability beyond securing new funding: aspects of project design and characteristics (e.g., whether the program can be modified to meet local needs), factors within the organizational setting (e.g., the presence of a program champion), and factors in the broader community environment (e.g., support from external community leaders). As found by Scheirer and confirmed by this study, staff tend to focus on the challenge of securing replacement funding as the primary obstacle to sustainment, potentially to the exclusion of these other factors.

Just as interesting as the most commonly cited reasons for program termination are the least commonly cited reasons. In both Australia and the United States, not being evidence-based was rarely the reason a program ended, which underscores the phenomenon of mis-continuation. Similarly, in Brazil and China, programs infrequently ended because they were picked up by other organizations, a viable approach to sustainment. Perhaps the most legitimate reason for a program to end is that it was evaluated and did not demonstrate impact. Fewer than a quarter of practitioners in Australia, Brazil, and the United States cited this as a top-three reason, suggesting that programs that end due to lack of funding, lack of support, or any of the other most common reasons are often terminated without a clear sense of whether they are effective.

Practitioners from all countries agreed that sustained support from various key stakeholders (e.g., policymakers, agency leadership) was among the top reasons programs continued. Several practitioners from Australia and the United States used the open-ended response option to note that practitioners' preferences for and attachment to programs led to those programs' continuation. Sustained funding, the absence of alternatives, and ease of maintenance also led to the continuation of programs. Again, being evidence-based or having been evaluated for effectiveness were among the least commonly cited reasons programs continued across all four countries.

While there was consistency in the reasons programs end, the cross-country differences point to important contextual differences in the culture and structure surrounding public health that are important to keep in mind, and to explore further, when seeking to enhance evidence-based public health around the world. In Brazil, for example, policymakers seem to be particularly influential in determining whether programs end or continue: a shift in political leadership was the top reason programs ended, and sustained support from policymakers was the most common reason programs continued. The support of agency leadership and program champions was also key. Practitioners from China reported that the support of the public was critical to keeping programs in place. In both Brazil and China, EBCDP seems to be at a more nascent stage than in Australia and the United States, as reflected by their lower levels of self-reported knowledge of EBCDP and their greater reliance on support from various stakeholder groups relative to the more autonomous systems in Australia and the United States. These differences in influences will be important to acknowledge when crafting strategies to improve evidence-based implementation in different countries.

Despite the cross-country differences, however, the prevailing theme from this study is that, across all countries, decisions about ending and continuing programs often seem to be made with incomplete consideration of whether the program in question is evidence-based or demonstrating impact. Instead, decisions seem to be made based on what can be funded, what has support from key stakeholders, and how easy it is to maintain the status quo relative to the challenge of starting something new. These findings have potential implications for public health policy and practice. Decisions regarding the continuation or termination of programs should be at least partly a function of their impact and evidence base, in addition to more political and logistical factors. These decisions should also be made in a transparent manner to ensure that staff have visibility into how program commitments are made or withdrawn. Such transparency may encourage greater adherence to decision-making protocols and greater accountability.

Limitations

The findings reported here are exploratory and should be considered in light of the study’s limitations. We relied on a small set of questions pertaining to perceptions of mis-implementation, program termination and continuation, and knowledge of EBCDP that have not yet been psychometrically tested or independently validated against a gold standard. Selection bias is quite possible, given the non-randomized nature of the study, the adaptations to sampling strategies to accommodate country-specific differences, and the widely ranging response rates. While the survey instrument was forward- and backward-translated from English to Mandarin and Portuguese to ensure fidelity, some concepts and responses may have been lost in translation given the substantial social, cultural, and structural differences between the four countries. Self-reported perceptions of the frequency of and reasons for mis-implementation are also susceptible to recall bias. Additionally, perceptions of mis-implementation may vary by a number of individual and organizational factors, including tenure in position, job responsibilities, programmatic area, and organizational structure, some of which this study examined, but none of which were included in a multivariable model predicting mis-implementation due to small cell sizes.

Conclusions

Mis-implementation by definition involves the misallocation of scarce public health resources. This is the first cross-national study with standardized methods to examine patterns in mis-implementation. It found that public health practitioners across four diverse countries perceive mis-implementation as occurring fairly regularly as they seek to prevent chronic disease at the local level. While the reasons programs end and continue inappropriately vary from country to country, they generally support the common theme that the culture of public health practice is too often focused on what is easy, familiar, and appealing to external stakeholders as opposed to what is impactful, evidence-based, or challenging. Future studies are needed to examine in closer detail the individual, organizational, and political-level predictors of mis-implementation as well as approaches to minimizing this misuse of limited resources.

Abbreviations

EBCDP: Evidence-based chronic disease prevention

References

  1. Vos T, Barber RM, Bell B, Bertozzi-Villa A, Biryukov S, Bolliger I, Charlson F, Davis A, Degenhardt L, Dicker D, Duan L, Erskine H, Feigin VL, Ferrari AJ, Fitzmaurice C, Fleming T, Graetz N, Guinovart C, Haagsma J, Hansen GM, Hanson SW, Heuton KR, Higashi H, Kassebaum N, Kyu H, Laurie E, Liang X, Lofgren K, Lozano R, MacIntyre MF, et al. Global, regional, and national incidence, prevalence, and years lived with disability for 301 acute and chronic diseases and injuries in 188 countries, 1990–2013: a systematic analysis for the Global Burden of Disease Study 2013. Lancet. 2015;386:743–800.

  2. Strong K, Mathers C, Leeder S, Beaglehole R. Preventing chronic diseases: how many lives can we save? Lancet. 2005;366:1578–82.

  3. Cochrane Collaboration [http://www.cochrane.org/].

  4. The Community Guide [http://thecommunityguide.org/index.html].

  5. Healthy People 2020 Structured Evidence Queries [https://www.healthypeople.gov/].

  6. HealthEvidence.org [http://www.healthevidence.org/].

  7. Brownson RC, Fielding JE, Maylahn CM. Evidence-based public health: a fundamental concept for public health practice. Annu Rev Public Health. 2009;30:175–201.

  8. Brownson R, Baker E, Leet T, Gillespie K, True W. Evidence-based public health. New York: Oxford University Press; 2010.

  9. Ellis P, Robinson P, Ciliska D, Armour T, Brouwers M, O'Brien MA, Sussman J, Raina P. A systematic review of studies evaluating diffusion and dissemination of selected cancer control interventions. Health Psychol. 2005;24:488–500.

  10. Hannon PA, Fernandez ME, Williams RS, Mullen PD, Escoffery C, Kreuter MW, Pfeiffer D, Kegler MC, Reese L, Mistry R, Bowen DJ. Cancer control planners' perceptions and use of evidence-based programs. J Public Health Manag Pract. 2010;16:E1–8.

  11. Sanchez MA, Vinson CA, La PM, Viswanath K, Kerner JF, Glasgow RE. Evolution of Cancer Control P.L.A.N.E.T.: moving research into practice. Cancer Causes Control. 2012;23:1205–12.

  12. Glasgow RE, Lichtenstein E, Marcus AC. Why don't we see more translation of health promotion research to practice? Rethinking the efficacy-to-effectiveness transition. Am J Public Health. 2003;93:1261–7.

  13. Salinsky E. Governmental public health: an overview of state and local public health agencies. Washington, D.C.; 2010. [Background Paper].

  14. Glasgow RE, Marcus AC, Bull SS, Wilson KM. Disseminating effective cancer screening interventions. Cancer. 2004;101(5 Suppl):1239–50.

  15. Green LW, Glasgow RE. Evaluating the relevance, generalization, and applicability of research: issues in external validation and translation methodology. Eval Health Prof. 2006;29:126–53.

  16. Dreisinger ML, Boland EM, Filler CD, Baker EA, Hessel AS, Brownson RC. Contextual factors influencing readiness for dissemination of obesity prevention programs and policies. Health Educ Res. 2012;27:292–306.

  17. Aarons GA, Hurlburt M, Horwitz SM. Advancing a conceptual model of evidence-based practice implementation in public service sectors. Admin Pol Ment Health. 2011;38:4–23.

  18. Prasad V, Ioannidis JP. Evidence-based de-implementation for contradicted, unproven, and aspiring healthcare practices. Implement Sci. 2014;9:1.

  19. Massatti RR, Sweeney HA, Panzano PC, Roth D. The de-adoption of innovative mental health practices (IMHP): why organizations choose not to sustain an IMHP. Adm Policy Ment Health. 2008;35:50–65.

  20. Niven DJ, Mrklas KJ, Holodinsky JK, Straus SE, Hemmelgarn BR, Jeffs LP, Stelfox HT. Towards understanding the de-adoption of low-value clinical practices: a scoping review. BMC Med. 2015;13:255.

  21. Gnjidic D, Elshaug AG. De-adoption and its 43 related terms: harmonizing low-value care terminology. BMC Med. 2015;13:273.

  22. Brownson RC, Allen P, Jacob RR, Harris JK, Duggan K, Hipp PR, Erwin PC. Understanding mis-implementation in public health practice. Am J Prev Med. 2015;48:543–51.

  23. Jacobs JA, Dodson EA, Baker EA, Deshpande AD, Brownson RC. Barriers to evidence-based decision making in public health: a national survey of chronic disease practitioners. Public Health Rep. 2010;125:736–42.

  24. Furtado K, Budd E, Ying X, Deruter A, Armstrong R, Pettman T, Reis R, Sung-Chan P, Wang Z, Saunders T, Becker L, Mui LST, Brownson R. Exploring political influences on evidence-based chronic disease prevention across four countries. In: American Public Health Association Annual Meeting. Denver, CO; 2016.

  25. DeRuyter A, Ying X, Budd E, Furtado K, Sung-Chan P, Wang Z, Reis R, Pettman T, Armstrong R, Mui LST, Shi J, Becker L, Saunders T, Brownson R. Implementing evidence-based practices to prevent chronic disease: knowledge, knowledge acquisition, and decision-making across four countries. In: 9th Annual Conference on the Science of Dissemination and Implementation. Washington, D.C.; 2016.

  26. Budd E. Cross-country factors influencing evidence-based chronic disease prevention. In: 3rd Biennial Australasian Implementation Conference (AIC). Melbourne, Australia; 2016.

  27. World Health Organization. Noncommunicable diseases country profiles 2011. Geneva: World Health Organization; 2011.

  28. Birnbaum MS, Jacobs ET, Ralston-King J, Ernst KC. Correlates of high vaccination exemption rates among kindergartens. Vaccine. 2013;31:750–6.

  29. Armstrong R, Pettman T, Burford B, Doyle J, Waters E. Tracking and understanding the utility of Cochrane reviews for public health decision-making. J Public Health (Oxf). 2012;34:309–13.

  30. Armstrong R, Waters E, Dobbins M, Anderson L, Moore L, Petticrew M, Clark R, Pettman TL, Burns C, Moodie M, Conning R, Swinburn B. Knowledge translation strategies to improve the use of evidence in public health decision making in local government: intervention design and implementation plan. Implement Sci. 2013;8:121.

  31. Armstrong R, Pettman TL, Waters E. Shifting sands - from descriptions to solutions. Public Health. 2014;128:525–32.

  32. Leeman J, Teal R, Jernigan J, Reed JH, Farris R, Ammerman A. What evidence and support do state-level public health practitioners need to address obesity prevention? Am J Health Promot. 2014;28:189–96.

  33. Pettman TL, Armstrong R, Pollard B, Evans R, Stirrat A, Scott I, Davies-Jackson G, Waters E. Using evidence in health promotion in local government: contextual realities and opportunities. Health Promot J Austr. 2013;24:72–5.

  34. Brownson RC, Reis RS, Allen P, Duggan K, Fields R, Stamatakis KA, Erwin PC. Understanding administrative evidence-based practices: findings from a survey of local health department leaders. Am J Prev Med. 2014;46:49–57.

  35. Lacerda RA, Egry EY, da Fonseca RMGS, Lopes NA, Nunes BK, et al. Evidence-based practices published in Brazil: identification and analysis studies about human health prevention. Rev Esc Enferm USP. 2012;46:1237–47.

  36. Brownson RC, Gurney JG, Land GH. Evidence-based decision making in public health. J Public Health Manag Pract. 1999;5:86–97.

  37. Glasziou P, Longbottom H. Evidence-based public health practice. Aust N Z J Public Health. 1999;23:436–40.

  38. O'Loughlin J, Renaud L, Richard L, Gomez LS, Paradis G. Correlates of the sustainability of community-based heart health promotion interventions. Prev Med. 1998;27:702–12.

  39. Allen P, Sequeira S, Jacob RR, Hino AAF, Stamatakis KA, Harris JK, Elliott L, Kerner JF, Jones E, Dobbins M, Baker EA, Brownson RC. Promoting state health department evidence-based cancer and chronic disease prevention: a multi-phase dissemination study with a cluster randomized trial component. Implement Sci. 2013;8:141.

  40. Jacobs JA, Clayton PF, Dove C, Funchess T, Jones E, Perveen G, Skidmore B, Sutton V, Worthington S, Baker EA, Deshpande AD, Brownson RC. A survey tool for measuring evidence-based decision making capacity in public health agencies. BMC Health Serv Res. 2012;12:57.

  41. Brownson RC, Ballew P, Brown KL, Elliott MB, Haire-Joshu D, Heath GW, Kreuter MW. The effect of disseminating evidence-based interventions that promote physical activity to health departments. Am J Public Health. 2007;97:1900–7.

  42. Budd EL, deRuyter AJ, Wang Z, Sung-Chan P, Ying X, Furtado KS, Pettman T, Armstrong R, Reis RS, Shi J, Mui T, Saunders T, Becker L, Brownson RC. A qualitative exploration of contextual factors that influence dissemination and implementation of evidence-based chronic disease prevention across four countries. BMC Health Serv Res. 2018;18:233.

  43. Flynn BS. Measuring community leaders' perceived ownership of health education programs: initial tests of reliability and validity. Health Educ Res. 1995;10:27–36.

  44. Luke DA, Calhoun A, Robichaux CB, Elliott MB, Moreland-Russell S. The Program Sustainability Assessment Tool: a new instrument for public health programs. Prev Chronic Dis. 2014;11:130184.

  45. Crone MR, Verlaan M, Willemsen MC, van Soelen P, Reijneveld SA, Sing RAH, Paulussen TGWM. Sustainability of the prevention of passive infant smoking within well-baby clinics. Health Educ Behav. 2006;33:178–96.

  46. Mancini JA, Marek LI. Sustaining community-based programs for families: conceptualization and measurement. Fam Relat. 2004;53:339–47.

  47. Hutchinson K. Literature review of program sustainability assessment tools. Burnaby, British Columbia; 2010.

  48. Scheirer MA, Dearing JW. An agenda for research on the sustainability of public health programs. Am J Public Health. 2011;101:2059–67.

  49. Scheirer MA. Is sustainability possible? A review and commentary on empirical studies of program sustainability. Am J Eval. 2005;26:320–47.

  50. Schell SF, Luke DA, Schooley MW, Elliott MB, Herbers SH, Mueller NB, Bunger AC. Public health program capacity for sustainability: a new framework. Implement Sci. 2013;8:15.

  51. Chambers DA, Glasgow RE, Stange KC. The dynamic sustainability framework: addressing the paradox of sustainment amid ongoing change. Implement Sci. 2013;8:117.

  52. Anderson BO, Yip C-H, Smith RA, Shyyan R, Sener SF, Eniu A, Carlson RW, Azavedo E, Harford J. Guideline implementation for breast healthcare in low-income and middle-income countries. Cancer. 2008;113:2221–43.

  53. Samb B, Desai N, Nishtar S, Mendis S, Bekedam H, Wright A, Hsu J, Martiniuk A, Celletti F, Patel K, Adshead F, McKee M, Evans T, Alwan A, Etienne C. Prevention and management of chronic disease: a litmus test for health-systems strengthening in low-income and middle-income countries. Lancet. 2010;376:1785–97.

  54. Saraceno B, van Ommeren M, Batniji R, Cohen A, Gureje O, Mahoney J, Sridhar D, Underhill C. Barriers to improvement of mental health services in low-income and middle-income countries. Lancet. 2007;370:1164–74.

  55. Wiltsey Stirman S, Kimberly J, Cook N, Calloway A, Castro F, Charns M. The sustainability of new programs and innovations: a review of the empirical literature and recommendations for future research. Implement Sci. 2012;7:17.


Acknowledgements

The authors acknowledge Xiangji Ying, Anna J. deRuyter, Tahnee Saunders, Leonardo Augusto Becker, Jianwei Shi, and Long Sum Tabitha Mui for their contributions to this study.

Funding

This work was supported by the National Cancer Institute at the National Institutes of Health (1R21CA179932 and 5R01CA160327), the National Institute of Diabetes and Digestive and Kidney Diseases (NIDDK grant number 1P30DK092950), the Washington University Institute of Clinical and Translational Sciences (UL1 TR000448), and the National Center for Advancing Translational Sciences (KL2 TR000450). The funders played no role in the design of the study; in the collection, analysis, or interpretation of the data; or in writing the manuscript.

Availability of data and materials

The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.

Author information


Contributions

All authors have read and approved the manuscript. In addition, the authors contributed to this work in the following ways. KF: Helped collect and analyze data and led the drafting of this manuscript as well as the editing process. EB: Coordinated the study, helped collect and analyze the data, and provided comments on and edits to the manuscript. TP: Helped design the study and provided comments on and edits to the manuscript. RA: Helped design the study and provided comments on and edits to the manuscript. RR: Helped design the study and provided comments on and edits to the manuscript. PS: Helped design the study and provided comments on and edits to the manuscript. ZW: Helped design the study and provided comments on and edits to the manuscript. RB: Led the study design and provided comments on and edits to the manuscript.

Corresponding author

Correspondence to Karishma S. Furtado.

Ethics declarations

Ethics approval and consent to participate

The ethics review boards of The University of Melbourne (Ref # 1544612), Pontifica Universidade Catolica do Parana (Ref # 1.093.095), The Hong Kong Polytechnic University (Ref # not provided in approval letter), and Washington University in St. Louis (Ref # 201303108) approved this study. Written consent to participate was obtained from all participants.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Additional file

Additional file 1:

Table S1. Survey Instrument. (DOCX 21 kb)

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.


About this article


Cite this article

Furtado, K.S., Budd, E.L., Armstrong, R. et al. A cross-country study of mis-implementation in public health practice. BMC Public Health 19, 270 (2019). https://doi.org/10.1186/s12889-019-6591-x
