
Improving the quality of healthcare: a cross-sectional study of the features of successful clinical networks

December 2018, Volume 28, Issue 4

Mary M Haines, Bernadette Brown, Catherine A D’Este, Elizabeth M Yano, Jonathan C Craig, Sandy Middleton, Peter A Castaldi, Carol A Pollock, Kate Needham, William H Watt, Elizabeth J Elliott, Anthony Scott, Amanda Dominello, Emily Klineberg, Jo-An Atkinson, Christine Paul, Sally Redman, on behalf of the Clinical Networks Research Group

Published 6 December 2018. https://doi.org/10.17061/phrp28011803
Citation: Haines MM, Brown B, D’Este CA, Yano EM, Craig JC, Middleton S, Castaldi PA, Pollock CA, Needham K, Watt WH, Elliott EJ, Scott A, Dominello A, Klineberg E, Atkinson J, Paul C, Redman S, on behalf of the Clinical Networks Research Group. Improving the quality of healthcare: a cross-sectional study of the features of successful clinical networks. Public Health Res Pract. 2018;28(4):e28011803.


About the author/s

Mary M Haines | Sax Institute, Sydney, NSW, Australia; Sydney Medical School, University of Sydney, NSW, Australia

Bernadette Brown | Sax Institute, Sydney, NSW, Australia

Catherine A D’Este | College of Health and Medicine, Australian National University, Canberra, ACT; School of Medicine and Public Health, University of Newcastle, NSW, Australia

Elizabeth M Yano | US Department of Veterans Affairs Health Services Research & Development Centre for the Study of Healthcare Innovation, Implementation and Policy, VA Greater Los Angeles Healthcare System, CA, US; UCLA Fielding School of Public Health, Los Angeles, CA, US

Jonathan C Craig | Sydney Medical School, University of Sydney, NSW, Australia

Sandy Middleton | Nursing Research Institute, St Vincent’s Health, and Australian Catholic University, Sydney, NSW, Australia

Peter A Castaldi | Sydney Medical School, University of Sydney, NSW, Australia; Agency for Clinical Innovation, Sydney, NSW, Australia; Deceased 17 August 2018

Carol A Pollock | Sydney Medical School, University of Sydney, NSW, Australia

Kate Needham | Agency for Clinical Innovation, Sydney, NSW, Australia; Deceased 22 September 2016

William H Watt | Agency for Clinical Innovation, Sydney, NSW, Australia; School of Medicine, University of Wollongong, NSW, Australia

Elizabeth J Elliott | Sydney Medical School, University of Sydney, NSW, Australia

Anthony Scott | Melbourne Institute of Applied Economics and Social Research, University of Melbourne, VIC, Australia

Amanda Dominello | Sax Institute, Sydney, NSW, Australia; Sydney Medical School, University of Sydney, NSW, Australia

Emily Klineberg | Sax Institute, Sydney, NSW, Australia

Jo-An Atkinson | Sax Institute, Sydney, NSW, Australia

Christine Paul | School of Medicine and Public Health, University of Newcastle, NSW, Australia

Sally Redman | Sax Institute, Sydney, NSW, Australia

on behalf of the Clinical Networks Research Group

Corresponding author

Mary M Haines | [email protected]

Competing interests

PC, WW, KN and Nigel Lyons, who are part of the Clinical Networks Research Group, were previously employed by the NSW Agency for Clinical Innovation. SM is, and CAP was, a member of the Agency for Clinical Innovation Board. SM and MH are members of the Agency for Clinical Innovation Board Research Sub-Committee. The Agency has provided funds to support this research as part of the National Health and Medical Research Council (NHMRC) partnership project grant scheme. These funds were awarded on the basis of an NHMRC deed of agreement detailing the governance and conduct of research in Australia. The inclusion of current and past executives and a board member on the investigator team raises perceived and actual conflicts of interest, which were considered by the University of Sydney Human Research Ethics Committee at the time:
• The current executive member (Nigel Lyons) and board member (CAP) have a competing interest in the study that is focused on the Agency’s performance
• Past executives (PC, KN and WW) may be perceived as having a conflict of interest in the outcomes of this study.
The following measures have been taken to be transparent about these competing interests and to manage them:
• Participant information sheets contained explicit details about the funding arrangements and noted the Agency’s contribution
• All publications arising from this study will detail the funding arrangements and note the potential competing interest of the Agency executives and board members
• This study is a comparative study between networks, so the overall performance of the Agency is not in question
• Agency funds have been awarded on the basis of an NHMRC deed of agreement detailing governance and conduct of research in Australia; this means the Agency cannot restrict publication of the research findings
• Current and past executives, the board members of the Agency, and the Agency Board Research Sub-Committee members have not been involved in the data analysis, have not had access to raw data and were not involved in the final selection of members of the expert panel.
These measures were approved by the University of Sydney Human Research Ethics Committee in August 2011 (ID: 13988). The other authors declare they have no competing interests.

Author contributions

MH, SR, PC, CD, JC, EE, AS, EY, CAP, KN, SM, EK, BB, WW and CP contributed to the conception or design of the study. All authors contributed to the acquisition and analysis of data, or interpretation of findings. JA, MH, BB and AD drafted the manuscript. All authors revised drafts for important intellectual content. All authors, external and internal, had access to the data (including statistical reports and tables), except for current and past executives as per ethics requirement. All authors take responsibility for the integrity and accuracy of the data analysis and reporting. All authors have approved the final version of the manuscript submitted for publication. MH is guarantor for the paper.

Abstract

Objectives: Networks of clinical experts are being established internationally to help embed evidence based care in health systems. There is emerging evidence that these clinical networks can drive quality improvement programs, but the features that distinguish successful networks are largely unknown. We examined the factors that make clinical networks effective at improving quality of care and facilitating system-wide changes.

Methods: We conducted a retrospective cross-sectional study of 19 state-wide clinical networks that reflected a range of medical and surgical specialty care and were in operation from 2006 to 2008 in New South Wales, Australia. We conducted qualitative interviews with network leaders to characterise potential impacts, and conducted internet surveys of network members to evaluate external support and the organisational and program characteristics of their respective networks. The main outcome measures were median ratings of individual network impacts on quality of care and system-wide changes, determined through independent assessment of documented evidence by an expert panel.

Results: We interviewed 19 network managers and 32 network co-chairs; 592 network members completed internet surveys. Three networks were rated as having had high impact on quality of care, and seven as having had high impact on system-wide change. Better-perceived strategic and operational network management was significantly associated with higher ratings of impact on quality of care (coefficient estimate 0.86; 95% confidence interval [CI] 0.02, 1.69). Better-perceived leadership of the network manager (coefficient estimate 0.47; 95% CI 0.10, 0.85) and strategic and operational network management (coefficient estimate 0.23; 95% CI 0.06, 0.41) were associated with higher ratings of impact on system-wide change.

Conclusions: This is the largest study of clinical networks undertaken to date. The results suggest that clinical networks that span the health system can improve quality of care and facilitate system-wide change. Network management and leadership, encompassing both strategic and operational elements at the organisational level, appear to be the primary influences on network success. These findings can guide future organisational and system-wide change programs and the development or strengthening of clinical networks to help implement evidence based care to improve service delivery and outcomes.

Full text

Key points

  • Networks of clinical experts are being established internationally to help embed evidence based care in health systems, but there is little evidence about the most successful network design
  • Few studies have investigated clinical networks that span multiple clinical disciplines across a health system
  • This study provides quantitative evidence that clinical networks can improve quality of care and facilitate system-wide change
  • Combining ‘top down’ (strategic planning, strong leadership) and ‘bottom up’ (supportive environment, multidisciplinary representation) organisational approaches is most effective

Introduction

The next frontier for evidence based healthcare is to develop the science of its implementation into routine care across health systems.1 Internationally, networks of clinical experts are considered important vehicles for this implementation, as they can provide ‘bottom up’ views on tackling complex problems and champion change at the clinical interface.2,3 However, evidence for their success is largely anecdotal and experiential2,3, or focused on individual clinical areas.4-6 The science to support clinical network design has not kept pace with networks’ rapid operational development.7

We define ‘clinical networks’ as networks of clinicians and consumers that aim to improve clinical care and service delivery using a collegial approach to agree on and implement a range of strategies.2,3 A small number of qualitative and comparative case studies have investigated features of clinical networks8-11, and have suggested that strong leadership, an inclusive and collaborative culture with widespread clinician and stakeholder engagement, and adequate resources tend to be associated with success. An appropriate organisational structure has been shown to be necessary for changing processes and implementing quality improvement activities; a senior strategic leadership group, with implementation at the local level, and a focused, strategic approach to the selection of evidence based programs, has been the most successful structure.9,10,12 External factors such as supportive policy environments, health reorganisations and financial targets also influenced outcomes.10,12 To our knowledge, there has been no quantitative examination of the factors influencing clinical network success.

Nineteen diverse clinical networks established by the Agency for Clinical Innovation (the Agency) in New South Wales (NSW), Australia, provided a unique opportunity to quantitatively assess which features of clinical networks influence their ability to drive improvements in quality of care and facilitate system-wide change.3,13,14 These state-funded clinical networks span multiple disciplines across a large geographical region and have a system-wide focus, with clinicians identifying and advocating for models of service delivery and quality improvement activities in specialty health service areas. The clinical networks work in collaboration with the NSW Ministry of Health, Local Health Districts and other associated organisations15, and operate in a similar manner to those in the UK16, parts of Europe8, Canada17 and the US.4

We present findings on whether clinical networks can be effective in producing system-wide improvement, and which factors increase their success. Given the complexity and depth of investigation of this study, a number of papers have been, or are in the process of being, published: description of the study protocol18, methods and psychometric properties19,20; a qualitative prestudy to inform the conceptual model10; and a qualitative study to assist with interpreting quantitative results.21 In this paper, to test the conceptual model18, we report the main analyses for the hypotheses that successful clinical networks have:

1. A high level of external support from local health services and hospital management

2. Effective organisation, specifically strong clinical leadership and strategic and operational management.

Methods

Study design

Retrospective cross-sectional study of 19 clinical networks carried out using interviews, an internet survey and document review (Figure 1).

Figure 1.     Study overview with data collection and analysis components reported in this paper highlighted

Sample

The sample comprised the 19 clinical networks in operation during a 3-year period (2006–2008) in NSW, Australia, covering: aged care, bone marrow transplantation, brain injury, cardiology, endocrinology, gastroenterology, gynaecological oncology, home enteral nutrition, neurosurgery, nuclear medicine, ophthalmology, radiology, renal medicine, respiratory medicine, severe burn injury, spinal cord injury, stroke, transition care and urology. The networks had a consistent organisational structure supported by the Agency executive, clinical co-chairs, a network manager at the operational level, and multidisciplinary network members.15 We assessed impacts on quality of care and system-wide change resulting from network activities during the study period, measured up to the end of 2011 to allow sufficient time for changes to have occurred.

Managers and co-chairs of 19 networks were invited to participate in interviews to gather evidence for the primary outcomes. Contactable network members from 2006 to 2008 (n = 3234) – comprising medical, nursing and allied health professionals; consumers; nonhealth executives; and researchers and academics – were invited to participate in an internet survey.

Measures

A summary of outcome and explanatory variable definitions, their indicators and data collection methods is provided in Supplementary Files 1 and 2 (available from: hdl.handle.net/2123/17773).

The primary outcomes measured by this study were median ratings of impact on: 1) quality of care and 2) system-wide change. The secondary outcomes were development and implementation of quality improvement activities, clinician engagement, and perceived value of the network.

Explanatory factors measured were perceived external support; perceived leadership of the network manager, network co-chairs and the Agency executive; and strategic and operational management of the network.

Lastly, we collected data on descriptive and confounding variables: months of operation since the network establishment; network manager’s average full-time equivalent (FTE) working hours during the study period; average annual operating costs; and total in-kind contributions (i.e. sum of the cost of all people contributing to the network).

Data collection

Network managers and co-chairs during the study period were interviewed and asked to identify what, why and how impacts occurred as a result of network activities between 2006 and 2008. An impact had to: 1) meet the definition of quality of care and system-wide change; 2) be due to activities of the network; and 3) be corroborated by independent evidence. Supporting documentation was required to be submitted as evidence. A validation substudy was conducted to verify whether impacts identified in the interviews were attributable to network actions, and results demonstrated that the networks provided accurate information.22

Expert panel rating of impacts (EXPAND method)

To obtain objective and standardised measures of the two primary outcomes, an expert panel method was adapted from the RAND/UCLA (University of California, Los Angeles) appropriateness method; a detailed description is reported elsewhere.18,19 The EXPAND Method Panel consisted of five independent members with experience in quality improvement programs, implementing system-wide change, clinical care and the expert panel method. Panel members initially assessed the evidence of network impact and independently rated each network (premeeting ratings) on its impact on quality of care and system-wide change. A moderated face-to-face meeting was then conducted, during which aggregated ratings were presented and discussed. At the conclusion of the meeting, each panel member independently re-rated each network (postmeeting ratings); the median of these postmeeting ratings was the final measure of network impact.
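To make the aggregation step concrete, the sketch below shows how postmeeting panel ratings could be reduced to a single impact measure per network (the median across panellists) and then banded. The 1–9 scale, the limited/moderate/high cut-points and the ratings are illustrative assumptions only; the actual EXPAND scale and data are reported in the cited method paper.19

```python
# Illustrative aggregation of postmeeting panel ratings into a per-network
# impact measure (median across the five panellists). The 1-9 scale, the
# cut-points and the ratings themselves are assumptions for illustration only.
from statistics import median

postmeeting_ratings = {          # network: ratings from the five panel members
    "network_A": [7, 8, 6, 7, 7],
    "network_B": [3, 4, 2, 3, 5],
    "network_C": [5, 6, 5, 4, 6],
}

def band(score: float) -> str:
    """Hypothetical banding of the median rating into impact categories."""
    if score >= 7:
        return "high"
    if score >= 4:
        return "moderate"
    return "limited"

for network, ratings in postmeeting_ratings.items():
    m = median(ratings)
    print(network, m, band(m))   # e.g. network_A 7 high
```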

Internet survey

A survey was developed by building on existing clinical network measures, the wider organisational literature, and the findings of a qualitative prestudy.10,18 The survey measured five explanatory factors: perceived external support; perceived leadership of the network manager, network co-chairs and the Agency executive; and strategic and operational management of the network. The survey also assessed secondary outcomes: perceived engagement of clinicians, and whether the network was perceived as valuable. The survey items had a five-point Likert response scale (‘strongly agree’ to ‘strongly disagree’, with an additional ‘don’t know’ option). A network-level measure was calculated as the mean of the individual scores for each domain. Cronbach’s alpha coefficients ranged from 0.75 to 0.92 across the seven domains20, indicating acceptable to excellent internal consistency.23 Further details on the survey development, its psychometric properties and descriptive results have been published elsewhere.20
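As a concrete illustration of this scoring approach, the sketch below computes a domain’s Cronbach’s alpha and a network-level score as the mean of members’ domain scores. The item responses and domain structure shown are hypothetical; the actual instrument and its measured properties are reported in the validation paper.20

```python
# Minimal sketch of the survey scoring described above: a network-level score
# is the mean of members' domain scores, and the internal consistency of a
# domain's items is summarised with Cronbach's alpha. Data are illustrative.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of Likert scores (1-5)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Hypothetical responses from one network's members to a 4-item domain
domain_items = np.array([
    [4, 5, 4, 4],
    [3, 4, 3, 4],
    [5, 5, 4, 5],
    [4, 4, 4, 3],
])

print(round(cronbach_alpha(domain_items), 2))      # internal consistency of the domain
print(round(domain_items.mean(axis=1).mean(), 2))  # network-level score = mean of member scores
```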

Document review

Meeting minutes, records of quality improvement activities undertaken and financial records were audited using a standardised coding schedule and free-text annotations15,18 to measure: one explanatory factor (strategic and operational management of a network); two secondary outcomes (developed and implemented quality improvement activities, engagement of clinicians); and the potential confounding factors.15,18

Statistical methods

SAS 9.1 (Cary, NC: SAS Institute Inc) and Stata 11 software (College Station, TX: StataCorp) were used for analysis. The unit of analysis was the network. Relationships between outcomes, explanatory variables and confounders were examined using Spearman correlation coefficients for continuous explanatory variables and confounders, and t-tests for binary explanatory and confounder variables. Explanatory variables and confounding variables that had a correlation of 0.4 or more with the outcome were included in backward stepwise regression analyses and excluded if they had a p value of 0.1 or more. The regression analysis was first undertaken to investigate which explanatory variables were associated with the outcome. Potential confounders were then added to this model using the same backward stepwise selection process described above, but forcing the explanatory variables chosen in the previous model.
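For readers who want to reproduce the variable-selection logic, the following is a minimal sketch under stated assumptions: a network-level data frame with one row per network, hypothetical column names, ordinary least squares as the regression model, Spearman correlation for screening, and the thresholds described above (correlation of 0.4 or more to enter, p of 0.1 or more to remove, with explanatory variables forced into the confounder model). It is not the authors’ SAS/Stata code.

```python
# Sketch of the screening and backward stepwise selection described above.
# Column names and the OLS model are assumptions for illustration only.
import pandas as pd
import statsmodels.api as sm

def backward_stepwise(df, outcome, candidates, forced=(), r_enter=0.4, p_remove=0.1):
    # Screen candidates on Spearman correlation with the outcome
    kept = [c for c in candidates
            if abs(df[outcome].corr(df[c], method="spearman")) >= r_enter]
    # Drop the weakest remaining candidate while its p value is >= p_remove
    while kept:
        X = sm.add_constant(df[list(forced) + kept])
        fit = sm.OLS(df[outcome], X).fit()
        worst = fit.pvalues[kept].idxmax()
        if fit.pvalues[worst] < p_remove:
            break
        kept.remove(worst)
    return kept

# Hypothetical usage with one row per network:
# explanatory = backward_stepwise(df, "impact", ["leadership", "management", "support"])
# final_model = backward_stepwise(df, "impact", ["fte", "costs"], forced=explanatory)
```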

Data from a relevant Australian study24 examining the association between clinical performance and organisational determinants in 19 healthcare organisations was used to estimate the likely effect size. Spearman correlation coefficients for associations of relevance to our study ranged from 0.45 to 0.71. With 19 networks and a 5% significance level, we had 80% power to detect a correlation coefficient as being statistically significant if it was 0.6 or more. Thus, we had sufficient power to detect moderate to large associations that are achievable and clinically meaningful.
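As a rough check of the stated power, the Fisher z approximation for a two-sided test of a correlation against zero gives approximately 79% power for a correlation of 0.6 with 19 networks at the 5% significance level. The snippet below is an illustrative recalculation, not the authors’ original computation.

```python
# Back-of-the-envelope power check using the Fisher z approximation for a
# two-sided test of a correlation against zero (alpha = 0.05, n = 19).
from math import atanh, sqrt
from scipy.stats import norm

n, alpha, r_alt = 19, 0.05, 0.6
z_crit = norm.ppf(1 - alpha / 2)                       # about 1.96
power = norm.cdf(atanh(r_alt) * sqrt(n - 3) - z_crit)  # Fisher z approximation
print(round(power, 2))                                 # ~0.79, i.e. roughly 80% power
```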

Results

Interviews were conducted with 19 clinical network managers and 32 network co-chairs representing each clinical network. The internet survey was completed by 592 network members (18% response rate).

Descriptive results: characteristics of the clinical networks

Characteristics of the clinical networks are provided in Table 1 (published in more detail elsewhere).15,18-20

 

Table 1.     Clinical network characteristics

Characteristic | Mean | Standard deviation | Median | Range
Months of operation since establishment | 65.9 | 21.3 | 75 | 14–85
Number of meetings during study period | 13.4 | 4.4 | 13 | 5–25
Number of quality improvement activities developed and implemented | 16.4 | 12.3 | 12 | 1–52
Number of members | 237.3 | 160.5 | 205 | 43–708
Number of medical officers in network | 71.3 | 52.8 | 51 | 15–197
Number of nurses in network | 88.4 | 87.5 | 64 | 1–367
Number of allied health workers in network | 55.6 | 59.5 | 35.5 | 3–202
Number of network managers in network | 1.8 | 0.76 | 2 | 1–3
Number of members in executive committees in network | 30 | 14 | 30 | 3–62
Number of disciplines represented on network executive committees | 3 | 0.82 | 3 | 2–5
Average annual operating costs 2006–2008 (AUD) | $199 285 | $200 831 | $141 299 | $41 825–$857 947
Total in-kind contributions 2006–2008 (AUD) | $21 765 | $12 066 | $18 610 | $6 776–$55 723

 

Descriptive results: impacts of the networks on improving quality of care and system-wide change

Network impact on improving quality of care and facilitating system-wide change is summarised in Figure 2.

Nine networks (47%) had limited impact on improving quality of care, seven (37%) had moderate impact and three (16%) had high impact. For facilitating system-wide change, five networks (26%) had limited impact, seven (37%) had moderate impact and seven (37%) had high impact. An example of an improvement in quality of care was the development of care protocols to promote standardised approaches and eliminate variation. An example of system-wide change was the implementation of a new model of care for rural patients. Additional examples of impacts will be published elsewhere.19

Figure 2.     Networks’ impact on quality of care and system-wide change as assessed by expert panel (N = 19)

Predictors of the effectiveness of clinical networks to impact quality of care and system-wide change

Impact on quality of care

There were large to medium positive correlations between impact on quality of care and the explanatory factors: perceived leadership of network manager (r = 0.55; p = 0.016); perceived leadership of network co-chairs (r = 0.64; p = 0.003); perceived strategic and operational management of a network (r = 0.50; p = 0.029); and strategic and operational management of a network signified by number of meetings (r = 0.52; p = 0.022) (for further details, see Supplementary File 3, available from: hdl.handle.net/2123/17773). In regression analysis, perceived strategic and operational management of a network emerged as the only significant variable associated with impact on quality of care (coefficient estimate 0.86; 95% confidence interval [CI] 0.02, 1.69; p = 0.045) after controlling for network manager’s average FTE working hours (Table 2).

 

Table 2.     Predictors of the effectiveness of clinical networks to impact quality of care and system-wide change (summary of regression analysesa)

Outcome | Measure | Median impact: unadjusted regression coefficient (95% CI), p value | Median impact: adjusted regression coefficienta,b (95% CI), p value
Quality of care | Perceived strategic and operational management of a network | 1.35 (0.49, 2.21), p = 0.004 | 0.86 (0.02, 1.69), p = 0.045
Quality of care | Strategic and operational management of a network signified by number of meetings | 0.21 (0.02, 0.41), p = 0.036 | 0.09 (–0.10, 0.29), p = 0.311
Quality of care | Proportion of FTE of network manager | na | 4.33 (0.88, 7.78), p = 0.017
System-wide change | Strategic and operational management of a network signified by number of meetings | 0.34 (0.15, 0.53), p = 0.002 | 0.23 (0.06, 0.41), p = 0.013
System-wide change | Perceived leadership of network manager | 0.58 (0.14, 1.03), p = 0.014 | 0.47 (0.09, 0.85), p = 0.018
System-wide change | Proportion of FTE of network manager | na | 4.11 (1.18, 7.05), p = 0.009
CI = confidence interval; FTE = full-time equivalent; na = not applicable
a Analyses of quality of care were adjusted for other variables in the model as indicated in the table, and also adjusted for confounders. Months of operation since establishment of the network and total in-kind contributions had low correlations with impact on quality of care (r < 0.4). Although average annual operating costs demonstrated a medium positive correlation with impact on quality of care (r = 0.54; p = 0.18), it was not retained in the final stepwise regression model.
b Analyses of system-wide change were adjusted for other variables in the model as indicated in the table, and also adjusted for confounders. Months of operation since establishment of the network and total in-kind contributions had low correlations with impact on system-wide change (r < 0.4). Although average annual operating costs demonstrated a medium positive correlation with impact on system-wide change (r = 0.56; p = 0.13), it was not retained in the final stepwise regression model.

 

Impact on system-wide change

There were large to medium positive correlations between impact on system-wide change and the explanatory factors: perceived external support (r = 0.53; p = 0.019); perceived leadership of network manager (r = 0.50; p = 0.03); perceived leadership of network co-chairs (r = 0.59; p = 0.008); and strategic and operational management of a network signified by number of meetings (r = 0.71; p < 0.001) (for further details, see Supplementary File 3, available from: hdl.handle.net/2123/17773). In regression analyses, strategic and operational management of a network signified by number of meetings (coefficient estimate 0.23; 95% CI 0.06, 0.41; p = 0.013) and perceived leadership of the network manager (coefficient estimate 0.47; 95% CI 0.09, 0.85; p = 0.018) emerged as the significant factors associated with impact on system-wide change after controlling for network manager’s average FTE working hours (Table 2).

Secondary outcomes

Multivariable analyses showed that perceived strategic and operational management of a network was the only explanatory factor significantly associated with three of the secondary outcomes: developed and implemented quality improvement activities (coefficient estimate 5.87; 95% CI 1.03, 10.72; p = 0.021); engagement of clinicians – number of members (coefficient estimate 113.72; 95% CI 55.82, 171.61; p = 0.001); and network was perceived as valuable (coefficient estimate 0.47; 95% CI 0.06, 0.88; p = 0.026) after controlling for confounding factors (complete results for secondary outcomes available upon request).

Discussion

This study demonstrates that clinical networks can improve quality of care and facilitate system-wide change; however, we found substantial variation among NSW clinical networks. Only three of 19 networks demonstrated high impact on quality of care, and seven had high impact on system-wide change. Management and leadership of a clinical network, encompassing both strategic and operational elements, were the primary factors influencing the impact of clinical networks, as well as their ability to develop and implement quality improvement activities, engage clinicians, and be perceived as valuable. Corroborating evidence from a qualitative substudy examining network performance in detail indicated that charismatic and visionary leadership, as well as formal infrastructure to support network activities, were perceived as the most important factors for successful clinical networks.21

This research provided the scope to study clinical networks that covered divergent clinical areas across a large health system. It is the largest study of networks to date, as well as the first to quantitatively examine factors contributing to network success. Methodological innovations, including the systematic collection of data relating to objective and subjective measures, allowed sufficient standardisation of data across clinical areas for quantitative analyses. The EXPAND method enabled consistent assessment of network impacts, despite variation in clinical focus and the nature of desired impact, and could be adapted for use in other studies examining heterogeneous impacts arising in real-world research. The internet survey instrument developed is a valid tool applicable for use with clinical networks in other jurisdictions.20 In addition, we conducted a qualitative substudy to further examine features of high- and low-impact networks to help interpret the quantitative results.21

Leadership, and strategic and operational management, were identified as key features that new and existing clinical networks should use to strengthen their operations and increase impact. Leadership, as measured in this study, encompassed aspects such as demonstrating vision and drive, and the ability to engage clinicians and build relationships with external stakeholders. Strategic and operational management measured in this study was defined by the organisational ability of the network manager; communication to assist with implementation; and a supportive, open environment with multidisciplinary representation. ‘Top down’ approaches to network management can sometimes stifle clinician engagement and innovation8; in contrast, a ‘bottom up’ approach can lack strategic planning, logistical efficiency and problem-solving capabilities that are possible with organisational-level planning and scaled implementation. However, a combined top down and bottom up approach to the design of clinical networks can provide strategic and operational support at the organisational level, while maintaining engagement at the clinical interface.

To optimise the effectiveness of clinical networks, it is recommended that strategic elements of management be established, such as systematic approaches to planning, and forming linkages to external parties to help implement quality improvement activities. In addition, operational elements such as the structure and organisation of network meetings, and communication and engagement strategies should be formalised. Future research should aim to prospectively measure change in the impact of clinical networks after implementing the recommendations from this study.

This study has some limitations. It was powered for analysis at the network level, so multivariate subgroup analyses could not be conducted. Our impact measures were restricted to evidence available at the time of the study, and there may have been some measurement bias as a result. The survey response rate was below the average of 33% reported for online surveys25 and, although sensitivity analyses based on inverse probability weighting found correlation and regression results to be similar to the main (unweighted) analyses, we cannot conclude that there was no response bias. It is possible that individuals with strong opinions were more likely to respond.20 The external context was not significantly related to network impact; this may be due to a lack of variation in external support between networks, given their operation within one jurisdiction (NSW). Finally, after piloting, it became apparent that the available operational data were not of sufficient quality to measure all indicators identified in the initial protocol (for details, see Supplementary Files 1 and 2, available from: hdl.handle.net/2123/17773).
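The inverse probability weighting mentioned above can be sketched as follows, under assumed variable names: model each invited member’s probability of responding from characteristics known for both respondents and non-respondents, then weight each respondent by the inverse of that fitted probability before re-estimating survey summaries. This illustrates the general technique only and is not the authors’ analysis code.

```python
# Illustrative inverse probability weighting for survey non-response.
# Column names are hypothetical; not the authors' code.
import pandas as pd
import statsmodels.formula.api as smf

def response_weights(invited: pd.DataFrame) -> pd.Series:
    """invited: one row per invited member, with a 0/1 `responded` flag and
    characteristics (e.g. profession, network) known for everyone."""
    model = smf.logit("responded ~ C(profession) + C(network)", data=invited).fit()
    return 1.0 / model.predict(invited)   # weight = 1 / Pr(respond)

# Hypothetical usage:
# invited["w"] = response_weights(invited)
# resp = invited[invited["responded"] == 1]
# weighted_mean = (resp["score"] * resp["w"]).sum() / resp["w"].sum()
```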

Conclusions

This study demonstrates that clinical networks can be vehicles to implement organisational change in healthcare, helping embed evidence based care into health systems and introduce quality improvement activities at a system-wide level. Clinical networks with strong management and leadership, combining top down and bottom up organisational approaches, are most successful. Clinicians and healthcare administrators across disciplines of medicine and healthcare internationally can use these findings to better organise formal and informal networks and increase their effectiveness in implementing evidence based healthcare for better patient outcomes.

Acknowledgements

The authors acknowledge the valuable contribution of the clinical network managers and co-chairs, and the Agency executive, for participating in this research. In addition, the authors thank Robert Sanson-Fisher, Daniel Barker and Mario D’Souza from the University of Newcastle for their valuable input into the study design, data management and data analysis stages, respectively. We acknowledge the support of Cyra Patel and Frances Gilham in assisting with preparing this manuscript.

As well as the named authors, the other members of the Clinical Networks Research Group are: Deanna Kalucy (Sax Institute), Rob Sanson-Fisher (University of Newcastle), Elizabeth McInnes (St Vincent’s Health and Australian Catholic University) and Cyra Patel (The Children’s Hospital at Westmead).

Dedication

As we were working on publishing this study, our colleagues Peter Castaldi and Kate Needham passed away. We would like to acknowledge the tremendous dedication and passion Peter and Kate had for clinical networks, clinical wellbeing and improving healthcare for all people.

Copyright:


© 2018 Haines et al. This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International Licence, which allows others to redistribute, adapt and share this work non-commercially provided they attribute the work and any adapted version of it is distributed under the same Creative Commons licence terms.

References

  • 1. Zerhouni EA. Translational and clinical science – time for a new vision. N Engl J Med. 2005;353(15):1621–3. CrossRef | PubMed
  • 2. Goodwin N, 6 P, Peck E, Freeman T, Posaner R. Managing across diverse networks of care: lessons from other sectors. London: NHS Service Delivery and Organisation Programme; 2004 [cited 2017 Dec 20]. Available from: webarchive.nationalarchives.gov.uk/20091005103011/https://www.sdo.nihr.ac.uk/files/adhoc/39-policy-report.pdf
  • 3. Stewart G, Dwyer J, Goulstone K. The Greater Metropolitan Clinical Taskforce: an Australian model for clinician governance. Med J Aust. 2006;184(12):597–9. PubMed
  • 4. Laliberte L, Fennell ML, Papandonatos G. The relationship of membership in research networks to compliance with treatment guidelines for early-stage breast cancer. Med Care. 2005;43(5):471–9. CrossRef | PubMed
  • 5. Greene A, Pagliari C, Cunningham S, Donnan P, Evans J, Emslie-Smith A, et al. Do managed clinical networks improve quality of diabetes care? Evidence from a retrospective mixed methods evaluation. Qual Saf Health Care. 2009;18(6):456–61. CrossRef | PubMed
  • 6. Spence K, Henderson-Smart D. Closing the evidence-practice gap for newborn pain using clinical networks. J Paediatr Child Health. 2010:1–7. CrossRef | PubMed
  • 7. Brown B, Patel C, McInnes E, Mays N, Young J, Haines M. The effectiveness of clinical networks in improving quality of care and patient outcomes: a systematic review of quantitative and qualitative studies. BMC Health Serv Res. 2016;16:360. CrossRef | PubMed
  • 8. Ahgren B, Axelsson R. Determinants of integrated health care development: chains of care in Sweden. Int J Health Plann Manage. 2007;22(2):145–57. CrossRef | PubMed
  • 9. Cunningham F, Ranmuthugala G, Westbrook J, Braithwaite J. Net benefits: assessing the effectiveness of clinical networks in Australia through qualitative methods. Implement Sci. 2012;7:108. CrossRef | PubMed
  • 10. McInnes E, Middleton S, Gardner G, Haines M, Haerstch M, Paul CL, Castaldi P. A qualitative study of stakeholder views of the conditions for and outcomes of successful networks. BMC Health Serv Res. 2012;12:49. CrossRef | PubMed
  • 11. Hogard E, Ellis R. An evaluation of a managed clinical network for personality disorder: breaking new ground or top dressing? J Eval Clin Pract. 2010;16(6):1147–56. CrossRef | PubMed
  • 12. Sheaff R, Windle K, Wistow G, Ashby S, Beech R, Dickinson A, et al. Reducing emergency bed-days for older people? Network governance lessons from the ‘Improving the Future for Older People’ programme. Soc Sci Med. 2014;106:59–66. CrossRef | PubMed
  • 13. ACI: NSW Agency for Clinical Innovation. Sydney: Agency for Clinical Innovation; 2017. About ACI [cited 2017 Dec 20]; [about 3 screens]. Available from: www.aci.health.nsw.gov.au/about-aci
  • 14. Braithwaite J, Goulston K. Turning the health system 90 degrees down under. Lancet. 2004;364(9432):397–9. CrossRef | PubMed
  • 15. The Sax Institute. What have the clinical networks achieved and who has been involved? 2006–2008. Agency for Clinical Innovation; 2011.
  • 16. Gale C, Santhakumaran S, Nagarajan S, Statnikov Y, Modi N, Neonatal Data Analysis Unit and the Medicines for Neonates Investigator Group. Impact of managed clinical networks on NHS specialist neonatal services in England: population based study. BMJ. 2012;344:e2105. CrossRef | PubMed
  • 17. Touati N, Roberge D, Denis JL, Cazale L, Pineault R, Tremblay D. Clinical leaders at the forefront of change in health-care systems: advantages and issues. Lessons learned from the evaluation of the implementation of an integrated oncological services network. Health Serv Manage Res. 2006;19(2):105–22. CrossRef | PubMed
  • 18. Haines M, Brown B, Craig JC, D'Este C, Elliott E, Klineberg E, et al. Determinants of successful clinical networks: the conceptual framework and study protocol. Implement Sci. 2012;7:16. CrossRef | PubMed
  • 19. Dominello A, Yano EM, Klineberg E, Redman S, Craig JC, Brown B, et al. The EXpert PANel Decision (EXPAND) Method: a way to measure the impact of diverse quality improvement activities of clinical networks. Public Health Res Pract. 2018;28(4):e2841829. CrossRef
  • 20. Brown B, Haines M, Middleton S, Paul C, D'Este C, Klineberg E, et al. Development and validation of a survey to measure features of clinical networks. BMC Health Serv Res. 2016;16(1):531. CrossRef | PubMed
  • 21. McInnes E, Haines M, Dominello A, Kalucy D, Jammali-Blasi A, Middleton S, Klineberg E. What are the reasons for clinical network success? A qualitative study. BMC Health Serv Res. 2015;15:497. CrossRef | PubMed
  • 22. Kalucy D. Determinants of effective clinical networks: Validation sub-study. [Masters Thesis.] Sydney: University of New South Wales; 2013.
  • 23. Bland JM. Statistics notes: Cronbach's alpha. BMJ. 1997;314:572. CrossRef | PubMed
  • 24. Braithwaite J, Greenfield D, Westbrook J, Pawsey M, Westbrook M, Gibberd R, et al. Health service accreditation as a predictor of clinical and organisational performance: a blinded, random, stratified study. Qual Saf Health Care. 2010;19(1):14–21. CrossRef | PubMed
  • 25. Nulty DD. The adequacy of response rates to online and paper surveys: what can be done? Assessment & Evaluation in Higher Education. 2008;33(3):301–14. CrossRef