Differences in identifying healthcare associated infections using clinical vignettes and the influence of respondent characteristics: a cross-sectional survey of Australian infection prevention staff

Abstract

Background

Australia has commenced public reporting and benchmarking of healthcare associated infections (HAIs), despite not having a standardised national HAI surveillance program. Annual hospital Staphylococcus aureus bloodstream (SAB) infection rates are released online, with other HAIs likely to be reported in the future. Although there are known differences between hospitals in Australian HAI surveillance programs, the effect of these differences on reported HAI rates is not known.

Objective

To measure the agreement in HAI identification, classification, and calculation of HAI rates, and investigate the influence of differences amongst those undertaking surveillance on these outcomes.

Methods

A cross-sectional online survey exploring HAI surveillance practices was administered to infection prevention nurses who undertake HAI surveillance. Seven clinical vignettes describing HAI scenarios were included to measure agreement in HAI identification, classification, and calculation of HAI rates. Data on characteristics of respondents was also collected. Three of the vignettes were related to surgical site infection and four to bloodstream infection. Agreement levels for each of the vignettes were calculated. Using the Australian SAB definition, and the National Healthcare Safety Network definitions for other HAIs, we looked for an association between the proportion of correct answers and the respondents’ characteristics.

Results

Ninety-two infection prevention nurses responded to the vignettes. One vignette demonstrated 100 % agreement amongst respondents, whilst agreement for the other vignettes varied from 53 to 75 %. Working in a hospital with more than 400 beds, working in a team, and State or Territory were each associated with a correct response for two of the vignettes. Those trained in surveillance were more commonly associated with a correct response, whilst those working part-time were less likely to respond correctly.

Conclusion

These findings reveal the need for further HAI surveillance support for those working part-time and in smaller facilities. They also confirm the need to improve uniformity of HAI surveillance across Australian hospitals, and raise questions about the validity of current national comparisons of SAB rates.

Introduction

Despite the absence of a standardised national healthcare associated infection (HAI) surveillance program in Australia, public reporting of HAI rates has commenced. Annual hospital level HAI Staphylococcus aureus bloodstream (SAB) infection rates have been reported publicly since 2012–13 [1]. Although national safety and quality health service standards mandate HAI surveillance [2], there is a large variation in HAI surveillance processes across Australia’s eight States and Territories [3, 4]. Although a national definition for SAB does exist [5], a major difference is the varying use of the National Healthcare Safety Network (NHSN) definitions [6] with or without local modifications to identify other HAIs [4]. It is unclear how much this variation influences the interpretation and application of definitions and subsequent HAI rates.

Whilst benchmarking and public reporting of HAI is new to Australia, it has been common in several countries for some time, including the USA, England, and France [7]. Nevertheless, there remains significant concern regarding the use of HAI data as performance indicators, particularly in light of insufficient standardisation of events being monitored [8, 9].

If HAI rates are used as quality indicators, data must be robust and reliable [10]. A recent study by Keller et al. identified low inter-rater reliability between those performing HAI surveillance and concluded that such discordance could “dramatically affect not only hospital reputations but also hospital reimbursement” [11]. Despite the lack of evidence demonstrating a reduction of HAI rates using financial incentives [12, 13], one Australian State has recently implemented financial penalties for preventable HAI bloodstream infections [14].

If Australia is to commence public reporting of other HAI data, it is important to be assured the data is robust and reliable. The objective of this study was to measure agreement in HAI identification, classification, and calculation of HAI rates amongst those undertaking HAI surveillance in Australian hospitals using a series of clinical vignettes. We also investigated if differences amongst those undertaking surveillance influenced their responses.

Method

Study instrument

A total of seven vignettes representing HAI surveillance situations that may occur in the acute care setting were developed as part of a larger cross-sectional survey which explored HAI surveillance practices in Australian hospitals [4]. The vignettes were based on those published in similar studies and in a local implementation guide [11, 15, 16], and were further developed in collaboration with infection prevention experts from a jurisdictional surveillance program. As not all hospitals undertake surveillance on the same type of infection, the survey was designed so that participants only answered those vignettes on which they undertook surveillance. For example, if a respondent indicated they did not perform surveillance on central line associated bloodstream infections (CLABSI), they were not presented with a vignette describing a potential CLABSI.

The vignettes were categorised into either a surgical site infection (SSI) or bloodstream infection. These types of infection were included as they represent the most common types of HAI surveillance undertaken. The first was specific to those undertaking SSI surveillance on coronary artery bypass graft surgery (CABG) to identify how they calculated an infection rate if more than one wound site was involved. A gastrointestinal surgery vignette was designed to be a straightforward case and therefore considered a positive control. The other SSI vignette was slightly more challenging in that it sought clarification as to whether or not the SSI was an organ space or deep SSI.

The SAB vignette asked respondents to indicate whether they would classify the event as healthcare associated. Three CLABSI vignettes sought to identify differences arising from local modifications of the NHSN definitions, and from the application of either 48 h or 2 calendar days as the marker of hospital acquisition.

For each vignette, participants were instructed to answer applying their “usual definitions and methods”.

The survey was constructed using a secure online tool and piloted by four current and two former infection prevention staff. The pilot participants provided feedback on clarity, simplicity, flow and logic of the survey. After minor amendments, the survey was further piloted by two of the six involved in the initial pilot.

Population and recruitment

The survey was administered to infection prevention nurses who undertake HAI surveillance from both public (government funded) and private acute care facilities with more than 50 beds. Facilities of this size were targeted as they were considered more likely to undertake HAI surveillance on a routine basis.

Recruitment was through an open invitation email distributed through the Australasian College for Infection Prevention and Control (ACIPC) list server. Coordinators of State and Territory surveillance programs, where they existed, were contacted and requested to encourage those in their State and Territory to complete the survey. Members of the Australian Commission on Safety and Quality in Health Care HAI Advisory Committee were requested to overtly support completion of the survey to their peers and colleagues. The email requested all recipients to forward it on to others who may not have received it.

No identifying details of participants or their facilities were requested. Ethics permission was granted by the University Human Research Ethics Committee, Queensland University of Technology (1400000339).

Statistical analysis

Agreement for the SSI and CLABSI vignettes was calculated as the proportion of responses considered correct using NHSN definitions [6], and for the SAB vignette according to the Australian SAB definition [5]. Data was analysed using Stata, version 13 (Stata Corp, College Station, Texas).

Single variable predictors of correct answers

For each vignette, univariate logistic regression was used to generate an odds ratio of answering correctly for each of the participants’ characteristics. To examine all vignettes combined, Poisson regression was used to analyse the total number correct across all vignettes, with an adjustment to the denominator because participants only answered those vignettes on which they undertook surveillance. The results are presented as risk ratios and 95 % confidence intervals, where a risk ratio above 1 means a greater ‘risk’ of a correct answer. To make these results comparable with the logistic regression models of the individual vignettes, the odds ratios from the logistic regressions were converted to risk ratios [17].
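The odds-ratio-to-risk-ratio conversion cited above [17] can be sketched in a few lines; the function and the example values below are illustrative only and do not reproduce the study data.

```python
def or_to_rr(odds_ratio: float, baseline_risk: float) -> float:
    """Convert an odds ratio to a risk ratio, given the baseline risk
    (proportion with the outcome) in the reference group, using
    RR = OR / (1 - p0 + p0 * OR)  (Grant, BMJ 2014)."""
    return odds_ratio / (1 - baseline_risk + baseline_risk * odds_ratio)


# Hypothetical example: if 60 % of the reference group answer correctly,
# an odds ratio of 3.0 corresponds to a risk ratio of only ~1.36 --
# odds ratios overstate effects when the outcome is common.
print(round(or_to_rr(3.0, 0.6), 2))  # 1.36
```

Because correct answers were common (53–100 % per vignette), this shrinkage is exactly why the conversion matters before comparing the logistic results with the Poisson risk ratios.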

To explore the influence of location (i.e. the State or Territory of the respondent), a Kruskal–Wallis test was used for each individual vignette and for the combined analysis of the total number correct.
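A minimal, standard-library sketch of the Kruskal–Wallis statistic used here (no tie correction, so all observations are assumed distinct); the per-jurisdiction scores are hypothetical, not the study data.

```python
def kruskal_h(*groups):
    """Kruskal-Wallis H statistic:
    H = 12 / (N * (N + 1)) * sum(R_i^2 / n_i) - 3 * (N + 1),
    where R_i is the rank sum of group i. No tie correction, so
    observations are assumed distinct."""
    pooled = sorted(x for g in groups for x in g)
    rank = {x: i + 1 for i, x in enumerate(pooled)}  # ranks 1..N
    n_total = len(pooled)
    rank_sum_term = sum(sum(rank[x] for x in g) ** 2 / len(g) for g in groups)
    return 12.0 / (n_total * (n_total + 1)) * rank_sum_term - 3 * (n_total + 1)


# Hypothetical totals of correct answers for three jurisdictions.
a = [6.0, 5.5, 6.2, 5.8]
b = [3.0, 2.5, 3.4, 2.8]
c = [4.6, 4.1, 4.9]

h = kruskal_h(a, b, c)
# Under the null, H is approximately chi-square with k - 1 = 2 df;
# the 5 % critical value is 5.991, so H above that suggests the
# jurisdictions' distributions differ.
print(h > 5.991)  # True
```

In practice a library routine (e.g. `scipy.stats.kruskal`, which also handles ties and returns a p-value) would be used rather than this hand-rolled version.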

Multivariable predictors of correct answers

In an attempt to identify independent predictors of answering correctly, a multivariable Poisson model of the total number correct was developed from characteristics identified in the Poisson univariate analysis with a p-value under 0.5. A high p-value threshold was used to ensure that all potentially important variables were considered. To check for multicollinearity, the variance inflation factor (VIF) of each variable was examined. Variables with a VIF of 5 or above, indicating high collinearity, were removed from the final multivariable model.
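The VIF screen described above can be sketched as follows; the variable names and simulated data are hypothetical stand-ins for the respondent characteristics, not the study data.

```python
import numpy as np


def vif(X: np.ndarray) -> np.ndarray:
    """Variance inflation factor for each column of X:
    VIF_j = 1 / (1 - R^2_j), where R^2_j comes from regressing
    column j on the remaining columns plus an intercept."""
    n, p = X.shape
    out = np.empty(p)
    for j in range(p):
        y = X[:, j]
        A = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        ss_res = ((y - A @ beta) ** 2).sum()
        ss_tot = ((y - y.mean()) ** 2).sum()
        out[j] = np.inf if ss_res == 0 else ss_tot / ss_res
    return out


# Simulated predictors for 92 respondents: the first two columns are
# nearly identical (highly collinear), the third is independent.
rng = np.random.default_rng(1)
team = rng.normal(size=92)
team_dup = team + 0.01 * rng.normal(size=92)
large_hospital = rng.normal(size=92)
X = np.column_stack([team, team_dup, large_hospital])

vifs = vif(X)
print(vifs > 5)  # only the two collinear columns exceed the cut-off
```

Dropping one of the two collinear columns and re-running `vif` would bring the remaining VIFs back near 1, which mirrors the omission of “Work in a Team” from Model B.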

Results

A total of 92 responses to the vignettes were received. All respondents were registered nurses with an average age of 49 and a mean of 12 years of experience working in infection prevention. There was representation from each of the eight States and Territories in Australia. The majority of respondents worked as part of a team (73 %) and in public facilities (80 %). Only 51 % reported having been trained in HAI surveillance. The median number of vignettes answered was 5 out of a maximum of 7 (Table 1).

Table 1 Number of vignettes answered by respondents

A summary of each vignette, response options and response rates are listed in Table 2. The number of respondents varied from 23 for Vignette 1 to 85 for Vignette 5. The control vignette was correctly answered by all respondents; however, the correct response rates for the other vignettes varied from 53 to 75 % (Table 2).

Table 2 Summary of vignettes and responses (responses in bold indicate correct response)

Predictors of correct answers

Univariate analysis identified three factors that were statistically significantly associated with the outcome of two of the vignettes (Table 3). For Vignette 3, which challenged the responder with the difference between classifying a SSI as either an organ space infection or a deep infection, those who worked in a team were more than twice as likely to respond correctly (RR = 2.16, [95 % CI: 1.14, 2.97]). The State or Territory of the respondents was also statistically significantly associated with a correct answer (p = 0.045, Kruskal–Wallis test).

Table 3 Univariate logistic regression analysis of vignettes and respondent characteristics, with the Kruskal–Wallis test of the influence of State or Territory

Vignette 5 explored the difference between the current NHSN criteria for CLABSI and the 2008 criteria. Working in a hospital with over 400 beds more than doubled the likelihood of a correct answer (RR = 2.42, [95 % CI: 1.09, 3.45]), but those who had had their surveillance skills assessed were less likely to answer correctly (RR = 0.32, [95 % CI: 0.09, 0.98]). There was evidence that the proportion answering correctly varied between States or Territories (Kruskal–Wallis test: p = 0.043).

The characteristics most frequently associated with a correct response across all vignettes were: working in a hospital with over 400 beds, having been formally trained in surveillance, being trained by a central organisation, working in a team, and having daily access to an epidemiologist. The characteristic most commonly associated with an incorrect response was working part-time.

No statistically significant factors were identified for the total number correct, but the characteristics most strongly associated with a correct response were working in a team (RR = 1.15, 95 % CI: 0.89, 1.49) and daily access to an epidemiologist (RR = 1.15, 95 % CI: 0.81, 1.62). Working part-time was most strongly associated with an incorrect answer (RR = 0.89, 95 % CI: 0.69, 1.14).

Multivariable analysis

Two multivariable models were developed (Table 4). Characteristics from the univariate analysis that had a p-value < 0.5 were included in the first model (Model A). The variable “Work in a Team” was found to have a VIF of 5. Therefore, a second multivariable model (Model B) was generated following the omission of “Work in a Team”.

Table 4 Multivariable analysis of respondent characteristics using Poisson regression of the number of correct answers

For both models, the probability of a correct answer increased by 12 % if the respondent had daily access to an epidemiologist, and by 8 % if they held an academic degree or higher. For Model A the probability increased by 11 % if they worked as part of a team. Both models also identified that incorrect answers were more common for respondents who worked part-time or had less than five years’ experience. No statistically significant factors were identified.

Discussion

This study has identified disparity in HAI identification, classification, and calculation of HAI rates using clinical vignettes in large acute care Australian hospitals. Although one vignette returned an encouraging 100 % correct response rate, it was included as a positive control. The 53–75 % range of correct responses for the other six vignettes follows on from recent findings describing the broad variation amongst surveillance practices in Australia [4], and implies that comparisons between hospitals, States and Territories, and any aggregation of existing data, will be flawed. This is evident from the following findings.

First, aggregation of SSI rates following CABG will underestimate the true rate whilst some hospitals, States and Territories persist in using each incision as the denominator to calculate a rate. Second, the inability to distinguish between organ space and deep SSI means that any aggregated SSI data reported by type of infection will likely be unreliable and incomparable. Third, the present use of both 48 h and 2 calendar days as criteria for CLABSI acquisition clearly affects the CLABSI rate reported. Fourth, even though a national definition for SAB exists (unlike for the potential HAIs described in the other vignettes), when presented with a complex SAB event the ability to correctly identify it was only moderate. This is important as current SAB rates, which are publicly reported on a safety and quality website in Australia that encourages hospital comparisons [1], could be misleading.

The univariate analysis findings suggest that those from larger hospitals and in States with established programs are more likely to be in agreement with current NHSN HAI definitions. This could be explained by the team environment of larger hospitals which may provide improved knowledge from greater learning opportunities, and the training provided by the established programs.

Although no statistically significant predictors were identified in the multivariable analysis, the results from both models indicate that those with less experience and those who work part-time require increased support and training to identify HAIs.

Daily access to an epidemiologist was positively associated with a correct answer for all vignettes and also both models of the multivariable analysis. Given that only 1 % of respondents have daily access to an epidemiologist, this may be a proxy for other factors (e.g., a thriving research culture) that have not been identified in this study and is worthy of further exploration.

The results of this study are consistent with recent international studies that have identified broad variation in the identification of both SSI and CLABSI within and between HCW groups [11, 15, 18–21]. Similar to Keller’s study [11], we attempted to identify characteristics that may act as independent predictors of a correct response. Keller identified that those with a clinical background were more likely to identify a HAI correctly. All the respondents to this study were infection prevention nurses with a clinical background and, like Keller, we identified no other significant predictors in a multivariable model.

Unlike a recent study using clinical vignettes [22], we were unable to estimate sensitivity and specificity. Although most hospitals use HAI definitions based on NHSN, there is no uniform national definition for surgical site infection or CLABSI in Australia, and so there is no gold standard against which to measure sensitivity and specificity. Also, the main objective of this study was to measure agreement amongst participants, rather than sensitivity and specificity.

There are limitations to this study. Selection bias and small numbers may influence the results, although despite the small number of responses, variation in agreement is clearly evident. A survey response rate could not be calculated as the number of infection prevention staff in Australia is unknown [23], and we are uncertain how many received the survey. Approximately 500 ACIPC members subscribe to the list server (personal communication, ACIPC secretary, June 2014), but not all undertake HAI surveillance, nor are all infection prevention staff members of ACIPC. It is estimated there are approximately 215 acute public hospitals with more than 50 beds in Australia [24]; our respondents came from all States and Territories with a broad range of experience in different sized hospitals, and so we are confident the sample is broadly representative of those undertaking HAI surveillance. Not all participants answered each vignette, as they were only required to answer vignettes relevant to the type of surveillance they usually perform; some vignettes were therefore legitimately left unanswered. Finally, completing vignettes online does not fully reflect practice, as many infection prevention staff will discuss potential HAIs before making a decision, particularly those who work in teams.

A major strength of this study is its anonymity: respondents were under no pressure when uncertain. This may therefore represent a more accurate reflection of infection prevention staff’s true understanding.

Conclusion

The results of this study have been derived from those who are currently charged with collecting HAI data, and indicate that training and support resources for those who work part-time and those in smaller facilities need to be strengthened.

Before national reporting can be established, robust standardised surveillance processes need to be implemented. Presently, the validity of existing SAB data is questionable, and the temptation to aggregate any existing HAI rates to generate national data must be avoided.

Abbreviations

HAI:

Healthcare associated infection

SAB:

Staphylococcus aureus bacteraemia

CLABSI:

Central line associated bloodstream infection

SSI:

Surgical site infection

CABG:

Coronary artery bypass graft

NHSN:

National Healthcare Safety Network

ACIPC:

Australasian College for Infection Prevention and Control

VIF:

Variance inflation factor

RR:

Risk ratio

CI:

Confidence Interval

References

  1. National Health Performance Authority. MyHospitals. In: MyHospitals. 2015. http://www.myhospitals.gov.au. Accessed 9 March 2015.

  2. Australian Commission on Safety and Quality in Healthcare. Standard 3. Preventing and Controlling Hospital Acquired Infection. Sydney: Commonwealth of Australia; 2012.

  3. Murphy CL, McLaws ML. Methodologies used in surveillance of surgical wound infections and bacteremia in Australian hospitals. Am J Infect Control. 1999;27(6):474–81.

  4. Russo PL, Cheng AC, Richards M, Graves N, Hall L. Variation in health care-associated infection surveillance practices in Australia. Am J Infect Control. 2015. doi:10.1016/j.ajic.2015.02.029.

  5. Australian Commission on Safety and Quality in Healthcare. National definition and calculation of HAI Staphylococcus aureus bacteraemia. 2014. http://www.safetyandquality.gov.au/our-work/healthcare-associated-infection/national-hai-surveillance-initiative/national-definition-and-caluculation-of-hai-staphylococcus-aureus-bacteraemia/. Accessed 18 September 2014.

  6. Horan TC, Andrus M, Dudeck MA. CDC/NHSN surveillance definition of health care–associated infection and criteria for specific types of infections in the acute care setting. Am J Infect Control. 2008;36(5):309–32. http://dx.doi.org/10.1016/j.ajic.2008.03.002.

  7. Haustein T, Gastmeier P, Holmes A, Lucet J-C, Shannon RP, Pittet D, et al. Use of benchmarking and public reporting for infection control in four high-income countries. Lancet Infect Dis. 2011;11(6):471–81.

  8. Cheng AC, Bass P, Scheinkestel C, Leong T. Public reporting of infection rates as quality indicators. Med J Aust. 2011;195(6):326–7. doi:10.5694/mja11.10778.

  9. Haut ER, Pronovost PJ. Surveillance bias in outcomes reporting. JAMA. 2011;305(23):2462–3. doi:10.1001/jama.2011.822.

  10. Leaper D, Tanner J, Kiernan M. Surveillance of surgical site infection: more accurate definitions and intensive recording needed. J Hosp Infect. 2013;83(2):83–6. http://dx.doi.org/10.1016/j.jhin.2012.11.013.

  11. Keller SC, Linkin DR, Fishman NO, Lautenbach E. Variations in identification of healthcare-associated infections. Infect Control Hosp Epidemiol. 2013;34(7):678–86. doi:10.1086/670999.

  12. Calderwood MS, Kleinman K, Soumerai SB, Jin R, Gay C, Platt R, et al. Impact of Medicare’s payment policy on mediastinitis following coronary artery bypass graft surgery in US hospitals. Infect Control Hosp Epidemiol. 2014;35(2):144–51. doi:10.1086/674861.

  13. Lee G, Kleinman K, Soumerai S, Tse A, Cole D, Fridkin SK, et al. Effect of nonpayment for preventable infections in U.S. hospitals. N Engl J Med. 2012;367(15):1428–37.

  14. Runnegar N. What proportion of healthcare-associated bloodstream infections (HA-BSI) are preventable and what does this tell us about the likely impact of financial disincentives on HA-BSI rates? Australasian College for Infection Prevention and Control 2014 Conference; 23–26 November 2014; Adelaide, Australia 2014.

  15. Wright M-O, Hebden JN, Allen-Bridson K, Morrell GC, Horan TC. An American Journal of Infection Control and National Healthcare Safety Network data quality collaboration: a supplement of new case studies. Am J Infect Control. 2012;40(5, Supplement):S32–40. http://dx.doi.org/10.1016/j.ajic.2012.03.010.

  16. Australian Commission on Safety and Quality in Healthcare. Implementation Guide for Surveillance of Staphylococcus aureus bacteraemia. 2013. http://www.safetyandquality.gov.au/wp-content/uploads/2012/02/SAQ019_Implementation_guide_SAB_v10.pdf. Accessed 18 September 2014.

  17. Grant RL. Converting an odds ratio to a range of plausible relative risks for better communication of research findings. BMJ. 2014;348:f7450. doi:10.1136/bmj.f7450.

  18. Birgand G, Lepelletier D, Baron G, Barrett S, Breier AC, Buke C, et al. Agreement among healthcare professionals in ten European countries in diagnosing case-vignettes of surgical-site infections. PLoS One. 2013;8(7):e68618. doi:10.1371/journal.pone.0068618.

  19. Lepelletier D, Ravaud P, Baron G, Lucet J-C. Agreement among Health Care Professionals in diagnosing case vignette-based surgical site infections. PLoS One. 2012;7(4):e35131. doi:10.1371/journal.pone.0035131.

  20. Mayer J, Greene T, Howell J, Ying J, Rubin MA, Trick WE, et al. Agreement in classifying bloodstream infections among multiple reviewers conducting surveillance. Clin Infect Dis. 2012;55(3):364–70.

  21. Rich KL, Reese SM, Bol KA, Gilmartin HM, Janosz T. Assessment of the quality of publicly reported central line-associated bloodstream infection data in Colorado, 2010. Am J Infect Control. 2013;41(10):874–9. doi:10.1016/j.ajic.2012.12.014.

  22. Schröder C, Behnke M, Gastmeier P, Schwab F, Geffers C. Case vignettes to evaluate the accuracy of identifying healthcare-associated infections by surveillance persons. J Hosp Infect. 2015. http://dx.doi.org/10.1016/j.jhin.2015.01.014.

  23. Hall L, Halton K, Macbeth D, Gardner A, Mitchell BG. Roles, responsibilities and scope of practice: describing the ‘state of play’ for infection control professionals in Australia and New Zealand. Healthcare Infection. 2015. http://dx.doi.org/10.1071/HI14037.

  24. Australian Institute of Health and Welfare. Australian hospital statistics 2012–13. Health Services Series No. 54. Cat. No. HSE 145. Canberra: AIHW; 2014.

Acknowledgements

The authors are grateful for the assistance from the infection prevention staff undertaking the survey, the Australian Commission for Safety and Quality in Health Care, the State and Territory Health Department representatives, and the Australasian College for Infection Prevention and Control.

Some of the findings in this manuscript have been presented in a poster at the Healthcare Infection Society conference in Lyon, France, November 2014.

Author information

Corresponding author

Correspondence to Philip L. Russo.

Additional information

Competing interests

Financial competing interests

Philip Russo acknowledges the Rosemary Norman Foundation and the Nurses Memorial Centre through the award of the “Babe” Norman Scholarship to enable PhD studies. He also receives minor support from NHMRC funded Centre of Research Excellence in Reducing Healthcare Associated Infection (Grant 1030103).

Adrian Barnett is supported by the Australian Centre for Health Services Innovation.

Allen Cheng is supported by a NHMRC Career Development Fellowship (Grant 1068732).

Nicholas Graves is funded by a NHMRC Practitioner Fellowship (Grant 1059565).

Lisa Hall receives funding from the NHMRC Centre of Research Excellence in Reducing Healthcare Associated Infection (Grant 1030103).

Non financial competing interests

Philip Russo is a member of the Australian Commission for Safety and Quality in Health Care, Healthcare Associated Infection Advisory Committee, and previously Operations Director at the VICNISS Coordinating Centre.

Adrian Barnett – none declared.

Allen Cheng - none declared.

Mike Richards is the Director of the VICNISS Coordinating Centre, which established and runs the State healthcare infection surveillance program in Victoria. He is Chair of the Australian Commission for Safety and Quality in Health Care, Healthcare Associated Infection Advisory Committee.

Nicholas Graves provides advice to the Centre for Healthcare Related Infection Surveillance and Prevention (CHRISP), QLD Health, and is a member of the Australian Commission for Safety and Quality in Health Care, Healthcare Associated Infection Advisory Committee.

Lisa Hall was previously the Manager of Epidemiology and Research at CHRISP, and is a member of the Australian Commission for Safety and Quality in Health Care, Healthcare Associated Infection Technical Working Group.

Authors’ contributions

PLR conceived, designed, administered and analysed the study and drafted and prepared the manuscript. AGB provided statistical advice and assisted in preparation of the manuscript. ACC advised on study design and analysis and manuscript preparation. MR advised on study design and analysis and manuscript preparation. NG advised on study design and analysis and manuscript preparation. LH supervised study design, administration, analysis and manuscript preparation. All authors read and approved the final manuscript.

Rights and permissions

This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.

About this article

Cite this article

Russo, P.L., Barnett, A.G., Cheng, A.C. et al. Differences in identifying healthcare associated infections using clinical vignettes and the influence of respondent characteristics: a cross-sectional survey of Australian infection prevention staff. Antimicrob Resist Infect Control 4, 29 (2015). https://doi.org/10.1186/s13756-015-0070-7
