Simulations in evaluation of training: a medical example using standardised patients

https://doi.org/10.1016/S0149-7189(01)00047-7

Abstract

In the evaluation of the effectiveness of continuing medical education, observation of a change in physicians' practice offers sound evidence of learning. Such measurements are technically challenging and carry ethical issues of patient confidentiality and vulnerability. This study aimed to address these issues by evaluating the impact of a training program on the performance of family physicians in a simulated consultation with adolescents trained as standardised patients (SPs). One hundred and eight physicians were randomised into an intervention or a control group and were tested pre-training and at 6 and 12 months post-training. Physicians rated self-perceived competency; SPs rated rapport, satisfaction and discussion of confidentiality; and independent faculty observers rated clinical competency. All measures detected a significant impact of the training on physicians' performance except the SPs' subjective rating of rapport and satisfaction. The results indicate that the method is feasible and sensitive to changes in performance. Further research is needed to clarify questions raised about the SPs' subjective ratings.

Introduction

Rapidly increasing medical knowledge has generated growing interest in designing educational experiences that help practising doctors efficiently acquire relevant knowledge, skills and attitudes. However, until the 1980s, evaluations of continuing medical education frequently failed to demonstrate any effect on medical practice or health outcomes. The apparent lack of impact was attributed to an evaluation focus on process issues such as participant satisfaction, attendance rates or simple recall of knowledge (Fox, Davis, & Wentz, 1994). These were easy to measure but did not correlate with learning and change (Ward, 1988b).

Evaluators of programs in business and industry similarly tended to focus on process issues, in particular participant satisfaction, while neglecting effects of training on workplace practice:

Superior ratings of training can easily occur without superior learning and transfer to the work setting (Sanders, 1989, p. 61).

The measurement of skill acquisition by self-report is also convenient but on its own has been found to be an unreliable indicator of actual performance due to either over- or under-reporting (Norman, 1985).

Therefore, an ongoing challenge for the evaluation of medical education is to show that learning has occurred through a change in practice, measured objectively. The purpose of this paper is to describe the use of standardised patients (SPs) in a simulated consultation to evaluate the impact of a training program in adolescent health. The objective of the training was to help family physicians acquire the necessary knowledge, skills and attitudes and to use these to improve their practice with adolescent patients. This method of evaluation using SPs has advantages that are applicable to the impact evaluation of training programs in any discipline where the aim of the program is to improve practice.

Section snippets

Evaluating continuing medical education—choosing outcomes

Given that the chief objective of continuing medical education is to change the practice of physicians, changes in competency, performance or health outcomes of patients are sound indicators that learning has occurred (Ward, 1988b).

A useful historical categorisation of training program evaluation into four levels, which build sequentially on one another, was provided by Kirkpatrick (1977), as described in Sanders (1989):

  • 1. participants' reaction to the program, including satisfaction soon after

Methods

A preliminary qualitative study followed by a survey of Victorian family physicians' attitudes and perceived barriers toward dealing with adolescents (Veit et al., 1995, 1996) formed a detailed needs analysis for the design of our multifaceted training program. Over 80% reported inadequate skills in adolescent health and 87% desired further training, particularly in psychosocial issues (Veit et al., 1996).

Educational strategies were selected on the basis of evidence of their

Participants

Table 2 shows the age and gender distribution of participating family physicians in the intervention and control groups.

Impact evaluation—competency and performance

Table 3 describes the baseline measures related to the simulated consultation and the effect of the training program at the 7-month follow-up. The study groups were similar on all measures at baseline. The intervention group showed significantly greater improvements than the control group at the 7-month follow-up in all but one outcome. The rating of rapport and satisfaction
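
The statistical analysis behind these comparisons is not shown in this snippet. As a minimal illustrative sketch only (the per-arm size of 54, the 0–100 score scale and the effect magnitudes below are assumptions, not the study's data), a simple two-group comparison of baseline scores and of change scores could look like this in Python:

```python
# Illustrative sketch, not the study's actual analysis: compare improvement
# in an outcome score between intervention and control groups by testing
# the difference in change scores (follow-up minus baseline).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical clinical-competency scores (0-100) for ~54 physicians per arm.
baseline_int = rng.normal(60, 10, 54)
followup_int = baseline_int + rng.normal(8, 10, 54)   # assumed training effect
baseline_ctl = rng.normal(60, 10, 54)
followup_ctl = baseline_ctl + rng.normal(1, 10, 54)   # little change untrained

change_int = followup_int - baseline_int
change_ctl = followup_ctl - baseline_ctl

# 1. Check that groups are similar at baseline.
t_base, p_base = stats.ttest_ind(baseline_int, baseline_ctl)

# 2. Test whether the intervention group improved more than the control group.
t_change, p_change = stats.ttest_ind(change_int, change_ctl)

print(f"Baseline difference:     t={t_base:.2f}, p={p_base:.3f}")
print(f"Change-score difference: t={t_change:.2f}, p={p_change:.3f}")
```

A fuller analysis would more likely adjust for baseline (for example ANCOVA or a mixed model), but the change-score comparison conveys the logic of Table 3: similar groups at baseline and a larger improvement in the intervention arm.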

Discussion

The simulated consultation using a standardised patient was a useful strategy to obtain objective and self-perceived measures of performance of family physicians before and after training. This has also been the case in other evaluation studies of continuing medical education using SPs introduced into the doctors' practice or in the test situation (Gask, Usherwood, Thompson, & Williams, 1998; Kaaya et al., 1992; Kinnersley & Pill, 1993). Furthermore, the measures were sensitive enough to detect

Lessons learned

The SPs' rating of rapport and satisfaction with the clinical interview was the only outcome measure in our study without statistical evidence of an intervention effect. In the context of consistent improvement in all other measures, the large variation in the SP scores, with some overall improvement (though not statistically significant), could be interpreted as a Type II error, that is, that improvement really occurred but our study was underpowered and could not identify it. We now examine
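
To make the power question concrete, here is a minimal sketch (the effect size of d = 0.3 and the assumption of roughly 54 physicians per arm are illustrative, not figures from the paper) that uses statsmodels to estimate the power available to detect a small standardised difference and the per-arm sample size needed for 80% power:

```python
# Minimal power sketch with illustrative assumptions, not the study's analysis.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Power to detect a small standardised difference (Cohen's d = 0.3)
# with ~54 physicians per arm and a two-sided alpha of 0.05.
power = analysis.solve_power(effect_size=0.3, nobs1=54, alpha=0.05,
                             ratio=1.0, alternative='two-sided')
print(f"Power with 54 per arm for d=0.3: {power:.2f}")

# Sample size per arm needed to reach 80% power for the same effect.
n_needed = analysis.solve_power(effect_size=0.3, power=0.8, alpha=0.05,
                                ratio=1.0, alternative='two-sided')
print(f"Physicians per arm needed for 80% power: {n_needed:.0f}")
```

If the true improvement in SP ratings were small, a study of this size would have limited power to detect it, which is consistent with the Type II error interpretation above.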

Acknowledgements

We would like to thank the participating doctors and adolescents, Helen Cahill (Youth Research Centre, Melbourne University) and Dr David Rosen (University of Michigan), and, for funding, the National Health and Medical Research Council, the Royal Australian College of General Practitioners, and Dame Elisabeth Murdoch, Murdoch Children's Research Institute, Victoria.

References (51)

  • L.W Badger et al.

    Stability of standardized patients' performance in a study of clinical decision making

    Family Medicine

    (1995)
  • H.S Barrows

    An overview of the uses of standardized patients for teaching and evaluating clinical skills

    Academic Medicine

    (1993)
  • J.A Colliver

    Validation of standardized-patient assessment: A meaning for clinical competence

    Academic Medicine

    (1995)
  • J.A Colliver et al.

    Assessing clinical performance with standardized patients

    JAMA

    (1997)
  • J.A Colliver et al.

    Three studies of the effect of multiple standardized patients on intercase reliability of five standardized-patient examinations

    Teaching and Learning in Medicine

    (1990)
  • J.A Colliver et al.

    Effects of using two or more standardized patients to simulate the same case on case means and case failure rates

    Academic Medicine

    (1991)
  • J.A Colliver et al.

    Effect of repeated simulations by standardized patients on intercase reliability

    Teaching and Learning in Medicine

    (1991)
  • D Davis et al.

    The effectiveness of CME interventions

  • D.A Davis et al.

    Changing physician performance. A systematic review of the effect of continuing medical education strategies

    JAMA

    (1995)
  • R Fox et al.

    The case for research on continuing medical education

  • L Gask et al.

    Medical Education

    (1998)
  • A Gonczi et al.

    Establishing competency-based standards in the professions

    (1990)
  • S Kaaya et al.

    Management of somatic presentations of psychiatric illness in general medical settings: Evaluation of a new training course for general practitioners

    Medical Education

    (1992)
  • P Kinnersley et al.

    Potential of using simulated patients to study the performance of general practitioners

    British Journal of General Practice

    (1993)
  • D.L Kirkpatrick

    Evaluating training: Evidence vs. proof

    Training and Development Journal

    (1977)
  • M Kopelow

    Competency-based assessment using standardized patients and other measures

  • T Mast et al.

    Concepts of competence

  • G.N Masters et al.

    Competency-based assessment in the professions

    (1990)
  • G.E Miller

    Conference summary

    Academic Medicine

    (1993)
  • P.R Mudge et al.

    Vocational training and continuing medical education in general practice

    (1990)
  • G.R Norman

    Objective measurement of clinical performance

    Medical Education

    (1985)
  • G.R Norman et al.

    A comparison of resident performance on real and simulated patients

    Journal of Medical Education

    (1982)
  • G.R Norman et al.

    Measuring physicians' performances by using simulated patients

    Journal of Medical Education

    (1985)
  • J.J O'Hagan et al.

    The use of simulated patients in the assessment of actual clinical performance in general practice

    New Zealand Medical Journal

    (1986)