Simulations in evaluation of training: a medical example using standardised patients
Introduction
Rapidly increasing medical knowledge has resulted in a growing interest in designing educational experiences to help practising doctors efficiently acquire relevant knowledge, skills and attitudes. However, until the 1980s, evaluation of continuing medical education frequently failed to demonstrate any effect on medical practice or health outcomes. The apparent lack of impact was attributed to an evaluation focus on process issues such as participant satisfaction, attendance rates or simple recall of knowledge (Fox, Davis, & Wentz, 1994). These were easy to measure but did not correlate with learning and change (Ward, 1988b).
Evaluators of programs in business and industry similarly tended to focus on process issues, in particular participant satisfaction, while neglecting effects of training on workplace practice:
Superior ratings of training can easily occur without superior learning and transfer to the work setting (Sanders, 1989, p. 61).
The measurement of skill acquisition by self-report is also convenient but on its own has been found to be an unreliable indicator of actual performance due to either over- or under-reporting (Norman, 1985).
Therefore, an ongoing challenge for the evaluation of medical education is to show that learning has occurred through a change in practice, measured objectively. The purpose of this paper is to describe the use of standardised patients (SPs) in a simulated consultation to evaluate the impact of a training program in adolescent health. The objective of the training was to help family physicians acquire the necessary knowledge, skills and attitudes and to apply them to improve their practice with adolescent patients. This method of evaluation using SPs has advantages that are applicable to the impact evaluation of training programs in any discipline where the aim of the program is to improve practice.
Evaluating continuing medical education—choosing outcomes
Given that the chief objective of continuing medical education is to change the practice of physicians, changes in competency, performance or health outcomes of patients are sound indicators that learning has occurred (Ward, 1988b).
A useful historical categorisation of training program evaluation into four levels, which build sequentially on one another, has been provided by Kirkpatrick (1977), described in Sanders (1989):
- 1.
participants' reaction to the program, including satisfaction, soon after it ends;
- 2.
learning, that is, the knowledge, skills and attitudes acquired;
- 3.
change in behaviour, that is, transfer of learning to the work setting;
- 4.
results, such as changes in organisational or health outcomes.
Methods
A preliminary qualitative study followed by a survey of Victorian family physicians' attitudes and perceived barriers toward dealing with adolescents (Veit et al., 1995, Veit et al., 1996) formed a detailed needs analysis for the design of our multifaceted training program. Over 80% reported inadequate skills in adolescent health and 87% desired further training, particularly in psychosocial issues (Veit et al., 1996).
Educational strategies were selected on the basis of evidence of their effectiveness.
Participants
Table 2 shows the age and gender distribution of participating family physicians in the intervention and control groups.
Impact evaluation—competency and performance
Table 3 describes the baseline measures related to the simulated consultation and the effect of the training program at the 7-month follow-up. The study groups were similar on all measures at baseline. The intervention group showed significantly greater improvements than the control group at the 7-month follow-up in all but one outcome. The rating of rapport and satisfaction with the clinical interview was the only measure that did not show a statistically significant improvement.
Discussion
The simulated consultation using a standardised patient was a useful strategy to obtain objective and self-perceived measures of performance of family physicians before and after training. This has also been the case in other evaluation studies of continuing medical education using SPs introduced into the doctors' practice or in the test situation (Gask, Usherwood, Thompson, & Williams, 1998; Kaaya et al., 1992; Kinnersley & Pill, 1993). Furthermore, the measures were sensitive enough to detect change following training.
Lessons learned
The SPs' rating of rapport and satisfaction with the clinical interview was the only outcome measure in our study without statistical evidence of an intervention effect. In the context of consistent improvement in all other measures, the large variation in the SP scores, with some overall improvement that did not reach statistical significance, could be interpreted as a Type II error: a real improvement occurred, but the study was underpowered to detect it. We now examine possible explanations for this finding.
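The power question can be made concrete with a small calculation. The sketch below uses a normal approximation for a two-sided, two-sample comparison; the effect size and group sizes are purely hypothetical illustrations, not the study's actual values.

```python
# Illustrative power calculation for a two-sided, two-sample comparison,
# using a normal approximation. The effect size d and group sizes are
# hypothetical, chosen only to show how small samples limit power.
from math import sqrt
from statistics import NormalDist

_nd = NormalDist()

def power_two_sample(d, n_per_group, alpha=0.05):
    """Approximate power to detect a standardized mean difference d
    between two groups of equal size n_per_group."""
    z_crit = _nd.inv_cdf(1 - alpha / 2)
    return _nd.cdf(d * sqrt(n_per_group / 2) - z_crit)

# With a moderate effect (d = 0.5) and 25 doctors per group, power falls
# well short of the conventional 0.8 target, so a real improvement could
# easily go undetected (a Type II error):
print(round(power_two_sample(0.5, 25), 2))  # → 0.42
print(round(power_two_sample(0.5, 64), 2))  # → 0.81
```

The same logic applies to a noisy outcome such as an SP's rapport rating: the larger the variance in scores, the smaller the standardised effect size d, and the more participants are needed to reach adequate power.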
Acknowledgements
We would like to thank the participating doctors and adolescents, Helen Cahill (Youth Research Centre, Melbourne University) and Dr David Rosen (University of Michigan), and, for funding, the National Health and Medical Research Council, the Royal Australian College of General Practitioners, and Dame Elisabeth Murdoch (Murdoch Children's Research Institute, Victoria).
References (51)
- Stability of standardized patients' performance in a study of clinical decision making. Family Medicine (1995)
- An overview of the uses of standardized patients for teaching and evaluating clinical skills. Academic Medicine (1993)
- Validation of standardized-patient assessment: A meaning for clinical competence. Academic Medicine (1995)
- Assessing clinical performance with standardized patients. JAMA (1997)
- Three studies of the effect of multiple standardized patients on intercase reliability of five standardized-patient examinations. Teaching and Learning in Medicine (1990)
- Effects of using two or more standardized patients to simulate the same case on case means and case failure rates. Academic Medicine (1991)
- Effect of repeated simulations by standardized patients on intercase reliability. Teaching and Learning in Medicine (1991)
- The effectiveness of CME interventions
- Changing physician performance: A systematic review of the effect of continuing medical education strategies. JAMA (1995)
- The case for research on continuing medical education. Medical Education
- Establishing competency-based standards in the professions
- Management of somatic presentations of psychiatric illness in general medical settings: Evaluation of a new training course for general practitioners. Medical Education
- Potential of using simulated patients to study the performance of general practitioners. British Journal of General Practice
- Evaluating training: Evidence vs. proof. Training and Development Journal
- Competency-based assessment using standardized patients and other measures
- Concepts of competence
- Competency-based assessment in the professions
- Conference summary. Academic Medicine
- Vocational training and continuing medical education in general practice
- Objective measurement of clinical performance. Medical Education
- A comparison of resident performance on real and simulated patients. Journal of Medical Education
- Measuring physicians' performances by using simulated patients. Journal of Medical Education
- The use of simulated patients in the assessment of actual clinical performance in general practice. New Zealand Medical Journal
Cited by (9)
- Medical and psychology students' self-assessed communication skills: A pilot study. Patient Education and Counseling (2011)
- Observer-rated rapport in interactions between medical students and standardized patients. Patient Education and Counseling (2009)
- Simulated consultations: A sociolinguistic perspective. BMC Medical Education (2016)
- Improving general practice consultations for older people with asthma: A cluster randomised control trial. Medical Journal of Australia (2009)