
Educators’ behaviours during feedback in authentic clinical practice settings: an observational study and systematic analysis

Abstract

Background

Verbal feedback plays a critical role in health professions education, but it is not clear which components of effective feedback have been successfully translated from the literature into supervisory practice in the workplace, and which have not. The purpose of this study was to observe and systematically analyse educators’ behaviours during authentic feedback episodes in contemporary clinical practice.

Methods

Educators and learners videoed themselves during formal feedback sessions in routine hospital training. Researchers compared educators’ practice to a published set of 25 educator behaviours recommended for quality feedback. Individual educator behaviours were rated 0 = not seen, 1 = done somewhat, 2 = consistently done. To characterise each individual educator’s practice, their behaviour scores were summed. To describe how commonly each behaviour was observed across all the videos, mean scores were calculated.

Results

Researchers analysed 36 videos involving 34 educators (26 medical, 4 nursing, 4 physiotherapy professionals) and 35 learners across different health professions, specialties, levels of experience and gender. There was considerable variation in both educators’ feedback practices, indicated by total scores for individual educators ranging from 5.7 to 34.2 (maximum possible 48), and how frequently specific feedback behaviours were seen across all the videos, indicated by mean scores for each behaviour ranging from 0.1 to 1.75 (maximum possible 2). Educators commonly provided performance analysis, described how the task should be performed, and were respectful and supportive. However, a number of recommended feedback behaviours were rarely seen, such as clarifying the session purpose and expectations, promoting learner involvement, creating an action plan or arranging a subsequent review.

Conclusions

These findings clarify contemporary feedback practice and inform the design of educational initiatives to help health professional educators and learners to better realise the potential of feedback.


Background

Modern clinical training, aligned with competency-based education and programmatic assessment, is focused on assessment and feedback on routine tasks in the workplace, targeting the highest level in Miller’s framework for competency assessment [1,2,3]. Feedback is one of the most powerful influences on learning and performance [4,5,6,7,8]. It offers the opportunity for a learner to benefit from another practitioner’s critique, reasoning, advice and support. Through this collaboration, the learner can enhance their understanding of what the performance targets are and how they can reach those standards [9, 10]. ‘On the run’ or informal feedback refers to brief fragments of feedback that occur in the midst of delivering patient care. A formal feedback session typically refers to a senior clinician (educator) and a student or junior clinician (learner) discussing the learner’s performance in a more comprehensive fashion. Formal feedback sessions often occur as a mid- or end-of-attachment appraisal or as part of a workplace-based assessment. However, the success of this model relies on everyday clinicians providing effective feedback. It is not clear which components of effective feedback have been successfully translated from the literature into supervisory practice in the workplace, and which have not. Information on gaps in translation could be used to better target professional development training, or to design strategies to overcome impediments to implementing quality feedback behaviours.

Studies involving direct observation of authentic feedback in hospitals are rare. Observational studies are highly valuable, as they provide primary evidence of what actually happens in everyday clinical education. Direct observation can be achieved either by researchers observing the activity or via video-observation. We identified only a few previous direct observation studies: these involved junior learners (medical students or junior residents) in a few specialties (internal or family medicine) receiving formal or informal feedback (in outpatient clinics, on a ward, or following summative simulated clinical scenarios) [11,12,13,14,15,16,17,18]. An additional single study involved physiotherapy students during formal mid- or end-of-attachment feedback [19]. The scarcity of observational studies may be related to their time-consuming nature, the difficulty in arranging observers or video recording to coincide with feedback meetings slotted into busy schedules, or the reticence of participants to be observed or recorded. These studies reported that educators typically comment on specific aspects of performance, teach important concepts, and describe or demonstrate how the learner can improve. However, educators tend to speak most of the time, ask the learner for their self-assessment but then not respond to it, avoid corrective comments and do not routinely create action plans. These findings may no longer reflect current practice. In addition, no study captured the diversity of clinical educators and learners that work in a hospital environment.

Therefore we set out to directly observe authentic formal feedback episodes in hospital training, via self-recorded videos, to review contemporary educators’ feedback practice in workplace-based learning environments. This could then clarify opportunities and inform the design of professional development training. In Australia, health professions training is concentrated in hospitals, integrating both inpatient wards and outpatient clinics; major dedicated specialist outpatient centres are rare and family medicine clinics are relatively small. We recruited a range of participants, characteristic of the diversity present in hospitals, as desirable feedback elements are not profession specific. We targeted formal feedback sessions to capture complete feedback interactions. We then analysed the composition of educators’ feedback practice using a comprehensive framework of 25 discrete observable educator behaviours considered to enhance learner outcomes by engaging, motivating and assisting a learner to improve (see Table 1) [20]. This enabled a systematic analysis of the first set of data gathered using a comprehensive set of behavioural indicators, in contrast to previous studies in which less structured and more exploratory approaches were used. The earlier publication by our team [20] described how these items were developed, starting with an extensive literature review to identify distinct elements of an educator’s role substantiated by empirical information to enhance learner outcomes, which were then operationalised into observable behaviours and refined through a Delphi process with experts.

Table 1 Set of 25 educator behaviours that demonstrate high quality feedback in clinical practice

While we strongly endorse a learner-centred paradigm, we have chosen to focus on the educator’s role in feedback because educators are in a position of influence to create conditions that encourage learners to feel safe, participate and work out how to successfully improve their skills. We agree that specific feedback episodes are shaped by the individuals involved, the context and the culture; however, strategies to promote a learner’s motivation and capability to enhance their performance remain relevant. Recommended feedback behaviours are not intended to be implemented in a robotic fashion but tailored to a particular situation by prioritising the most useful aspects throughout the interaction. The core segments of quality feedback include clarifying the target performance, analysing the learner’s performance in comparison to this target, outlining practical steps to improve and planning how to review progress [4, 9, 21]. Overarching themes include promoting motivation [22,23,24,25], active learning [26,27,28] and collaboration [29,30,31,32] within a safe learning environment [10, 33, 34].

Research question

The research questions addressed in this study were:

  1. What behaviours are exhibited by clinical educators in formal feedback sessions in hospital practice settings?

  2. How closely do these behaviours align with published recommendations for feedback?

Methods

Research overview

In this observational study, senior clinicians (educators) observed junior clinicians or students (learners) performing routine clinical tasks in a hospital setting and then videoed themselves during the subsequent formal feedback session. We analysed each video using a check-list based on the set of educator behaviours recommended in high quality feedback (see Table 1) [20].

The feedback videos were captured at multiple hospitals within one of Australia’s largest metropolitan teaching hospital networks between August 2015 and December 2016. Ethics approval was obtained from the health service (Reference 15,233 L) and university human research ethics committees (Reference 2,015,001,338).

Recruitment

Educators (senior clinicians) across medicine, nursing, physiotherapy, occupational therapy, speech therapy and social work, and the learners working with them (either qualified health professionals undertaking further training or students), were invited to participate. A broad range of educators was sought, via widespread advertising of the study using flyers, emails circulated by unit administration assistants, short presentations at unit meetings and face-to-face meetings with staff across the health service. To be considered for participation, an educator had to contact the primary researcher (CJ) in response to the advertisement. Once an educator consented, they distributed flyers to any learners working with them, with instructions to contact the primary researcher (CJ) if the learners were interested in participating. Diversity was sought through rolling advertisement, with consideration of key factors including health profession and specialty, gender, and supervisor experience (educators) or training level (learners). Once an educator and a learner had both consented, the pair were advised and they made arrangements to video a routine feedback session. They were asked to record an entire feedback encounter and to aim for a duration of approximately 10 minutes, but were not given any additional instructions regarding how to conduct the feedback session. Participants were not shown the set of 25 educator behaviours recommended for high quality feedback used to analyse the videos, nor given any other education on feedback from the research team, as the aim was to study the nature of current feedback practices.

Consenting participants used a smartphone or computer to video-record themselves at their next scheduled formal feedback session related to either a workplace-based assessment or an end-of-attachment performance appraisal. Each video was subsequently uploaded to a password-protected online drive and participants were instructed to delete their copy. The videos were numbered using a random number generator and, apart from the images themselves, contained no personal identifying information.

Video analysis

The raters were all health professionals (two medical, four physiotherapy) in senior education/educational research roles with extensive experience in supervision and feedback. Each rater analysed each video independently and compared their observations with the set of 25 educator behaviours recommended for high quality feedback (see Table 1) [20]. Each educator behaviour was rated 0 = not seen, 1 = done somewhat or done only sometimes, 2 = consistently done.

In a preparatory pilot study, we rated three videos using the instrument. We then met to discuss ratings and to identify differences in interpretation of items and the use of the rating scale. Strategies to encourage concordance and to clarify item meaning were developed. In particular, we identified that Behaviour 2 (Timely feedback: ‘The educator offered to discuss the performance as soon as practicable’) was not observable from the videos, so it was excluded. For Behaviour 10 (Acknowledge learner’s emotional response: ‘The educator acknowledged and responded appropriately to emotions expressed by the learner’), we decided that a rating of ‘2’ (consistently done) would apply in either of two situations: i) implicit or explicit indicators of learner emotion (such as anxiety or defensiveness) were detected, and the educator acknowledged and attended to them, or ii) emotional equilibrium was observed throughout the encounter, as we assumed that maintaining this emotional balance between educator and learner required the educator to read cues and act accordingly. With 24 behaviours retained, each rated 0–2, the total score for a video could range from 0 to 48.

Data analysis

The data provided two perspectives: i) on an individual educator’s practice, how many of the behaviours recommended in high quality feedback were observed in each video; and ii) across the whole group of educators, which behaviours were commonly performed. To characterise each individual educator’s practice seen in a video, the scores for each item were averaged across assessors and then summed to give a total score. To describe how commonly specific educator behaviours were observed amongst the whole group of educators, the mean score and standard deviation for each item were calculated across all the videos [35]. To assess inter-rater reliability, total scores for each video were assessed for concordance between examiner pairs using Spearman’s rho.
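To make these calculations concrete, the following minimal sketch (in Python, using NumPy and SciPy; not the analysis script used in the study) illustrates the two perspectives and the inter-rater reliability calculation. It assumes, for illustration only, that ratings are held in an array of shape (videos, raters, behaviours), and it substitutes synthetic data for the study ratings.

```python
import numpy as np
from scipy.stats import spearmanr

# Synthetic stand-in for the study data: 36 videos, the 4 raters who analysed
# every video, and the 24 retained behaviours, each rated 0, 1 or 2.
rng = np.random.default_rng(0)
ratings = rng.integers(0, 3, size=(36, 4, 24)).astype(float)

# i) Individual educator's practice: average each behaviour across raters,
# then sum across behaviours for a total score per video (maximum 24 x 2 = 48).
per_video_item_means = ratings.mean(axis=1)          # shape (videos, behaviours)
total_scores = per_video_item_means.sum(axis=1)      # shape (videos,)

# ii) Whole group: mean and standard deviation of each behaviour across videos.
behaviour_means = per_video_item_means.mean(axis=0)  # shape (behaviours,)
behaviour_sds = per_video_item_means.std(axis=0, ddof=1)

# Inter-rater reliability: Spearman's rho between each pair of raters'
# total scores across the videos.
rater_totals = ratings.sum(axis=2)                   # shape (videos, raters)
for i in range(rater_totals.shape[1]):
    for j in range(i + 1, rater_totals.shape[1]):
        rho, _ = spearmanr(rater_totals[:, i], rater_totals[:, j])
        print(f"raters {i + 1} and {j + 1}: rho = {rho:.2f}")
```

With real data, the occasional missing rating would be handled with np.nanmean and np.nansum; pairing behaviour_means with the behaviour labels and sorting reproduces the ranking presented in Table 3.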

Results

Thirty-six feedback videos were available for analysis after five were excluded: two because they were incomplete (insufficient smartphone memory) and three because of technical errors with recording (audio unclear, time-lapse format used, participants not visible).

Video participants

Thirty-four educators participated, with diversity across key characteristics (health profession and specialty, length of supervisor experience and gender). There were four nurses, four physiotherapists and 26 senior medical staff (three anaesthetists, three emergency physicians, two radiologists, one paediatrician, six physicians, three psychiatrists, three obstetrician-gynaecologists, one ophthalmologist and four surgeons). There were 18 (52.9%) female and 16 (47.1%) male educators. Fourteen (41.2%) educators had 5 years or less of educator experience, 11 (32.3%) had 6 to 10 years and 9 (26.5%) had more than 10 years.

Thirty-five learners participated, with diversity across key characteristics (health profession and specialty, training level and gender). There were 9 (25.7%) students, 9 (25.7%) clinicians who were 5 years or less post-qualification, 15 (42.9%) clinicians 6 years or more post-qualification and 2 (5.7%) senior clinicians. Twenty-three (65.7%) learners were female and 12 (34.3%) were male. All learners were from the same health profession and specialty as their respective educators.

The feedback session related to a mid- or end-of-attachment assessment in 11 (30.6%) videos and to a specific task (such as a procedural skill, clinical assessment, case discussion or presentation) in 25 (69.4%) videos. An official feedback form from an institution such as a university or specialist medical college was used in 11 (30.6%) of the feedback sessions, most of which were mid- or end-of-attachment assessments. Most of the assessments were formative but some were summative, as a component of longitudinal training programs aligned with programmatic assessment principles [3].

Analysis of educator behaviours during feedback

Each video was analysed by four to six raters, providing a total of 174 sets of ratings (unexpected time constraints on the project limited the number of videos two raters could analyse). Missing data were uncommon (0.2% of ratings missing).

Inter-rater reliability

To maximise the data available for comparison, inter-rater reliability of total scores was calculated for the four (of six) raters who analysed all the videos: Spearman’s rho between rater pairs ranged from 0.62 to 0.73. The other two raters analysed 10 (28%) and 21 (58%) of the 36 videos and were not included in the inter-rater reliability analysis.

Individual educator’s feedback practice

To learn more about each individual educator’s practice and how many of the recommended educator behaviours were observed in each video, we calculated a total score for each video (ratings for each behaviour averaged across all assessors, then summed). Total scores ranged from a minimum of 5.7 (11.9%) to a maximum of 34.2 (71.3%), with a mean score across educators of 22.5 (46.9%, SD 6.6), from a maximum possible score of 48. More detailed analysis (see Table 2) revealed that most educators (88%) had a total score between 10 and 30. Although it was not our intention to compare performance across different characteristics (which would require sufficient sample sizes for each group to enable comparisons), there seemed to be a fairly even spread of health professions, experience and gender across the score ranges.

Table 2 Range of total scores for individual educators (34 educators in 36 videos)

Frequency of specific educator behaviours across the whole group of educators

To explore how often specific feedback behaviours were observed amongst all participants, we calculated the mean rating score for each behaviour across all the videos. Table 3 displays the rating mean (SD) for each behaviour, ranked from most to least often observed. Some behaviours were seen in almost every video (highest mean rating 1.75, Behaviour 10), while others were very infrequently observed (lowest mean rating 0.05, Behaviour 25).

Table 3 Observed educator behaviours ranked in order of rating, with the highest at the top

Amongst those educator behaviours most commonly observed (top third: mean rating score 1.41–2.0), most related to the educator’s assessment of the learner’s performance. Educators commonly linked comments regarding learner performance to the learner’s actions (Behaviours 1, 17, 20), focused on important aspects for improvement (Behaviour 16), described similarities and differences between the learner’s performance and the target performance (Behaviour 15), and clarified what should be done and why (Behaviour 14). The other two behaviours commonly seen related to creating a safe learning environment. These included showing respect and support (Behaviour 11) and responding appropriately to emotions expressed by the learner (Behaviour 10).

The middle band of educator behaviours was seen intermittently (mean rating score 0.71–1.40) and related to educators encouraging learners to contribute their thoughts, opinions and ideas, and to reveal their uncertainties. These included encouraging the learner to participate in interactive discussions (Behaviour 6), try to work things out for themselves (Behaviour 8), analyse their own performance (Behaviour 13), reveal the reasoning behind their actions (Behaviour 19), raise difficulties and ask questions (Behaviour 9), and participate in choosing the most important aspects to improve (Behaviour 21) and practical ways to do this through an action plan (Behaviour 22).

The lowest band of educator behaviours was rarely seen (mean rating score 0–0.7) and primarily related to the set-up and conclusion of a feedback session. At the start of the session, as part of creating a safe learning environment, the recommended educator behaviours included explicitly explaining that the purpose of the feedback was to help the learner improve (Behaviour 3), describing the proposed outline for the session (Behaviour 5), and stating their acceptance that mistakes are an inevitable part of the learning process (Behaviour 4). As part of the session conclusion or wrap-up, the recommended behaviours included checking a learner’s understanding of the learning goals and action plan (Behaviours 23, 24), and discussing future opportunities to review progress, to promote ongoing learning (Behaviour 25). The other educator behaviours that were rarely seen included the educator incorporating the learner’s learning priorities (Behaviour 7) and promoting the learner’s understanding of the value of their self-assessment (Behaviour 12).

Discussion

In this study of educators’ feedback practice, we found considerable variation in both an individual educator’s practice and how frequently specific recommended behaviours were observed across the group of educators. This provides valuable insights into ‘what currently happens’ during formal feedback episodes in hospital-based training. These insights clarify opportunities for future research into educator development with the potential for substantial impact. Furthermore, the recommended behaviours offer a repertoire of specific strategies that may assist educators to understand and enact these quality standards.

Frequency of specific recommended behaviours observed across the group of educators

We found that educators routinely gave their assessment of the learner’s performance and described what the task should look like, but only intermittently asked learners for self-assessment or development of an action plan. This seems to reflect a culture in which the educator’s analysis of the learner’s performance predominates [36]. These findings echo those from earlier observational studies and feedback forms [11, 12, 17, 19, 37,38,39,40]. This suggests that typical feedback practice in the clinical setting has remained much the same since these omissions were last reported years ago.

Self-assessment is a key component in self-regulated learning and evaluative judgement, which promotes reflection, independent learning and achievement [28,29,30]. Invitations for learner self-assessment provide learners with the opportunity to judge their work first and indicate what they most want help with [33, 41, 42]. Self-assessments can alert the educator to the potential for a negative emotional reaction and rejection of the educator’s opinion if the learner rates their performance much higher than the educator does [43]. Self-assessments also offer opportunities for learners to enhance their evaluative judgement by calibrating their understanding against an expert’s understanding of the observed performance and the desired performance standards [4, 44]. Recent work on student feedback literacy has highlighted the importance of strategically designing opportunities for learners to make judgements and discuss characteristics of quality work, to assist them to appreciate, interpret and utilise feedback [45].

The fact that an action plan continues to be frequently neglected similarly warrants serious attention. If educators do not support and guide learners to create an action plan, learners are left with the difficult task of working out by themselves how to transform feedback information into performance improvement [21]. Furthermore, when learners hear about performance gaps, their distress may be exacerbated if they do not know how to close them [46].

Our study also identified a number of missing feedback features, which have not been previously documented. One involves positioning the development of a learner’s motivation, understanding and skills as the focal point for feedback. The literature suggests that a learner is only likely to successfully implement changes when they ‘wish to’ (motivation) and ‘know how to’ (clear understanding) [9, 29, 47, 48].

Self-determination theory argues that intrinsic motivation, which is associated with both higher performance and increased well-being, is promoted when a learner decides what to do, in line with their personal values and aspirations [23,24,25]. This is captured by recommended educator behaviours that position the learner as decision maker and the educator as guide (see Table 1: Behaviours 7, 21, 22). A learner must be convinced for themselves that the feedback is credible and valuable (Behaviours 1, 6, 7, 9, 20, 24) [8, 49, 50]. The free flow of information, opinion and ideas between the educator and learner creates a shared understanding, as a foundation for tailored advice and good decision making [51]. In addition, Goal Setting Theory asserts that a learner’s motivation is stimulated by a clear view of the performance gap, performance goals that are specific, achievable and valuable to the learner, and an action plan that is practical and tailored to suit their needs (Behaviours 14, 15, 21, 22) [22].

Recent advances in feedback have focused on the need to assist learners to process and utilise feedback information, so they ‘know how to’ enhance their performance. This is exemplified in the R2C2 feedback model, which includes assisting a learner to explore the information and their reactions to it, and to design effective strategies for skill development [30, 32, 51]. Social constructivist learning theory describes how a learner makes meaning of new information through interactions with others [52]. To promote this active learning, recommended educator behaviours include encouraging the learner to analyse their own performance and ‘work things out for themselves’ (Behaviours 8, 12, 13), enquiring about the learner’s difficulties or questions (Behaviour 9) and checking the learner’s understanding of the action plan before concluding the session (Behaviours 23, 24) [53].

Another feature of effective feedback rarely seen in our study was educators deliberately setting up a safe learning environment at the start of the session, although they showed respect and support for learners in general. Recent literature has reinforced the importance of promoting a safe learning environment and establishing an educational alliance [34]. This may be a particularly important strategy when the educator and learner do not have an established relationship, which seems increasingly commonplace in modern workplace training, with short placements and multiple supervisors attending to learners [54]. Excessive anxiety negatively impacts thinking, learning and memory [53, 55, 56]. Feedback is inherently psychologically risky; if a learner’s limitations are exposed, this can result in a lower grade or critical remarks from the educator, or threaten a learner’s sense of self [5, 33, 46]. Carless [10] highlighted the important role of trust in view of the strong relational, emotional and motivational influences of feedback. In an attempt to counter this natural anxiety, educators could be explicit that “mistakes are part of the skill-acquisition process” and that they desire to help, not to be critical [53]. In addition, if an educator negotiated the process and expectations for the feedback session, this could reduce the anxiety caused when the learner does not know, or have any control over, what is going to happen [30].

One final important feature was the isolation of the learning activity. In our study, no educator discussed when or how the learner might review the extent to which they had successfully developed the targeted skills (Behaviour 25); this was the lowest ranked behaviour of all. Molloy and Boud [9] have emphasised the importance of promoting performance development by linking learning activities, so that feedback plans can be implemented and progress evaluated in subsequent tasks. As supervision is increasingly short-term and fragmented in nature, collaborating with the learner in deliberately planning another opportunity to be assessed performing a similar task seems an important objective.

Individual educator’s practice

The range in individual educators’ scores found in our study suggests the educators had variable expertise in feedback. Educators were not shown the check-list of recommended behaviours used in video analysis. Although not formally tested, there was no indication in the data that more experience conferred greater expertise, based on the spread of supervisor experience across the score ranges (Table 2). We did not ask about our educators’ professional development training. Although potentially interesting, this information was tangential to our primary goal of assessing current workplace practice against recommended behaviours. Given that education paradigms have changed considerably across time, and that educator behaviour may partly reflect methods used when they were learners, the observed variability in feedback approaches highlights the need for continuing professional development that focuses on recent advances. The lack of striking differences in scores between professions suggests that feedback skills within formal encounters may be more similar than different. Hence feedback literacy training could, at least in part, be designed for educators across the health professions, allowing significant efficiencies. Nevertheless, the extent to which these skills vary within informal feedback encounters and across different contexts requires more study. Practising clinicians are responsible for the majority of health professions training (of both senior students and junior clinicians) and yet specified standards for their education and training role are rare. In contrast, health professionals spend many years training and being carefully assessed on their clinical skills.

The aim of our research is to assist educators in generating high quality learner-centred feedback, by developing descriptions of educator behaviours that could engage, motivate and enable learners to improve. It may well be that once clinicians have the opportunity to consider the recommended behaviours, it would be relatively easy for them to introduce missing elements into their practice. One strategy that might be valuable for educators would be to video their feedback with a learner and subsequently use the list to systematically analyse their own behaviours. This would enable educators to also engage in reflective learning and goal setting [57, 58]. In addition, exemplars of supervisors’ phrases, or videos re-enacting quality feedback practices, may help educators to translate the principles of high quality feedback into new rituals. The set of behaviours is comprehensive; however, it could be useful to prioritise or summarise them, as 25 recommended behaviours may seem overwhelming, especially to new educators.

Study strengths and limitations

Strengths of our study include self-recorded video-observations of authentic feedback episodes in routine clinical practice, to reveal ‘what actually happens’ and target the top level of Miller’s framework for competency assessment. Participants comprised a diverse group of clinical educators, characteristic of hospital practice. The educators’ feedback practices were systematically analysed utilising an empirically derived, comprehensive set of 25 observable educator behaviours.

There are a number of limitations to our study. The small sample of 36 videos came from a single health service, although it is one of the largest in Australia, with multiple hospitals. Participants volunteered (which may have resulted in a subset of educators and learners with stronger skills than those who did not volunteer) and participants recorded their own performances, potentially making our data overly optimistic. These factors limit the generalisability of our findings. In applying the educator behaviour descriptions to the assessment of educator behaviour during feedback, there was some variation in rater consistency. One reason for this could be different interpretations of the educator behaviour descriptions. In future research, attention will be directed to refining the descriptions of observable behaviours and supporting information, accompanied by additional practice and discussion to optimise consensus amongst raters. The video raters represented only two health professions (two physicians and four physiotherapists), which raises the possibility that this influenced their analysis of educators’ behaviours beyond their own professions, although we cannot see a plausible argument to support this. A number of educators used official feedback forms (from a university, hospital or specialty college). Trying to complete these forms in accordance with their instructions may have influenced educators’ conduct or distracted educators’ attention, as the forms can be quite cognitively demanding. However, there are no compelling reasons why best practice in feedback could not occur in parallel with any learner assessment rubric. In addition, educator-learner pairs could have had earlier feedback conversations, during which some of the quality feedback behaviours may have occurred, particularly those relating to setting up expectations and establishing trust, but these were not captured on video.

Conclusions

Our study showed that during formal feedback sessions, educators routinely provided their analysis of the learner’s performance, described how the task should be performed, and were respectful and supportive within the conversation. These are all valuable and recommended components of quality feedback. Nevertheless, other desirable behaviours were rarely observed. Important elements that were often omitted included deliberately instigating a safe learning environment at the start of the feedback session (by explicitly articulating the purpose, expectations and likely structure of the session), encouraging self-assessment, activating the learner’s motivation and understanding, creating an action plan and planning a subsequent performance review. This suggests that many advances in feedback research, regarding the importance of assisting learners to understand, incorporate and act on performance information, have not impacted routine clinical education. Our research clarifies valuable targets for educator feedback skill development across the health professions education community. However, further research is required to investigate whether implementing these recommended educator behaviours results in enhanced learner outcomes, as intended.

References

  1. Carraccio C, Englander R, Van Melle E, Ten Cate O, Lockyer J, Chan MK, et al. Advancing competency-based medical education: a charter for clinician-educators. Acad Med. 2016;91(5):645–9.

  2. Miller GE. The assessment of clinical skills/competence/performance. Acad Med. 1990;65(9 Suppl):S63–7.

  3. van der Vleuten CP, Schuwirth LW, Driessen EW, Dijkstra J, Tigelaar D, Baartman LK, et al. A model for programmatic assessment fit for purpose. Med Teach. 2012;34(3):205–14.

  4. Hattie J, Timperley H. The power of feedback. Rev Educ Res. 2007;77(1):81–112.

  5. Kluger AN, DeNisi A. The effects of feedback interventions on performance: a historical review, a meta-analysis, and a preliminary feedback intervention theory. Psychol Bull. 1996;119(2):254–84.

  6. Ericsson KA. Acquisition and maintenance of medical expertise: a perspective from the expert-performance approach with deliberate practice. Acad Med. 2015;90(11):1471–86.

  7. Ivers N, Jamtvedt G, Flottorp S, Young JM, Odgaard-Jensen J, French SD, et al. Audit and feedback: effects on professional practice and healthcare outcomes. Cochrane Database Syst Rev. 2012;(6):CD000259.

  8. Veloski J, Boex JR, Grasberger MJ, Evans A, Wolfson DB. Systematic review of the literature on assessment, feedback and physicians’ clinical performance: BEME guide no. 7. Med Teach. 2006;28(2):117–28.

  9. Molloy E, Boud D. Changing conceptions of feedback. In: Boud D, Molloy E, editors. Feedback in higher and professional education. London: Routledge; 2013. p. 11–33.

  10. Carless D. Trust and its role in facilitating dialogic feedback. In: Boud D, Molloy E, editors. Feedback in higher and professional education. London: Routledge; 2013. p. 90–103.

  11. Blatt B, Confessore S, Kallenberg G, Greenberg L. Verbal interaction analysis: viewing feedback through a different lens. Teach Learn Med. 2008;20(4):329–33.

  12. Bardella IJ, Janosky J, Elnicki DM, Ploof D, Kolarik R. Observed versus reported precepting skills: teaching behaviours in a community ambulatory clerkship. Med Educ. 2005;39(10):1036–44.

  13. Ende J, Pomerantz A, Erickson F. Preceptors' strategies for correcting residents in an ambulatory care medicine setting: a qualitative analysis. Acad Med. 1995;70(3):224–9.

  14. Frye AW, Hollingsworth MA, Wymer A, Hinds MA. Dimensions of feedback in clinical teaching: a descriptive study. Acad Med. 1996;71(1):S79–81.

  15. Hekelman FP, Vanek E, Kelly K, Alemagno S. Characteristics of family physicians’ clinical teaching behaviors in the ambulatory setting: a descriptive study. Teach Learn Med. 1993;5(1):18–23.

  16. Huang WY, Dains JE, Monteiro FM, Rogers JC. Observations on the teaching and learning occurring in offices of community-based family and community medicine clerkship preceptors. Fam Med. 2004;36(2):131–6.

  17. Kogan JR, Conforti LN, Bernabeo EC, Durning SJ, Hauer KE, Holmboe ES. Faculty staff perceptions of feedback to residents after direct observation of clinical skills. Med Educ. 2012;46(2):201–15.

  18. Urquhart LM, Ker JS, Rees CE. Exploring the influence of context on feedback at medical school: a video-ethnography study. Adv Health Sci Educ Theory Pract. 2018;23(1):159–86.

  19. Molloy E. Time to pause: feedback in clinical education. In: Delany C, Molloy E, editors. Clinical education in the health professions. Sydney: Elsevier; 2009. p. 128–46.

  20. Johnson CE, Keating JL, Boud DJ, Dalton M, Kiegaldie D, Hay M, et al. Identifying educator behaviours for high quality verbal feedback in health professions education: literature review and expert refinement. BMC Med Educ. 2016;16(1):96.

  21. Sadler DR. Formative assessment and the design of instructional systems. Instr Sci. 1989;18(2):119–44.

  22. Locke EA, Latham GP. Building a practically useful theory of goal setting and task motivation: a 35-year odyssey. Am Psychol. 2002;57(9):705–17.

  23. Deci EL, Ryan RM. The ‘what’ and ‘why’ of goal pursuits: human needs and the self-determination of behavior. Psychol Inq. 2000;11:227–68.

  24. Ten Cate TJ, Kusurkar RA, Williams GC. How self-determination theory can assist our understanding of the teaching and learning processes in medical education. AMEE guide no. 59. Med Teach. 2011;33(12):961–73.

  25. Ten Cate OT. Why receiving feedback collides with self determination. Adv Health Sci Educ. 2013;18(4):845–9.

  26. Wadsworth BJ. Piaget’s theory of cognitive and affective development: foundations of constructivism. 5th ed. White Plains: Longman Publishing; 1996.

  27. Kaufman DM. Applying educational theory in practice. In: Cantillon P, Wood D, editors. ABC of learning and teaching in medicine. Oxford: Blackwell Publishing Ltd; 2010.

  28. Butler DL, Winne PH. Feedback and self-regulated learning: a theoretical synthesis. Rev Educ Res. 1995;65(3):245–81.

  29. Nicol DJ, Macfarlane-Dick D. Formative assessment and self-regulated learning: a model and seven principles of good feedback practice. Stud High Educ. 2006;31(2):199–218.

  30. Ramani S, Konings KD, Ginsburg S, van der Vleuten CPM. Twelve tips to promote a feedback culture with a growth mind-set: swinging the feedback pendulum from recipes to relationships. Med Teach. 2018:1–7. https://www.tandfonline.com/doi/full/10.1080/0142159X.2018.1432850.

  31. Sargeant J, Lockyer J, Mann K, Holmboe E, Silver I, Armson H, et al. Facilitated reflective performance feedback: developing an evidence- and theory-based model that builds relationship, explores reactions and content, and coaches for performance change (R2C2). Acad Med. 2015;90(12):1698–706.

  32. Sargeant J, Lockyer JM, Mann K, Armson H, Warren A, Zetkulic M, et al. The R2C2 model in residency education: how does it foster coaching and promote feedback use? Acad Med. 2018;93(7):1055–63.

  33. Ende J. Feedback in clinical medical education. J Am Med Assoc. 1983;250(6):777–81.

  34. Telio S, Ajjawi R, Regehr G. The "educational alliance" as a framework for reconceptualizing feedback in medical education. Acad Med. 2015;90(5):609–14.

  35. Norman G. Likert scales, levels of measurement and the “laws” of statistics. Adv Health Sci Educ. 2010;15(5):625–32.

  36. Molloy E, Van de Ridder M. Reworking feedback to build better work. In: Delany C, Molloy E, editors. Learning and teaching in clinical contexts. Sydney: Elsevier; 2018. p. 305–20.

  37. Bindal T, Wall D, Goodyear HM. Trainee doctors' views on workplace-based assessments: are they just a tick box exercise? Med Teach. 2011;33(11):919–27.

  38. Fernando N, Cleland J, McKenzie H, Cassar K. Identifying the factors that determine feedback given to undergraduate medical students following formative mini-CEX assessments. Med Educ. 2008;42(1):89–95.

  39. Holmboe ES, Yepes M, Williams F, Huot SJ. Feedback and the mini clinical evaluation exercise. J Gen Intern Med. 2004;19(5 Pt 2):558–61.

  40. Pelgrim EAM, Kramer AWM, Mokkink HGA, Vleuten CPM. Quality of written narrative feedback and reflection in a modified mini-clinical evaluation exercise: an observational study. BMC Med Educ. 2012;12(12):97.

  41. Rudolph JW, Simon R, Raemer DB, Eppich WJ. Debriefing as formative assessment: closing performance gaps in medical education. Acad Emerg Med. 2008;15(11):1010–6.

  42. Silverman J, Kurtz S. The Calgary-Cambridge approach to communication skills teaching ii: the set-go method of descriptive feedback. Educ Gen Pract. 1997;8(7):288–99.

  43. Sargeant J, Mann K, Ferrier S. Exploring family physicians' reactions to multisource feedback: perceptions of credibility and usefulness. Med Educ. 2005;39(5):497–504.

  44. Johnson CE, Molloy EK. Building evaluative judgement through the process of feedback. In: Boud D, Ajjawi R, Dawson P, Tai J, editors. Developing evaluative judgement in higher education: assessment for knowing and producing quality work. London: Routledge; 2018. p. 166–75.

  45. Carless D, Boud D. The development of student feedback literacy: enabling uptake of feedback. Assess Eval High Educ. 2018;43(8):1315–25.

  46. Sargeant J, Mann K, Sinclair D, Van der Vleuten C, Metsemakers J. Understanding the influence of emotions and reflection upon multi-source feedback acceptance and use. Adv Health Sci Educ. 2008;13(3):275–88.

  47. Bing-You RG, Paterson J, Levine MA. Feedback falling on deaf ears: residents' receptivity to feedback tempered by sender credibility. Med Teach. 1997;19(1):40–4.

  48. Grenny J, Patterson K, Maxfield D, McMillan R, Switzler A. Influencer: the new science of leading change. 2nd ed. New York: McGraw-Hill Education; 2013.

  49. Sargeant J, Mann K, Sinclair D, van der Vleuten C, Metsemakers J. Challenges in multisource feedback: intended and unintended outcomes. Med Educ. 2007;41(6):583–91.

  50. Telio S, Regehr G, Ajjawi R. Feedback and the educational alliance: examining credibility judgements and their consequences. Med Educ. 2016;50(9):933–42.

  51. Patterson K, Grenny J, McMillan R, Switzler A. Crucial conversations: tools for talking when the stakes are high. 2nd ed. New York: McGraw-Hill; 2012.

  52. Kaufman DM, Mann KV. Teaching and learning in medical education: how theory can inform practice. In: Swanwick T, editor. Understanding medical education: evidence, theory and practice. 2nd ed. Oxford: Wiley Blackwell; 2014. p. 7–29.

  53. Shute VJ. Focus on formative feedback. Rev Educ Res. 2008;78(1):153–89.

  54. Voyer S, Cuncic C, Butler DL, MacNeil K, Watling C, Hatala R. Investigating conditions for meaningful feedback in the context of an evidence-based feedback programme. Med Educ. 2016;50(9):943–54.

  55. LaDonna KA, Hatala R, Lingard L, Voyer S, Watling C. Staging a performance: learners’ perceptions about direct observation during residency. Med Educ. 2017;51(5):498–510.

  56. Flinn JT, Miller A, Pyatka N, Brewer J, Schneider T, Cao CG. The effect of stress on learning in surgical skill acquisition. Med Teach. 2016;38(9):897–903.

  57. Irby DM. Excellence in clinical teaching: knowledge transformation and development required. Med Educ. 2014;48(8):776–84.

  58. Knight J. Focus on teaching: using video for high-impact instruction. Thousand Oaks: Corwin; 2014.


Acknowledgments

The authors are indebted to the educators and learners who so generously participated in this research.

Funding

This research received no specific funding.

Availability of data and materials

The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request.

Author information


Contributions

CJ conceived the research concept, participated in designing the protocol, assembled videos, analysed videos, interpreted data and prepared the manuscript. JK participated in designing the research protocol, analysed videos, interpreted data and assisted in preparing the manuscript. MF analysed videos and suggested revisions to the manuscript. FK analysed videos and suggested revisions to the manuscript. ML analysed videos and suggested revisions to the manuscript. EM participated in designing the research protocol, analysed videos, interpreted data and assisted in preparing the manuscript. All authors read and approved the manuscript.

Corresponding author

Correspondence to Christina E. Johnson.

Ethics declarations

Authors’ information

CJ is a Consultant Physician in General and Geriatric Medicine and Director, Monash Doctors Education at Monash Health and a PhD candidate at the University of Melbourne, Melbourne, Australia.

JK is Emeritus Professor, Department of Physiotherapy, School of Primary and Allied Health Care, Faculty of Medicine, Nursing and Health Sciences, Monash University, Melbourne, Australia.

MF is a Physiotherapist and Allied Health Education Lead for the Workforce Innovation, Education & Research (WISER) Unit at Monash Health, and Teaching Associate in the Faculty of Medicine, Nursing and Health Sciences at Monash University, Melbourne, Australia.

FK is a Physiotherapist and the Collaborative Care Curriculum Lead at Monash University, Education Portfolio, Faculty Medicine, Nursing and Health Sciences, Melbourne, Australia.

ML is Professor and Deputy Dean, Head of the medical course in the Faculty of Medicine, Nursing & Health Sciences, Monash University, and a Consultant Physician and Deputy Director of Rheumatology at Monash Health, Melbourne, Australia.

EM is Professor of Work Integrated Learning in the Department of Medical Education, Melbourne Medical School at the University of Melbourne, Melbourne, Australia.

Ethics approval and consent to participate

This study was approved by the Human Research Ethics Committee at Monash University (Reference 2,015,001,338) and Monash Health (Reference 15,233 L). Written informed consent was obtained from all participants.

Consent for publication

Not applicable. The manuscript does not contain identifiable data for any individual person.

Competing interests

The authors declare that they have no competing interests.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.


About this article

Cite this article

Johnson, C.E., Keating, J.L., Farlie, M.K. et al. Educators’ behaviours during feedback in authentic clinical practice settings: an observational study and systematic analysis. BMC Med Educ 19, 129 (2019). https://doi.org/10.1186/s12909-019-1524-z
