Research report

The Osteopathic Clinical Practice Assessment – a pilot study to develop a new workplace-based assessment tool
Introduction
A central component of any health professional education program is the work-based, or clinical, education curriculum. The assumption is that clinical education provides learners with opportunities to experience clinical life through participation in a workplace learning (WPL) setting, under the supervision of a qualified health professional. Such an experience is expected to promote knowledge and skill development,1, 2 and to foster learners' sense of professional identity and autonomy.
Typically, in any health discipline the desired competencies fall under the broad headings of knowledge, skills, problem-solving and attitudes or professionalism.3 Within those umbrella terms, clinical education is chiefly concerned with developing the learner's clinical reasoning, problem-solving and critical appraisal skills, communication and professionalism.4, 5, 6 Towards the close of the curriculum, assessing competence means assessing the learner's management of integrated whole tasks of increasing complexity (i.e. patient care beyond the performance of a single task).3, 7
In WPL, the assessment of a learner's developing clinical competence takes many forms, with each assessment tool serving a different purpose. Ideally, each tool provides a different level of information about a learner's competence in different contexts and situations. Through the use of multiple tools, academic and clinical faculty can build a picture of a learner's clinical competence. When the results from these multiple tools are combined, or looked at through a programmatic lens, the learner will ideally have been assessed across the breadth of skills, knowledge and attributes required of a graduate health professional. Numerous examples of assessment tools that contribute to this developing picture of learner competence exist in the literature.
In Australian allied health there has been a move toward global rating tools that record learners' performance in clinic over a period of time, as opposed to assessing the application of clinical skills, knowledge and abilities at the point of patient care. The use of such tools has been driven by the need to ensure that students are assessed on a range of criteria related to clinical performance, and that the same assessment tool can be used regardless of the clinical context. Examples of these global rating tools are the Assessment of Physiotherapy Practice (APP),8, 9, 10 occupational therapy's Learner Practice Evaluation Form – Revised,11 speech therapy's COMPASS,12, 13 the Radiation Therapy Learner Clinical Assessment,14, 15 and a tool to assess nursing competencies.16 These tools are typically used at the end of a block clinical placement as a summative assessment. In essence, they explore learners' clinical habits and methodologies.
As with any assessment tool, making an argument for its validity is paramount. Kane's approach17, 18 to structuring a validity argument is helpful here in that it outlines four links in an inferential chain, from administration of an assessment tool through to the final decision: scoring, generalization, extrapolation and decision. Further, it is important to recognize that a tool is not itself valid or invalid; rather, evidence can be provided to support the validity of the scores derived from it. The global assessment tools listed previously are designed to contribute to the evidence used to make decisions about competency and fitness-to-practice; they are not the sole determinant. The score on a global assessment tool represents performance over a period of time in a WPL setting, and thus a broad view of a learner's daily habits and methodologies. To support the generalization inference, global assessment tools would need to be completed by several examiners per learner, and across different clinical contexts. For example, a physiotherapy student would be required to be assessed in musculoskeletal, neurological and cardiothoracic practice contexts prior to graduation. The design process for a global assessment tool ensures that it has face and content validity, and that the users of the tool have been informed about its implementation and execution, further supporting the generalization inference. To extrapolate from the results of multiple global assessment tools, evidence from other sources is required: educationalists must ask whether the results of the global assessment tool correlate with the results of other performance assessments. Only then is it possible to extrapolate from these performance assessments and subsequently make a decision about the learner's fitness-to-practice.
A major challenge in the implementation of any workplace-based assessment is the reliability of the ratings. Using theoretical frameworks from social perception research, Govaerts et al.19 explored the content of schemas and their use by raters during assessment of learner performance in a single patient encounter. These authors identified that a ‘judgment’ by a rater could involve interactions between a variety of performance theories, task-specific performance requirements and/or person (rater) schemas. Differences between novice and expert raters in their approach to task-specific performance schemas were also observed: that is, the dimensions of the task being assessed were weighted differently depending on the learner. Among other implications, the authors posited that raters will interpret the rating scale differently – the utility of a particular tool may be compromised when the rating scale used does not mirror the raters' own performance theories, and consistency between raters therefore cannot be assumed.
When preparing for the administration of any assessment, among other issues, it is important to be explicit in the instructions to raters (in our case the clinical supervisors) and learners about how often the tool is administered, and whether the rater is instructed to rate a learner's work:

- a) according to where the learner is in their development relative to the skills required of a graduate; or
- b) according to their perception of where the learner is performing relative to a specific time point in a program of study.
If educators and learners are instructed that a) is desirable, then repeated administrations of the tool over any set period of time will typically see the learner's scores progress up the scale. However, if option b) is desirable, a learner may score a ‘satisfactory’ at every administration of the tool over a set period and that would be regarded as acceptable progress. As an example, the 20 items in the APP8, 9, 10 are designed so that raters can judge the learner at the end of a block placement, on each item, against the minimum attributes required to achieve beginner (entry-level) standard and register to practice. In the present study a global rating tool was used formatively to provide learners with progressive feedback throughout a 12-week longitudinal placement.
The objective of the present paper is to report on an adaptation of the APP for the osteopathic context: the Osteopathic Clinical Practice Assessment (OCPA). The paper also discusses a number of considerations, including the tool's use as a formative assessment, the rating scale, lessons from the pilot study and plans for future use, together with how these issues intersect with current theories about assessment in the health professions.
Method
This study was approved by the Victoria University Human Research Ethics Committee as part of a larger investigation into assessment practices in the osteopathy program.
Results
Data were available from 31 (73.8%) of the 42 enrolled learners, assessed by 12 clinician educators. The OCPA assessment sheets of the remaining 11 learners were not available for analysis, as they had been handed back to the respective learners. Learners received between 1 and 3 assessments for the semester, and the clinician educators completed between 1 and 10 assessments each. Descriptive statistics for the OCPA are presented in Table 3.
The correlation between the global rating and
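The kind of correlation analysis referred to here can be sketched in a few lines. The records below are entirely hypothetical (invented item and global scores on an assumed 0–4 scale, five items per learner), and the Pearson coefficient is only one of several statistics (e.g. Spearman's rho) that could be used to relate item scores to a global rating:

```python
# Minimal sketch (hypothetical data): correlating a supervisor's single
# global rating with the mean of the item scores, as one piece of
# internal-structure evidence for a global rating tool.
from statistics import mean

# Hypothetical OCPA-style records; the values are illustrative only.
records = [
    {"items": [3, 3, 2, 3, 3], "global": 3},
    {"items": [1, 2, 1, 2, 2], "global": 2},
    {"items": [4, 4, 3, 4, 4], "global": 4},
    {"items": [2, 2, 2, 3, 2], "global": 2},
    {"items": [3, 4, 3, 3, 4], "global": 4},
]

def pearson(xs, ys):
    """Pearson product-moment correlation of two equal-length sequences."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

item_means = [mean(r["items"]) for r in records]
globals_ = [r["global"] for r in records]
r = pearson(item_means, globals_)
print(round(r, 2))  # prints 0.96 for these invented data
```

With real assessment data, a rank-based statistic would usually be preferred, since ratings are ordinal rather than interval.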
Discussion
The purpose of this pilot investigation was to introduce the OCPA as a global assessment of osteopathic learner performance in the on-campus, student-led teaching clinic. The OCPA has been mooted for inclusion in the suite of assessment tools used to make decisions about learner progress and competency. As a formative assessment and feedback tool, it was expected the OCPA would aid learners' learning and provide a clear record of their progress at regular intervals.22 At this time
Conclusion
This pilot study has introduced an adaptation of the APP as a global competency assessment tool for osteopathy. We propose the Osteopathic Clinical Practice Assessment for use in a pre-registration osteopathy teaching program in an on-campus, student-led clinic. The tool appears able to provide the learner and program administrators with information about the learner's skills across a range of expected learning objectives related to osteopathic practice. The OCPA has great potential to provide
Statement of competing interests
Brett Vaughan is a member of the Editorial Board of the International Journal of Osteopathic Medicine but was not involved in review or editorial decisions regarding this manuscript.
Ethical statement
This study was approved by the Victoria University Human Research Ethics Committee.
Funding
None declared.
References (27)
- The Assessment of Physiotherapy Practice (APP) is a valid measure of professional competence of physiotherapy students: a cross-sectional study with Rasch analysis. J Physiother (2011)
- The Assessment of Physiotherapy Practice (APP) is a reliable measure of professional competence of physiotherapy students: a reliability study. J Physiother (2012)
- Clinical education in the osteopathy program at Victoria University. Int J Osteopath Med (2014)
- A handbook for teaching and learning in higher education: enhancing academic practice (2008)
- Key concepts in the philosophy of education (2002)
- Programmatic assessment: from assessment of learning to assessment for learning. Med Teach (2011)
- Goals and components of clinical education in the allied health professions
- Educating beginning practitioners: challenges for health professional education (1999)
- Workplace-based assessment in clinical training (2007)
- A new framework for designing programmes of assessment. Adv Health Sci Educ Theory Pract (2010)
- Development of the Assessment of Physiotherapy Practice (APP): a standardised and valid approach to assessment of clinical competence in physiotherapy
- Development of the student placement evaluation form: a tool for assessing student fieldwork performance. Aust Occup Ther J
- Issues in developing valid assessments of speech pathology students' performance in the workplace. Int J Lang Commun Disord