1. Hall EJ, Baillie S, Hunt JA, Catterall AJ, Wolfe L, Decloedt A, Taylor AJ, Wissing S. Practical Tips for Setting Up and Running OSCEs. Journal of Veterinary Medical Education 2022; 50:e20220003. [PMID: 35617627] [DOI: 10.3138/jvme-2022-0003]
Abstract
Objective structured clinical examinations (OSCEs) are used to assess students' skills on a variety of tasks using live animals, models, cadaver tissue, and simulated clients. OSCEs can be used to provide formative feedback, or they can be summative, impacting progression decisions. OSCEs can also drive student motivation to engage with clinical skill development and mastery in preparation for clinical placements and rotations. This teaching tip presents top tips for running an OSCE for veterinary and veterinary nursing/technician students, written by an international group of authors experienced in running OSCEs at a diverse set of institutions. These tips cover tasks to perform prior to the OSCE, on the day of the examination, and after the examination, and provide a comprehensive review of the demands that OSCEs place on faculty, staff, students, facilities, and animals. They are meant to assist those who are already running OSCEs and wish to reassess their processes or to increase the number of OSCEs used across the curriculum, as well as those who are planning to start using OSCEs at their institution. Incorporating OSCEs into a curriculum involves a significant commitment of resources, and this teaching tip aims to help those responsible for delivering these assessments improve their implementation and delivery.
2. Sterz J, Linßen S, Stefanescu MC, Schreckenbach T, Seifert LB, Ruesseler M. Implementation of written structured feedback into a surgical OSCE. BMC Medical Education 2021; 21:192. [PMID: 33823844] [PMCID: PMC8022414] [DOI: 10.1186/s12909-021-02581-3]
Abstract
BACKGROUND Feedback is an essential element of learning. Despite this, students complain about receiving too little feedback in medical examinations, e.g., in an objective structured clinical examination (OSCE). This study aimed to implement a written structured feedback tool for use in OSCEs and to analyse the attitudes of students and examiners towards this kind of feedback. METHODS The participants were OSCE examiners and third-year medical students. This prospective study used a multistage design. In the first step, an unstructured survey of the examiners formed the basis for developing a feedback tool, which was evaluated and then adopted in the subsequent steps. RESULTS In total, 351 students and 51 examiners participated in this study. A baseline form was created for each category of OSCE station and supplemented with station-specific items, each rated on a three-point scale. In addition to the preformulated answer options, each domain had space for individual comments. A total of 87.5% of the students and 91.6% of the examiners agreed or somewhat agreed that written feedback should continue to be used in upcoming OSCEs. CONCLUSION Implementing structured, written feedback in a curricular, summative examination is feasible, and both examiners and students would like such feedback to become a permanent feature.
Affiliation(s)
- J Sterz
- Department of Trauma, Hand and Reconstructive Surgery, University Hospital Frankfurt, Goethe University, Theodor-Stern-Kai 7, 60590, Frankfurt, Germany
- S Linßen
- Department of Trauma, Hand and Reconstructive Surgery, University Hospital Frankfurt, Goethe University, Theodor-Stern-Kai 7, 60590, Frankfurt, Germany
- M C Stefanescu
- Department of Trauma, Hand and Reconstructive Surgery, University Hospital Frankfurt, Goethe University, Theodor-Stern-Kai 7, 60590, Frankfurt, Germany
- T Schreckenbach
- Department of General and Visceral Surgery, University Hospital Frankfurt, Goethe University, Theodor-Stern-Kai 7, 60590, Frankfurt, Germany
- L B Seifert
- Department of Oral, Cranio-Maxillofacial and Facial Plastic Surgery, University Hospital Frankfurt, Goethe University, Theodor-Stern-Kai 7, 60590, Frankfurt, Germany
- M Ruesseler
- Department of Trauma, Hand and Reconstructive Surgery, University Hospital Frankfurt, Goethe University, Theodor-Stern-Kai 7, 60590, Frankfurt, Germany
3. Horita S, Park YS, Son D, Eto M. Computer-based test (CBT) and OSCE scores predict residency matching and National Board assessment results in Japan. BMC Medical Education 2021; 21:85. [PMID: 33531010] [PMCID: PMC7856777] [DOI: 10.1186/s12909-021-02520-2]
Abstract
CONTEXT The Japan Residency Matching Program (JRMP) launched in 2003 and is now a significant event for graduating medical students and postgraduate residency hospitals. The environment surrounding the JRMP has changed due to Japanese health policy, resulting in an increase in the number of unsuccessfully matched students. Beyond policy issues, we suspected there were also common characteristics among the students who do not match with residency hospitals. METHODS In total, 237 of 321 graduates of The University of Tokyo Faculty of Medicine from 2018 to 2020 participated in the study. The students answered a questionnaire and gave written consent for the use of their personal information, including JRMP placement, scores on the pre-clinical clerkship (CC) Objective Structured Clinical Examination (OSCE), the Computer-Based Test (CBT), and the National Board Examination (NBE), and domestic scores. The collected data were analysed statistically. RESULTS JRMP placements correlated with some of the pre-CC OSCE factors/stations and with total and global scores. Above all, the result of the neurological examination station showed the most significant correlation with JRMP placement. In contrast, the CBT result showed no correlation with JRMP results, but it did correlate significantly with NBE results. CONCLUSIONS Our data suggest that the pre-clinical clerkship OSCE score and the CBT score, both obtained before the clinical clerkship, predict important outcomes including the JRMP and the NBE. These results also suggest that educational resources should be concentrated on students who scored poorly on the pre-clinical clerkship OSCE and the CBT, to avoid failure in the JRMP and the NBE.
Affiliation(s)
- Shoko Horita
- Office for Clinical Practice and Medical Education, Graduate School of Medicine, The University of Tokyo, Tokyo, Japan
- Yoon-Soo Park
- Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Daisuke Son
- International Research Center for Medical Education, Graduate School of Medicine, The University of Tokyo, Tokyo, Japan
- Department of Community-based Family Medicine, School of Medicine, Tottori University Faculty of Medicine, Yonago, Japan
- Masato Eto
- International Research Center for Medical Education, Graduate School of Medicine, The University of Tokyo, Tokyo, Japan
4. Wilby KJ, Paravattil B. Cognitive load theory: Implications for assessment in pharmacy education. Res Social Adm Pharm 2020; 17:1645-1649. [PMID: 33358136] [DOI: 10.1016/j.sapharm.2020.12.009]
Abstract
The concept of mental workload is well studied from a learner's perspective but is less well understood from the perspective of an assessor. Mental workload is largely associated with cognitive load theory, which describes three different types of load. Intrinsic load deals with the complexity of the task, extraneous load describes distractors to the task at hand, and germane load focuses on the development of schemas in working memory for future recall. Studies from medical education show that all three types of load are relevant when considering rater-based assessment (e.g., Objective Structured Clinical Examinations (OSCEs) or experiential training). Assessments with high intrinsic and extraneous load may interfere with assessors' attention and working memory and result in poorer quality assessment. Reducing these loads within assessment tasks should therefore be a priority for pharmacy educators. This commentary aims to provide a theoretical overview of mental workload in assessment, outline research findings from the medical education context, and propose strategies for reducing mental workload in rater-based assessments relevant to pharmacy education. Suggestions for future research are also addressed.
Affiliation(s)
- Kyle John Wilby
- School of Pharmacy, University of Otago, PO Box 56, Dunedin, 9054, New Zealand
5. Zimmermann P, Kadmon M. Standardized examinees: development of a new tool to evaluate factors influencing OSCE scores and to train examiners. GMS Journal for Medical Education 2020; 37:Doc40. [PMID: 32685668] [PMCID: PMC7346289] [DOI: 10.3205/zma001333]
Abstract
Introduction: The Objective Structured Clinical Examination (OSCE) is an established format for practical clinical assessments at most medical schools, and discussion is underway in Germany to make it part of future state medical exams. Examiner behavior that influences assessment results has been described. Erroneous assessments of student performance can result, for instance, from systematic leniency, inconsistent grading, halo effects, and a failure to differentiate between tasks across the entire grading scale. The aim of this study was to develop a quality assurance tool that can monitor factors influencing grading in a real OSCE and enable targeted training of examiners. Material, Methods and Students: Twelve students at the Medical Faculty of the University of Heidelberg were each trained to perform a defined task at a particular surgical OSCE station. Definitions were set and operationalized for an excellent and a borderline performance. In a simulated OSCE during the first part of the study, the standardized student performances were assessed and graded by different examiners three times in succession; video recordings were made. The study coordinator also undertook quantitative and qualitative analysis of the videos. In the second part of the study, the videos were used to investigate the examiners' acceptance of standardized examinees and to analyze potential influences on scoring stemming from the examiners' experience. Results: In the first part of the study, the OSCE scores and subsequent video analysis showed that standardization at defined performance levels across different OSCE stations is generally possible. Individual deviations from the prescribed examinee responses were observed and occurred primarily with increased complexity of OSCE station content. In the second part of the study, inexperienced examiners scored a borderline performance significantly lower than their experienced colleagues (13.50 vs. 15.15, p=0.035). No difference was seen in the evaluation of the excellent examinees. Both groups of examiners graded the item "social competence" - despite identical standardization - significantly lower for examinees with borderline performances than for excellent examinees (4.13 vs. 4.80, p<0.001). Conclusion: Standardization of examinees at previously defined performance levels is possible, making a new tool available in future not only for OSCE quality assurance but also for training examiners. Detailed preparation of the OSCE checklists and intensive training of the examinees are essential. This new tool takes on special importance if standardized OSCEs are integrated into state medical exams and thus become high-stakes assessments.
Affiliation(s)
- Petra Zimmermann
- Ludwig-Maximilians-Universität München, Klinikum der Universität, Klinik für Allgemein-, Viszeral- und Transplantationschirurgie, München, Germany
- Martina Kadmon
- Universität Augsburg, Medizinische Fakultät, Gründungsdekanat, Augsburg, Germany
6. Moreno-López R, Sinclair S. Evaluation of a new e-learning resource for calibrating OSCE examiners on the use of rating scales. European Journal of Dental Education 2020; 24:276-281. [PMID: 31925850] [DOI: 10.1111/eje.12495]
Abstract
INTRODUCTION Rating scales have been described as better suited to assessing behaviours such as professionalism during Objective Structured Clinical Examinations (OSCEs). However, staff increasingly need to be trained and calibrated in their use prior to student assessment. MATERIAL AND METHODS An online e-learning package was developed and made available to all examiners at the Institute of Dentistry at the University of Aberdeen. The package included videos of three OSCE stations (medical emergency, rubber dam placement and handling a complaint), each recorded in two scenarios (excellent and unsatisfactory candidate). The videos were recorded to meet a pre-defined marking score. The examiners were required to mark the six videos using pre-set marking criteria (checklist and rating scales). The rating scales covered professionalism, general clinical ability and/or communication skills. For each video, examiners were given four possible options (unsatisfactory, borderline, satisfactory or excellent) and a description of each domain. They were also required to complete a questionnaire to gather their views on the use of this e-learning environment. RESULTS Fifteen examiners completed the task. The total scores given were very similar to the expected scores for the medical emergency and complaint stations; however, this was not the case for the rubber dam station (P-value .017 and .036). This could be attributed to some aspects of rubber dam placement being unclear, as noted in the examiners' questionnaires. There was consistency in the selection of marks on the rating scales (inter-examiner correlation ranged between 0.916 and 0.979). CONCLUSION Further studies are required in the field of e-learning training to calibrate examiners for practical assessment; however, this study provides preliminary evidence to support the use of videos as part of an online training package to calibrate OSCE examiners on the use of rating scales.
Affiliation(s)
- Serena Sinclair
- Institute of Dentistry, University of Aberdeen, Aberdeen, UK
7. Paravattil B, Wilby KJ. Optimizing assessors' mental workload in rater-based assessment: a critical narrative review. Perspectives on Medical Education 2019; 8:339-345. [PMID: 31728841] [PMCID: PMC6904389] [DOI: 10.1007/s40037-019-00535-6]
Abstract
INTRODUCTION Rater-based assessment places high cognitive demands on assessors in health professions education. Rating quality may be influenced by the mental workload required of assessors to complete rating tasks. The objective of this review was to explore interventions or strategies aimed at measuring and reducing mental workload to improve assessment outcomes in health professions education. METHODS A critical narrative review was conducted for English-language articles using the databases PubMed, EMBASE, and Google Scholar from inception until November 2018. Articles were eligible if they reported results of interventions aimed at measuring or reducing mental workload in rater-based assessment. RESULTS A total of six articles were included in the review. All studies were conducted in simulation settings (OSCEs or videotaped interactions). Of the four studies that measured mental workload, none found any reduction in mental workload, as demonstrated by objective secondary-task performance, after interventions of assessor training or reductions in competency dimension assessment. Reductions in competency dimensions, however, did result in improvements in assessment quality across three studies. DISCUSSION The concept of mental workload in assessment in medical education needs further exploration, including investigation into valid measures of assessors' mental workload. It appears that adjusting raters' focus may be a valid strategy to improve assessment outcomes. Future research should be designed to inform how best to reduce load in assessments to improve quality, while balancing the type and quantity of data needed for judgments.
Affiliation(s)
- Kyle John Wilby
- School of Pharmacy, University of Otago, Dunedin, New Zealand
8. Sterz J, Bender B, Linßen S, Stefanescu MC, Höfer SH, Walcher F, Voss J, Seifert LB, Ruesseler M. Effects and Consequences of Being an OSCE Examiner in Surgery-A Qualitative Study. Journal of Surgical Education 2019; 76:433-439. [PMID: 30213735] [DOI: 10.1016/j.jsurg.2018.08.003]
Abstract
OBJECTIVE Even though the objective structured clinical examination (OSCE) is a well-investigated format for competency-based practical examination, only a few studies have explored the motivations of OSCE examiners and their opinions, both positive and negative, toward being an examiner. The aim of this study was to gain insight into the views of OSCE examiners using semi-structured interviews. DESIGN Surgical OSCE examiners at two medical faculties in Germany were queried via semi-structured interviews. The interviews were transcribed verbatim and analyzed using the techniques of structured qualitative content analysis. SETTING This study was conducted at the medical faculties of the Goethe University, Frankfurt, Germany and the Otto-von-Guericke University, Magdeburg, Germany. PARTICIPANTS All of the study participants were surgeons working at the university hospital of one of the faculties. RESULTS A total of 29 examiners were queried until content saturation was achieved. Critical reflection on one's own teaching was described as a major benefit by most participants. Furthermore, they noted that the standards and competencies examined during the OSCE increased the detail of their teaching sessions on the wards. However, the examiners criticized missing operations due to the examination and felt that their work as examiners was not appreciated by superiors. Most of the examiners (22/29) preferred to act as examiners themselves rather than appointing student peer examiners; appointing someone else would mean missing valuable experience useful for their own teaching. CONCLUSIONS Being an OSCE examiner confers several advantages, notably reflection on one's own teaching, which the examiners described as highly valuable.
Affiliation(s)
- Jasmina Sterz
- University Hospital Frankfurt, Department of Trauma, Hand and Reconstructive Surgery, Frankfurt, Germany
- Bernd Bender
- University Hospital Frankfurt, Department of Trauma, Hand and Reconstructive Surgery, Frankfurt, Germany
- Svea Linßen
- University Hospital Frankfurt, Department of Trauma, Hand and Reconstructive Surgery, Frankfurt, Germany
- Maria-Christina Stefanescu
- University Hospital Frankfurt, Department of Pediatric Surgery and Pediatric Urology, Frankfurt, Germany
- Sebastian Herbert Höfer
- University Hospital Frankfurt, Department of Oral, Cranio-Maxillofacial and Facial Plastic Surgery, Frankfurt, Germany
- Felix Walcher
- University Hospital Magdeburg, Department of Trauma Surgery, Magdeburg, Germany
- Julia Voss
- University Hospital Magdeburg, Department of Trauma Surgery, Magdeburg, Germany
- Lukas Benedikt Seifert
- University Hospital Frankfurt, Department of Oral, Cranio-Maxillofacial and Facial Plastic Surgery, Frankfurt, Germany
- Miriam Ruesseler
- University Hospital Frankfurt, Department of Trauma, Hand and Reconstructive Surgery, Frankfurt, Germany
9. [Urology onLINE-webinar for assistants: Implementation and evaluation of a voluntary, web-based e-learning training series for urology assistants in continuing education (Urology onLINE)]. Urologe A 2019; 58:658-665. [PMID: 30623215] [DOI: 10.1007/s00120-018-0845-6]
Abstract
BACKGROUND AND OBJECTIVES Rapid access to new knowledge resources is essential for continuous training and continuing education. Since the training period often coincides with starting a family, time resources are often scarce. For this reason, a new, voluntary, web-based e-learning training series was designed for urological assistants (Urology onLINE). We investigated to what extent such a web-based training series is used by urological training assistants and how it is evaluated by participants and speakers. MATERIALS AND METHODS The training series comprises one month of training on a topic from urological continuing education, presented online, with the content checked by means of interspersed CME questions. Participants in the Urology onLINE training series were evaluated during the investigation period of November 2016 to October 2017. In addition, the individual events were evaluated, as was the workload of the speakers. RESULTS On average, 60 participants attended each event. The events were rated very well, with an average grade of 1.43 ± 0.21. Two-thirds of the participants reported an active, inquiring engagement with the event. The workload for the speakers was less than that of a comparable classroom event. CONCLUSIONS Overall, the new Urology onLINE training series increases spatial and temporal flexibility and complements existing training formats, especially in times of scarce time resources.
10. Woods B, Byrne A, Bodger O. The effect of multitasking on the communication skill and clinical skills of medical students. BMC Medical Education 2018; 18:76. [PMID: 29631572] [PMCID: PMC5892044] [DOI: 10.1186/s12909-018-1183-5]
Abstract
BACKGROUND Mental workload is an abstract concept that treats cognition as a small, finite capacity to process information, with high levels of workload associated with poor performance and error. While an individual may be able to complete two different tasks individually, a combination of tasks may lead to cognitive overload and poor performance. In many high-risk industries, it is common to measure mental workload and then redesign tasks until cognitive overload is avoided. This study aimed to measure the effect of multitasking on the mental workload and performance of medical students completing single and combined clinical tasks. METHODS Medical students who had completed basic clinical skills training at a single undergraduate medical school completed four standardised tasks for four minutes each: inactivity, listening, venepuncture, and a combination of listening and venepuncture. Task performance was measured using standard binary checklists, and mental workload was measured using a secondary-task method. RESULTS The tasks were successfully completed by 40 subjects and, as expected, mental workload increased with task complexity. Combining the two tasks showed no difference in the associated mental workload or in venepuncture performance (p = 0.082). However, during the combined task, listening performance deteriorated (p < 0.001). CONCLUSIONS If staff are expected to complete multiple tasks simultaneously, they may preferentially shed communication tasks in order to maintain their performance of physical tasks, leading to the appearance of poor communication skills. Although this is a small-scale study in medical students, it suggests that active assessment and management of clinician workload in busy clinical settings may be an effective strategy to improve doctor-patient communication.
Affiliation(s)
- Aidan Byrne
- Medical School, Swansea University, Swansea, UK
- Owen Bodger
- Medical School, Swansea University, Swansea, UK
11. Perera DP, Andrades M, Wass V. Peer feedback for examiner quality assurance on MRCGP International South Asia: a mixed methods study. BMC Medical Education 2017; 17:244. [PMID: 29221450] [PMCID: PMC5723026] [DOI: 10.1186/s12909-017-1090-1]
Abstract
BACKGROUND The International Membership Examination (MRCGP[INT]) of the Royal College of General Practitioners UK is a unique collaboration between four South Asian countries with diverse cultures, epidemiology, clinical facilities and resources. In this setting, good quality assurance is imperative to achieve acceptable standards of inter-rater reliability. This study explores the process of peer feedback for examiner quality assurance, with regard to factors affecting the implementation and acceptance of the method. METHODS A sequential mixed methods approach was used, based on focus group discussions with examiners (n = 12) and clinical examination convenors who acted as peer reviewers (n = 4). A questionnaire based on emerging themes and a literature review was then completed by 20 examiners at the subsequent OSCE. Qualitative data were analysed using an iterative reflexive process. Quantitative data were integrated by interpretive analysis looking for convergence, complementarity or dissonance. The qualitative data helped clarify the issues and informed the development of the questionnaire; the quantitative data allowed further refinement of the issues, wider sampling of examiners, and voice for different perspectives. RESULTS Examiners stated specifically that peer feedback gave an opportunity for discussion, standardisation of judgments, and improved discriminatory ability. Interpersonal dynamics, hierarchy and the perceived validity of feedback were major factors influencing its acceptance. Examiners desired increased transparency, accountability and the opportunity for equal partnership within the process. The process was stressful for examiners and reviewers; however, acceptance increased with greater exposure to receiving feedback. The process could be refined to improve acceptability through scrupulous attention to the training and selection of those giving feedback, which would improve its perceived validity, and through improved reviewer feedback skills, enabling better interpersonal dynamics and a more equitable feedback process. It is also important to highlight to examiners during training the role of quality assurance and peer feedback as tools for continuous improvement and the maintenance of standards. CONCLUSION Examiner quality assurance using peer feedback was generally a successful and accepted process. The findings highlight areas for improvement and guide the path towards a model of feedback that is responsive to examiner views and cultural sensibilities.
Affiliation(s)
- D. P. Perera
- Department of Family Medicine, Faculty of Medicine, University of Kelaniya, Ragama, Sri Lanka
- Marie Andrades
- Department of Family Medicine, Aga Khan University Hospital, Karachi, Pakistan
- Val Wass
- Emeritus Professor of Medical Education, Faculty of Health, Keele University, Newcastle under Lyme, UK
12. Brennan PA, Scrimgeour DS, Patel S, Patel R, Griffiths G, Croke DT, Smith L, Arnett R. Changing Objective Structured Clinical Examinations Stations at Lunchtime During All Day Postgraduate Surgery Examinations Improves Examiner Morale and Stress. Journal of Surgical Education 2017; 74:736-747. [PMID: 28131800] [DOI: 10.1016/j.jsurg.2016.12.013]
Abstract
BACKGROUND Human factors are important causes of error, but little is known about their possible effect during objective structured clinical examinations (OSCE). We have previously identified stress and pressure in OSCE examiners in the postgraduate intercollegiate Membership of the Royal College of Surgeons (MRCS) examination. After modifying examination delivery by changing OSCE stations at lunchtime with no demonstrable effect on candidate outcome, we resurveyed examiners to ascertain whether examiner experience was improved. METHOD Examiners (n = 180) from all 4 surgical colleges in the United Kingdom and Ireland were invited to complete the previously validated human factors questionnaire used in 2014. Aggregated scores for each of 4 previously identified factors were compared with the previous data. Unit-weighted z-scores and nonparametric Kruskal-Wallis methods were used to test the hypothesis that there was no difference among the median factor z-scores for each college. Individual Mann-Whitney-Wilcoxon tests (with appropriate Bonferonni corrections) were used to determine any differences between factors and the respective colleges. RESULTS 141 Completed questionnaires were evaluated (78% response rate) and compared with 108 responses (90%) from the original study. Analysis was based on 26 items common to both studies. In 2014, the college with the highest candidate numbers (England) was significantly different in 1 factor (stress and pressure), compared with Edinburgh (Mann-Whitney-Wilcoxon: W = 1524, p < 0.001) and Glasgow colleges (Mann-Whitney-Wilcoxon: W = 104, p = 0.004). No differences were found among colleges in the same factor in 2016, Kruskall-Wallis: (χ2 (3) = 1.73, p = 0.63). Analysis of responses found inconsistency among examiners regarding mistakes or omissions made when candidates were performing well. 
CONCLUSION After making changes to OSCE delivery, factor scores relating to examiner stress and pressure are now improved and consistent across the surgical colleges. Stress and pressure can occur in OSCE examiners, and examination delivery should ideally minimize these issues; the resulting improvement in examiner morale is also likely to benefit candidates.
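The analysis pipeline the abstract describes (an omnibus Kruskal-Wallis test across colleges, followed by pairwise Mann-Whitney-Wilcoxon comparisons with Bonferroni correction) can be sketched as follows. This is an illustrative reconstruction using SciPy on synthetic factor scores; the college names come from the abstract, but all data values here are invented for demonstration and do not reproduce the study's results.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical "stress and pressure" factor z-scores per college
# (synthetic; sample sizes and distributions are assumptions).
colleges = {
    "England": rng.normal(0.4, 1.0, 40),
    "Edinburgh": rng.normal(-0.2, 1.0, 30),
    "Glasgow": rng.normal(-0.1, 1.0, 25),
    "Ireland": rng.normal(0.0, 1.0, 35),
}

# Omnibus test: is there any difference among the four colleges?
h, p_kw = stats.kruskal(*colleges.values())
print(f"Kruskal-Wallis: H = {h:.2f}, p = {p_kw:.3f}")

# Pairwise follow-up: England vs. each other college,
# with a Bonferroni correction for the three comparisons.
others = [c for c in colleges if c != "England"]
for other in others:
    w, p = stats.mannwhitneyu(
        colleges["England"], colleges[other], alternative="two-sided"
    )
    p_adj = min(p * len(others), 1.0)  # Bonferroni-adjusted p-value
    print(f"England vs {other}: W = {w:.0f}, adjusted p = {p_adj:.3f}")
```

In practice the pairwise tests would only be interpreted if the omnibus test rejects the null, which is the logic implied by the abstract's 2014-versus-2016 comparison.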
Affiliation(s)
- Peter A Brennan
- ICBSE, The Royal College of Surgeons of England, London, United Kingdom
- Sheena Patel
- Maxillofacial Unit, Queen Alexandra Hospital, Portsmouth, United Kingdom
- Roshnee Patel
- Maxillofacial Unit, Queen Alexandra Hospital, Portsmouth, United Kingdom
- Gareth Griffiths
- Intercollegiate Surgical Curriculum Project, London, United Kingdom
- David T Croke
- Royal College of Surgeons in Ireland, Dublin, Ireland
- Lee Smith
- ICBSE, The Royal College of Surgeons of England, London, United Kingdom
|
13
|
Reid K, Smallwood D, Collins M, Sutherland R, Dodds A. Taking OSCE examiner training on the road: reaching the masses. MEDICAL EDUCATION ONLINE 2016; 21:32389. [PMID: 27687287 PMCID: PMC5043080 DOI: 10.3402/meo.v21.32389] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/25/2016] [Revised: 08/30/2016] [Accepted: 08/31/2016] [Indexed: 06/06/2023]
Abstract
BACKGROUND To ensure the rigour of objective structured clinical examinations (OSCEs) in assessing medical students, medical school educators must educate examiners with a view to standardising examiner assessment behaviour. Delivering OSCE examiner training is a necessary yet challenging part of the OSCE process. A novel approach to implementing training for current and potential OSCE examiners was trialled by delivering large-group education sessions at major teaching hospitals. METHODS The 'OSCE Roadshow' comprised a short training session delivered in the context of teaching hospital 'Grand Rounds' to current and potential OSCE examiners. The training was developed to educate clinicians about OSCE processes, clarify the examiners' role and required behaviours, and to review marking guides and mark allocation in an effort to standardise OSCE processes and encourage consistency in examiner marking behaviour. A short exercise allowed participants to practise marking a mock OSCE to investigate examiner marking behaviour after the training. RESULTS OSCE Roadshows at four metropolitan and one rural teaching hospital were well received and well attended by 171 clinicians across six sessions. Unexpectedly, medical students also attended in large numbers (n=220). After training, participants' average scores for the mock OSCE clustered closely around the ideal score of 28 (out of 40), and the average scores did not differ according to the levels of clinical experience. CONCLUSION The OSCE Roadshow demonstrated the potential of brief familiarisation training to reach large numbers of current and potential OSCE examiners in a time- and cost-effective manner and to promote standardisation of OSCE processes.
Affiliation(s)
- Katharine Reid
- Department of Medical Education, Melbourne Medical School, The University of Melbourne, Melbourne, Victoria, Australia
- David Smallwood
- Department of Medical Education, Melbourne Medical School, The University of Melbourne, Melbourne, Victoria, Australia
- Margo Collins
- Department of Medical Education, Melbourne Medical School, The University of Melbourne, Melbourne, Victoria, Australia
- Ruth Sutherland
- Department of Medical Education, Melbourne Medical School, The University of Melbourne, Melbourne, Victoria, Australia
- Agnes Dodds
- Department of Medical Education, Melbourne Medical School, The University of Melbourne, Melbourne, Victoria, Australia
|