51. Daelmans HE, Mak-van der Vossen MC, Croiset G, Kusurkar RA. What difficulties do faculty members face when conducting workplace-based assessments in undergraduate clerkships? International Journal of Medical Education 2016; 7:19-24. [PMID: 26803256] [PMCID: PMC4724428] [DOI: 10.5116/ijme.5689.3c7f]
Abstract
OBJECTIVE Workplace-based assessments are based on the principle of providing feedback to medical students on clinical performance in authentic settings. In practice, however, the assessment often overshadows the feedback. The aim of this study was to determine what problems faculty perceived when performing workplace-based assessments and what solutions they suggested to overcome these difficulties. METHODS Discussion meetings were conducted with education coordinators and faculty (n=55) from 11 peripheral hospitals concerning the difficulties encountered when conducting workplace-based assessments. We analysed the reports from these discussion meetings using an integrated approach guided by our research questions to code the data. Two researchers analysed the data independently and resolved differences of opinion through consensus. RESULTS The overarching themes were the difficulties faculty perceived in workplace-based assessments and their suggestions for improvement. Problems included the short duration of clerkships, students choosing the assessment moments, the use of grades for the mini-Clinical Evaluation Exercise, the difficulty in combining teacher and assessor roles and the difficulty in giving fail judgements. Suggestions for improvement included longer clerkship duration, faculty choosing the assessment moments, using a pass/fail system for the mini-Clinical Evaluation Exercise and forward feeding of performance from earlier clerkships following a fail judgement. CONCLUSIONS Our study indicates that faculty perceive difficulties when conducting workplace-based assessments. These assessments need periodic review to understand the difficulties faculty experience in using them; they also require periodic feedback to ensure their proper and effective use.
Affiliation(s)
- Hester E.M. Daelmans
- VUmc School of Medical Sciences, Institute of Education and Training, Amsterdam, the Netherlands
- Gerda Croiset
- VUmc School of Medical Sciences, Institute of Education and Training, Amsterdam, the Netherlands
- Rashmi A. Kusurkar
- VUmc School of Medical Sciences, Institute of Education and Training, Amsterdam, the Netherlands
52. Bok HGJ, Jaarsma DADC, Spruijt A, Van Beukelen P, Van Der Vleuten CPM, Teunissen PW. Feedback-giving behaviour in performance evaluations during clinical clerkships. Medical Teacher 2016; 38:88-95. [PMID: 25776225] [DOI: 10.3109/0142159x.2015.1017448]
Abstract
CONTEXT Narrative feedback documented in performance evaluations by the teacher, i.e. the clinical supervisor, is generally accepted to be essential for workplace learning. Many studies have examined factors that influence the usage of mini-clinical evaluation exercise (mini-CEX) instruments and the provision of feedback, but little is known about how these factors influence teachers' feedback-giving behaviour. In this study, we investigated teachers' use of mini-CEX in performance evaluations to provide narrative feedback in undergraduate clinical training. METHODS We designed an exploratory qualitative study using an interpretive approach. Focusing on the usage of mini-CEX instruments in clinical training, we conducted semi-structured interviews to explore teachers' perceptions. Between February and June 2013, we interviewed 14 clinicians who participated as teachers during undergraduate clinical clerkships. Informed by concepts from the literature, we coded interview transcripts and iteratively reduced and displayed data using template analysis. RESULTS We identified three main themes of interrelated factors that influenced teachers' practice with regard to mini-CEX instruments: teacher-related factors, teacher-student interaction-related factors, and teacher-context interaction-related factors. Four issues (direct observation, the relationship between teacher and student, verbal versus written feedback, and formative versus summative purposes) that are pertinent to workplace-based performance evaluations are presented to clarify how different factors interact and influence teachers' feedback-giving behaviour. Embedding performance observation in clinical practice and establishing trustworthy teacher-student relationships in more longitudinal clinical clerkships were considered important in creating a learning environment that supports and facilitates feedback exchange.
CONCLUSION Teachers' feedback-giving behaviour within the clinical context results from the interaction between personal, interpersonal and contextual factors. Increasing insight into how teachers use mini-CEX instruments in daily practice may offer strategies for creating a professional learning culture in which feedback giving and seeking would be enhanced.
Affiliation(s)
- Pim W Teunissen
- Maastricht University, The Netherlands
- VU University Medical Centre, The Netherlands
53. Lefroy J, Watling C, Teunissen PW, Brand P. Guidelines: the do's, don'ts and don't knows of feedback for clinical education. Perspectives on Medical Education 2015; 4:284-299. [PMID: 26621488] [PMCID: PMC4673072] [DOI: 10.1007/s40037-015-0231-7]
Abstract
INTRODUCTION The guidelines offered in this paper aim to amalgamate the literature on formative feedback into practical Do's, Don'ts and Don't Knows for individual clinical supervisors and for the institutions that support clinical learning. METHODS The authors built consensus through an iterative process. Do's and Don'ts were proposed based on the authors' individual teaching experience and awareness of the literature, and the amalgamated set of guidelines was then refined by all authors, with the evidence summarized for each guideline. Don't Knows were identified as questions that this international group of educators considered important and which, if answered, would change practice. The criteria for inclusion of evidence for these guidelines were not those of a systematic review, so indicators of the strength of these recommendations were developed that combine the evidence with the authors' consensus. RESULTS A set of 32 Do and Don't guidelines, with the important Don't Knows, was compiled along with a summary of the evidence for each. These are divided into guidelines for the individual clinical supervisor giving feedback to their trainee (recommendations about both the process and the content of feedback) and guidelines for the learning culture (what elements of learning culture support the exchange of meaningful feedback, and what elements constrain it?). CONCLUSION Feedback is not easy to get right, but it is essential to learning in medicine, and there is a wealth of evidence supporting the Do's and warning against the Don'ts. Further research into the critical Don't Knows of feedback is required. A new definition is offered: helpful feedback is a supportive conversation that clarifies the trainee's awareness of their developing competencies, enhances their self-efficacy for making progress, challenges them to set objectives for improvement, and facilitates their development of strategies to enable that improvement to occur.
Affiliation(s)
- Janet Lefroy
- Keele University School of Medicine, Clinical Education Centre RSUH, ST4 6QG, Staffordshire, UK
- Chris Watling
- Schulich School of Medicine and Dentistry, Western University, Ontario, Canada
- Pim W Teunissen
- Maastricht University and VU University Medical Center, Amsterdam, The Netherlands
- Paul Brand
- Isala Klinieken, Zwolle, The Netherlands
54. Rogausch A, Beyeler C, Montagne S, Jucker-Kupper P, Berendonk C, Huwendiek S, Gemperli A, Himmel W. The influence of students' prior clinical skills and context characteristics on mini-CEX scores in clerkships--a multilevel analysis. BMC Medical Education 2015; 15:208. [PMID: 26608836] [PMCID: PMC4658793] [DOI: 10.1186/s12909-015-0490-3]
Abstract
BACKGROUND In contrast to objective structured clinical examinations (OSCEs), mini-clinical evaluation exercises (mini-CEXs) take place at the clinical workplace. As both mini-CEXs and OSCEs assess clinical skills, but within different contexts, this study analyzed to what degree students' mini-CEX scores can be predicted by their recent OSCE scores and/or context characteristics. METHODS Medical students participated in an end-of-Year-3 OSCE and in 11 mini-CEXs during 5 different clerkships of Year 4. For each student, we computed the mean score across 9 clinical skills OSCE stations and the mean 'overall' and 'domain' mini-CEX scores, averaged over all of that student's mini-CEXs. Linear regression analyses including random effects were used to predict mini-CEX scores from OSCE performance and characteristics of clinics, trainers, students and assessments. RESULTS A total of 512 trainers in 45 clinics provided 1783 mini-CEX ratings for 165 students; OSCE results were available for 144 students (87%). Most influential for the prediction of 'overall' mini-CEX scores was the trainers' clinical position, with a regression coefficient of 0.55 (95%-CI: 0.26-0.84; p < .001) for residents compared to heads of department. Highly complex tasks and assessments taking place in large clinics also significantly enhanced 'overall' mini-CEX scores. In contrast, high OSCE performance did not significantly increase 'overall' mini-CEX scores. CONCLUSION In our study, mini-CEX scores depended more on context characteristics than on students' clinical skills as demonstrated in an OSCE. We discuss ways either to enhance the scores' validity or to rely on narrative comments only.
Affiliation(s)
- Anja Rogausch
- Department of Assessment and Evaluation, Institute of Medical Education, University of Bern, Bern, Switzerland; Clinic Sonnenhalde, Riehen, Switzerland
- Christine Beyeler
- Department of Assessment and Evaluation, Institute of Medical Education, University of Bern, Bern, Switzerland
- Stephanie Montagne
- Department of Assessment and Evaluation, Institute of Medical Education, University of Bern, Bern, Switzerland
- Patrick Jucker-Kupper
- Department of Assessment and Evaluation, Institute of Medical Education, University of Bern, Bern, Switzerland
- Christoph Berendonk
- Department of Assessment and Evaluation, Institute of Medical Education, University of Bern, Bern, Switzerland
- Sören Huwendiek
- Department of Assessment and Evaluation, Institute of Medical Education, University of Bern, Bern, Switzerland
- Armin Gemperli
- Department of Health Sciences and Health Policy, University of Lucerne, Lucerne, Switzerland; Swiss Paraplegic Research, Nottwil, Switzerland
- Wolfgang Himmel
- Department of General Practice, University Medical Center, Göttingen, Germany
55. Apramian T, Cristancho S, Watling C, Ott M, Lingard L. Thresholds of Principle and Preference: Exploring Procedural Variation in Postgraduate Surgical Education. Academic Medicine 2015; 90:S70-6. [PMID: 26505105] [PMCID: PMC5578750] [DOI: 10.1097/acm.0000000000000909]
Abstract
BACKGROUND Expert physicians develop their own ways of doing things. The influence of such practice variation on clinical learning is insufficiently understood. Our grounded theory study explored how residents make sense of, and behave in relation to, the procedural variations of faculty surgeons. METHOD We sampled senior postgraduate surgical residents. Using a constructivist grounded theory approach, we drew on marginal participant observation in the operating room across 56 surgical cases (146 hours), field interviews (38) and formal interviews (6) to develop a theoretical framework for how residents make sense of and deal with procedural variations. Data analysis used constant comparison to iteratively refine the framework and data collection until theoretical saturation was reached. RESULTS The core category of the constructed theory, called thresholds of principle and preference, captures how faculty members position some procedural variations as negotiable and others as non-negotiable. The term thresholding was coined to describe residents' daily experiences of spotting, mapping and negotiating their faculty members' thresholds and defending their own emerging thresholds. CONCLUSIONS Thresholds of principle and preference play a key role in workplace-based medical education. Postgraduate medical learners are occupied on a day-to-day level with thresholding and attempting to make sense of the procedural variations of faculty. Workplace-based teaching and assessment should include an understanding of the integral role of thresholding in shaping learners' development. Future research should explore the nature and impact of thresholding in workplace-based learning beyond the surgical context.
56. Yeates P, Cardell J, Byrne G, Eva KW. Relatively speaking: contrast effects influence assessors' scores and narrative feedback. Medical Education 2015; 49:909-919. [PMID: 26296407] [DOI: 10.1111/medu.12777]
Abstract
CONTEXT Prior research has shown that the scores assessors assign can be biased away from the standard of preceding performances (i.e. 'contrast effects' occur). OBJECTIVES This study examines the mechanism and robustness of these findings to advance understanding of assessor cognition. We test the influence of the immediately preceding performance relative to that of a series of prior performances. Further, we examine whether assessors' narrative comments are similarly influenced by contrast effects. METHODS Clinicians (n = 61) were randomised to three groups in a blinded, Internet-based experiment. Participants viewed identical videos of good, borderline and poor performances by first-year doctors in varied orders. They provided scores and written feedback after each video. Narrative comments were blindly content-analysed to generate measures of valence and content. Variability of narrative comments and scores was compared between groups. RESULTS Comparisons indicated contrast effects after a single performance. When a good performance was preceded by a poor performance, ratings were higher (mean 5.01, 95% confidence interval [CI] 4.79-5.24) than when observation of the good performance was unbiased (mean 4.36, 95% CI 4.14-4.60; p < 0.05, d = 1.3). Similarly, borderline performance was rated lower when preceded by good performance (mean 2.96, 95% CI 2.56-3.37) than when viewed without preceding bias (mean 3.55, 95% CI 3.17-3.92; p < 0.05, d = 0.7). The series of ratings participants assigned suggested that the magnitude of contrast effects is determined by an averaging of recent experiences. The valence (but not content) of narrative comments showed contrast effects similar to those found in numerical scores. CONCLUSIONS These findings are consistent with research from behavioural economics and psychology suggesting that judgement tends to be relative in nature. The observation that the valence of narrative comments is similarly influenced suggests these effects represent more than difficulty in translating impressions into a number. The extent to which such factors affect assessment in practice remains to be determined, as the influence is likely to depend on context.
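The effect sizes reported in this abstract (d = 1.3, d = 0.7) are Cohen's d values. As a hedged illustration of how such a standardized effect size is computed (this is not the authors' code; the standard deviations and group sizes below are invented for illustration, while the two means are the 'good performance' means quoted above):

```python
import math

def cohens_d(mean1, mean2, sd1, sd2, n1, n2):
    """Cohen's d for two independent groups, using the pooled standard deviation."""
    pooled_var = ((n1 - 1) * sd1 ** 2 + (n2 - 1) * sd2 ** 2) / (n1 + n2 - 2)
    return (mean1 - mean2) / math.sqrt(pooled_var)

# Means from the abstract (5.01 vs. 4.36); SDs and ns are hypothetical.
print(round(cohens_d(5.01, 4.36, 0.5, 0.5, 20, 20), 2))  # prints 1.3
```

With equal SDs of 0.5, a mean difference of 0.65 corresponds to d = 1.3, the magnitude the abstract describes for the contrast effect on good performances.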
Affiliation(s)
- Peter Yeates
- Centre for Respiratory Medicine and Allergy, Institute of Inflammation and Repair, University of Manchester, Manchester, UK
- Jenna Cardell
- Royal Bolton Hospital, Bolton NHS Foundation Trust, Bolton, Lancashire, UK
- Gerard Byrne
- Health Education North West, Health Education England, Manchester, UK
- Kevin W Eva
- Centre for Health Education Scholarship, Division of Medicine, University of British Columbia, Vancouver, BC, Canada
57. Moonen-van Loon JMW, Overeem K, Govaerts MJB, Verhoeven BH, van der Vleuten CPM, Driessen EW. The reliability of multisource feedback in competency-based assessment programs: the effects of multiple occasions and assessor groups. Academic Medicine 2015; 90:1093-9. [PMID: 25993283] [DOI: 10.1097/acm.0000000000000763]
Abstract
PURPOSE Residency programs around the world use multisource feedback (MSF) to evaluate learners' performance. Studies of the reliability of MSF show mixed results. This study aimed to identify the reliability of MSF as practiced across occasions with varying numbers of assessors from different professional groups (physicians and nonphysicians) and the effect on the reliability of the assessment for different competencies when completed by both groups. METHOD The authors collected data from 2008 to 2012 from electronically completed MSF questionnaires. In total, 428 residents completed 586 MSF occasions, and 5,020 assessors provided feedback. The authors used generalizability theory to analyze the reliability of MSF for multiple occasions, different competencies, and varying numbers of assessors and assessor groups across multiple occasions. RESULTS A reliability coefficient of 0.800 can be achieved with two MSF occasions completed by at least 10 assessors per group or with three MSF occasions completed by 5 assessors per group. Nonphysicians' scores for the "Scholar" and "Health advocate" competencies and physicians' scores for the "Health advocate" competency had a negative effect on the composite reliability. CONCLUSIONS A feasible number of assessors per MSF occasion can reliably assess residents' performance. Scores from a single occasion should be interpreted cautiously. However, every occasion can provide valuable feedback for learning. This research confirms that the (unique) characteristics of different assessor groups should be considered when interpreting MSF results. Reliability seems to be influenced by the included assessor groups and competencies. These findings will enhance the utility of MSF during residency training.
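The trade-off this abstract reports (a reliability of 0.800 from two MSF occasions with 10 assessors per group, or three occasions with 5 per group) comes from a decision study in generalizability theory. The sketch below illustrates only the underlying idea, under a deliberately simplified model with a single error term and hypothetical variance components; the study's actual analysis distinguishes more facets and separate assessor groups:

```python
def projected_reliability(var_person, var_error, n_occasions, n_assessors):
    """Decision-study projection: the error variance attached to a resident's
    mean score shrinks as scores are averaged over more occasions and more
    assessors, so the reliability coefficient rises."""
    error = var_error / (n_occasions * n_assessors)
    return var_person / (var_person + error)

# Hypothetical variance components, for illustration only:
vp, ve = 0.2, 1.0
print(round(projected_reliability(vp, ve, n_occasions=2, n_assessors=10), 2))  # prints 0.8
```

Under this toy model, any combination of occasions and assessors with the same total number of observations yields the same projected reliability; the paper's point is that several feasible designs can reach the 0.800 threshold.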
Affiliation(s)
- J.M.W. Moonen-van Loon, postdoctoral researcher, Department of Educational Development and Research, Maastricht University, Maastricht, The Netherlands
- K. Overeem, postdoctoral researcher, Department of Educational Development and Research, Maastricht University, Maastricht, The Netherlands
- M.J.B. Govaerts, assistant professor, Department of Educational Development and Research, Maastricht University, Maastricht, The Netherlands
- B.H. Verhoeven, pediatric surgeon, Department of Surgery, Radboud University Medical Center, Nijmegen, and assistant professor, Department of Educational Development and Research, Maastricht University, Maastricht, The Netherlands
- C.P.M. van der Vleuten, professor of education, Department of Educational Development and Research, Maastricht University, Maastricht, The Netherlands
- E.W. Driessen, associate professor of education, Department of Educational Development and Research, Maastricht University, Maastricht, The Netherlands
58. Flum E, Maagaard R, Godycki-Cwirko M, Scarborough N, Scherpbier N, Ledig T, Roos M, Steinhäuser J. Assessing family medicine trainees--what can we learn from the European neighbours? GMS Zeitschrift für Medizinische Ausbildung 2015; 32:Doc21. [PMID: 26038686] [PMCID: PMC4446652] [DOI: 10.3205/zma000963]
Abstract
Background: Although demands on family physicians (FPs) are to a large extent similar across the European Union, uniform assessment standards for family medicine (FM) specialty training do not exist. The aim of this pilot study was to elicit and compare the modalities and assessment methods of FM specialty training in five European countries. Methods: A semi-structured survey was undertaken with a convenience sample in five European countries (Denmark, Germany, Poland, the Netherlands and the United Kingdom). Respondents were asked to answer ten items about aspects of FM specialty training and assessment methods in their respective countries. Where available, these data were supplemented with information from official websites of the countries involved. Results: FM specialty training is organised heterogeneously in the surveyed countries. Training periods range from three to five years, in some countries requiring a foundation programme of up to two years. Most countries perform longitudinal assessment during FM specialty training, combining a competence-based approach with additional formative and summative assessment. There is some evidence on the assessment methods used; however, the methods and costs of assessment differ markedly between the participating countries. Conclusions: Longitudinal and competence-based assessment is the presently preferred approach for FM specialty training. Countries that use less multifaceted assessment methods could learn from best practice. Potential changes have significant cost implications.
Affiliation(s)
- Elisabeth Flum
- University Hospital Heidelberg, Department of General Practice and Health Services Research, Heidelberg, Germany
- Roar Maagaard
- University of Aarhus, Department of Medical Education, Aarhus, Denmark
- Nynke Scherpbier
- Radboud University Medical Centre, Department of Primary and Community Care, Nijmegen, The Netherlands
- Thomas Ledig
- University Hospital Heidelberg, Department of General Practice and Health Services Research, Heidelberg, Germany
- Marco Roos
- University of Erlangen-Nuremberg, Institute of General Practice, Erlangen, Germany
- Jost Steinhäuser
- University Hospital Heidelberg, Department of General Practice and Health Services Research, Heidelberg, Germany; University Hospital Schleswig-Holstein, Institute of Family Medicine, Lübeck, Germany
59. Heeneman S, Oudkerk Pool A, Schuwirth LWT, van der Vleuten CPM, Driessen EW. The impact of programmatic assessment on student learning: theory versus practice. Medical Education 2015; 49:487-98. [PMID: 25924124] [DOI: 10.1111/medu.12645]
Abstract
CONTEXT It is widely acknowledged that assessment can affect student learning. In recent years, attention has been called to 'programmatic assessment', which is intended to optimise both learning functions and decision functions at the programme level of assessment, rather than at the level of individual assessment methods. Although the concept is attractive, little research into its intended effects on students and their learning has been conducted. OBJECTIVES This study investigated the elements of programmatic assessment that students perceived as supporting or inhibiting learning, and the factors that influenced the active construction of their learning. METHODS The study was conducted in a graduate-entry medical school that implemented programmatic assessment, in which all assessment information, feedback and reflective activities are combined into a comprehensive, holistic programme of assessment. We used a qualitative approach and interviewed students (n = 17) in the pre-clinical phase of the programme about their perceptions of programmatic assessment and their learning approaches. Data were scrutinised using theory-based thematic analysis. RESULTS Elements of the comprehensive programme of assessment, such as feedback, portfolios, assessments and assignments, were found to have both supporting and inhibiting effects on learning, and these elements influenced students' construction of learning. Findings showed that: (i) students perceived formative assessment as summative; (ii) programmatic assessment was an important trigger for learning; and (iii) the portfolio's reflective activities were appreciated for their generation of knowledge, the lessons drawn from feedback, and the opportunities for follow-up. Some students, however, were less appreciative of reflective activities; for these students, the elements perceived as inhibiting seemed to dominate the learning response. CONCLUSIONS The active participation of learners in their own learning is possible when learning is supported by programmatic assessment. Certain features of the comprehensive programme of assessment were found to influence student learning, and this influence can either support or inhibit students' learning responses.
Affiliation(s)
- Sylvia Heeneman
- Department of Pathology, Faculty of Health, Medicine and Life Sciences, Maastricht University, Maastricht, The Netherlands; School of Health Professions Education, Faculty of Health, Medicine and Life Sciences, Maastricht University, Maastricht, The Netherlands
60. Al-Wassia H, Al-Wassia R, Shihata S, Park YS, Tekian A. Using patients' charts to assess medical trainees in the workplace: a systematic review. Medical Teacher 2015; 37 Suppl 1:S82-S87. [PMID: 25649102] [DOI: 10.3109/0142159x.2015.1006599]
Abstract
OBJECTIVES The objective of this review is to summarize and critically appraise existing evidence on the use of chart-stimulated recall (CSR) and case-based discussion (CBD) as assessment tools for medical trainees. METHODS Medline, Embase, CINAHL, PsycINFO, Educational Resources Information Centre (ERIC), Web of Science, and the Cochrane Central Register of Controlled Trials were searched for original articles on the use of CSR or CBD as an assessment method for trainees in all medical specialties. RESULTS Four qualitative and three observational non-comparative studies were eligible for this review. The number of patient-chart encounters needed to achieve sufficient reliability varied across studies. None of the included studies evaluated the content validity of the tool. Both trainees and assessors expressed a high level of satisfaction with the tool; however, inadequate training, differing interpretations of the scoring scales, and the skills needed to give feedback were identified as limitations in conducting the assessment. CONCLUSION There is still no compelling evidence for the use of patients' charts to evaluate medical trainees in the workplace. A body of evidence that is valid and reliable and that documents the educational effect of using patients' charts to assess medical trainees is needed.
61. Lockyer J, Horsley T, Zeiter J, Campbell C. Role for assessment in maintenance of certification: physician perceptions of assessment. The Journal of Continuing Education in the Health Professions 2015; 35:11-17. [PMID: 25799968] [DOI: 10.1002/chp.21265]
Abstract
INTRODUCTION The Royal College of Physicians and Surgeons of Canada modified its Maintenance of Certification (MOC) framework in 2011 to further incentivize assessment activities compared to group and self-learning. The purpose of this study was to explore physicians' perceptions of their access to assessment activities, barriers to participation in assessment, and the need for the Royal College to further support its fellows in gaining access to assessment activities. METHODS A questionnaire-based survey was sent to all participants of the MOC program as part of a program evaluation examining recent changes to the program. RESULTS A total of 5259 respondents contributed responses. Most physicians were comfortable with the revised framework for assessment, while approximately 40% were neutral regarding whether lack of access to self-assessment activities was a problem. Respondents expressed a need for more self-assessment programs, particularly those developed outside of Canada. Neither a lack of feedback about performance nor discomfort with recording performance gaps was perceived as a barrier to participation in assessment activities. Physician comments were consistent with the quantitative data and elaborated on the need to develop and recognize more assessment activities. DISCUSSION Physicians accepted the revised MOC program framework but perceived difficulty in accessing assessment programs, activities, and tools. As the framework changed again in January 2014, requiring all fellows and MOC program participants to complete at least 25 credits in each section of the MOC program (including assessment) during their new 5-year MOC cycle, additional resources will be needed to support opportunities for physicians to engage in assessment.
62.
Abstract
A 12-month pilot was carried out on assessments for learning and assessments of learning as part of workplace-based assessments (WPBAs) in postgraduate medical education. It took place in three regions, with core medical trainees and higher specialty medical trainees participating. Focus groups and questionnaires were used to investigate trainees' and trainers' experiences and perceptions of assessments for learning. The study demonstrated that trainees and trainers perceived the newly introduced assessments for learning--supervised learning events (SLEs)--as learning tools. However, SLEs were often undertaken with no prior organisation and with no direct observation, regardless of the underlying purposes and methods of the WPBAs. Feedback following SLEs was often absent, delayed or non-specific, which impeded their educational value. Trainee and trainer disengagement was one of the contributing factors. These findings are valuable in informing and facilitating future successful implementation of assessments for learning.
Collapse
|
63
|
Rees CE, Cleland JA, Dennis A, Kelly N, Mattick K, Monrouxe LV. Supervised learning events in the foundation programme: a UK-wide narrative interview study. BMJ Open 2014; 4:e005980. [PMID: 25324323 PMCID: PMC4202004 DOI: 10.1136/bmjopen-2014-005980] [Citation(s) in RCA: 26] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 11/29/2022] Open
Abstract
OBJECTIVES To explore Foundation trainees' and trainers' understandings and experiences of supervised learning events (SLEs), compared with workplace-based assessments (WPBAs), and their suggestions for developing SLEs. DESIGN A narrative interview study based on 55 individual and 19 group interviews. SETTING UK-wide study across three sites in England, Scotland and Wales. PARTICIPANTS Using maximum-variation sampling, 70 Foundation trainees and 40 trainers were recruited, shared their understandings and experiences of SLEs/WPBAs and made recommendations for future practice. METHODS Data were analysed using thematic and discourse analysis and narrative analysis of one exemplar personal incident narrative. RESULTS While participants volunteered understandings of SLEs as learning and assessment, they typically volunteered understandings of WPBAs as assessment. Trainers seemed more likely than trainees to describe SLEs as assessment and as a 'safety net' to protect patients. We identified 333 personal incident narratives in our data (221 SLEs; 72 WPBAs). There was perceived variability in the conduct of SLEs/WPBAs in terms of their initiation, tools used, feedback and finalisation. Numerous factors at individual, interpersonal, cultural and technological levels were thought to facilitate/hinder learning. SLE narratives were more likely to be evaluated positively than WPBA narratives overall and by trainees specifically. Participants made sense of their experiences, emotions, identities and relationships through their narratives. They provided numerous suggestions for improving SLEs at individual, interpersonal, cultural and technological levels. CONCLUSIONS Our findings provide tentative support for the shift to formative learning with the introduction of SLEs, albeit raising concerns around trainees' and trainers' understandings about SLEs. We identify five key educational recommendations from our study.
Additional research is now needed to explore further the complexities around SLEs within workplace learning.
Collapse
Affiliation(s)
- Charlotte E Rees
- Centre for Medical Education, Medical Education Institute, School of Medicine, University of Dundee, Dundee, UK
- Jennifer A Cleland
- Division of Medical and Dental Education, University of Aberdeen, Aberdeen, UK
- Ashley Dennis
- Centre for Medical Education, Medical Education Institute, School of Medicine, University of Dundee, Dundee, UK
- Narcie Kelly
- University of Exeter Medical School, University of Exeter, Exeter, UK
- Karen Mattick
- University of Exeter Medical School, University of Exeter, Exeter, UK
- Lynn V Monrouxe
- Office of Research and Scholarship, Institute of Medical Education, Cardiff University, Cardiff, UK
Collapse
|
64
|
Montagne S, Rogausch A, Gemperli A, Berendonk C, Jucker-Kupper P, Beyeler C. The mini-clinical evaluation exercise during medical clerkships: are learning needs and learning goals aligned? MEDICAL EDUCATION 2014; 48:1008-19. [PMID: 25200021 DOI: 10.1111/medu.12513] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/18/2013] [Revised: 02/24/2014] [Accepted: 04/22/2014] [Indexed: 05/11/2023]
Abstract
OBJECTIVES The generation of learning goals (LGs) that are aligned with learning needs (LNs) is one of the main purposes of formative workplace-based assessment. In this study, we aimed to analyse how often trainer-student pairs identified corresponding LNs in mini-clinical evaluation exercise (mini-CEX) encounters and to what degree these LNs aligned with recorded LGs, taking into account the social environment (e.g. clinic size) in which the mini-CEX was conducted. METHODS Retrospective analyses of adapted mini-CEX forms (trainers' and students' assessments) completed by all Year 4 medical students during clerkships were performed. Learning needs were defined by the lowest score(s) assigned to one or more of the mini-CEX domains. Learning goals were categorised qualitatively according to their correspondence with the six mini-CEX domains (e.g. history taking, professionalism). Following descriptive analyses of LNs and LGs, multi-level logistic regression models were used to predict LGs by identified LNs and social context variables. RESULTS A total of 512 trainers and 165 students conducted 1783 mini-CEXs (98% completion rate). Concordantly, trainer-student pairs most often identified LNs in the domains of 'clinical reasoning' (23% of 1167 complete forms), 'organisation/efficiency' (20%) and 'physical examination' (20%). At least one 'defined' LG was noted on 313 student forms (18% of 1710). Of the 446 LGs noted in total, the most frequently noted were 'physical examination' (49%) and 'history taking' (21%). Corresponding LNs as well as social context factors (e.g. clinic size) were found to be predictors of these LGs. CONCLUSIONS Although trainer-student pairs often agreed in the LNs they identified, many assessments did not result in aligned LGs. 
The sparseness of LGs, their dependency on social context and their partial non-alignment with students' LNs raise questions about how the full potential of the mini-CEX as not only a 'diagnostic' but also an 'educational' tool can be exploited.
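The study's analysis predicts a binary outcome (whether a matching learning goal was recorded) from identified learning needs using multi-level logistic regression. As a minimal sketch of the underlying idea, the single-level sketch below fits a plain logistic regression by gradient descent on simulated data; the function name, effect size and all data are hypothetical, and the study's actual models additionally accounted for social context variables and clustering (e.g. by clinic), which this sketch omits.

```python
import numpy as np

def fit_logistic(X, y, lr=0.1, steps=5000):
    """Plain gradient-descent logistic regression (illustrative only)."""
    X = np.column_stack([np.ones(len(X)), X])   # prepend an intercept column
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))        # predicted probabilities
        w -= lr * X.T @ (p - y) / len(y)        # average gradient step
    return w

# Simulated data: was a matching learning goal recorded (1/0),
# given that a corresponding learning need was identified (1/0)?
rng = np.random.default_rng(0)
ln = rng.integers(0, 2, 400)                    # learning need indicator
logit = -2.0 + 1.5 * ln                         # assumed (made-up) effect
lg = (rng.random(400) < 1 / (1 + np.exp(-logit))).astype(float)
w = fit_logistic(ln.reshape(-1, 1), lg)         # w[1]: estimated LN effect
```

A positive fitted coefficient on the learning-need indicator would correspond to the paper's finding that corresponding LNs predict recorded LGs.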
Collapse
Affiliation(s)
- Stephanie Montagne
- Institute of Medical Education, Faculty of Medicine, University of Bern, Bern, Switzerland
Collapse
|
65
|
Scheele F, Novak Z, Vetter K, Caccia N, Goverde A. Obstetrics and gynaecology training in Europe needs a next step. Eur J Obstet Gynecol Reprod Biol 2014; 180:130-2. [DOI: 10.1016/j.ejogrb.2014.04.014] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/08/2014] [Accepted: 04/08/2014] [Indexed: 11/30/2022]
|
66
|
van Loon KA, Driessen EW, Teunissen PW, Scheele F. Experiences with EPAs, potential benefits and pitfalls. MEDICAL TEACHER 2014; 36:698-702. [PMID: 24804911 DOI: 10.3109/0142159x.2014.909588] [Citation(s) in RCA: 29] [Impact Index Per Article: 2.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/10/2023]
Abstract
Reforms in postgraduate medical education (PGME) exposed a gap between educational theory and clinical practice. Entrustable Professional Activities (EPAs) were introduced to assist clinicians in bridging this gap and to create better consonance between the intended and the enacted curriculum. In this viewpoint paper, we discuss the potential and the pitfalls of using EPAs in PGME. EPAs promise an effective way of teaching abstract competencies through a curriculum based on real-life professional activities that are suitable for clinical assessment. Summative judgement is used to entrust a resident, step by step, with a certain EPA, resulting in increasing independent practice. However, we argue that the success of EPAs depends on (1) a balance between brief, focused descriptions and the requirement for sufficient detail, and (2) a precondition: a mature and flexible workplace learning environment.
Collapse
|
67
|
Jenkins L, Mash B, Derese A. Reliability testing of a portfolio assessment tool for postgraduate family medicine training in South Africa. Afr J Prim Health Care Fam Med 2013. [PMCID: PMC4502840 DOI: 10.4102/phcfm.v5i1.577] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/04/2022] Open
Abstract
Background Competency-based education and the validity and reliability of workplace-based assessment of postgraduate trainees have received increasing attention worldwide. Family medicine was recognised as a speciality in South Africa six years ago and a satisfactory portfolio of learning is a prerequisite to sit the national exit exam. A massive scaling up of the number of family physicians is needed in order to meet the health needs of the country. Aim The aim of this study was to develop a reliable, robust and feasible portfolio assessment tool (PAT) for South Africa. Methods Six raters each rated nine portfolios from the Stellenbosch University programme, using the PAT, to test for inter-rater reliability. This rating was repeated three months later to determine test–retest reliability. Following initial analysis and feedback the PAT was modified and the inter-rater reliability again assessed on nine new portfolios. An acceptable intra-class correlation was considered to be > 0.80. Results The total score was found to be reliable, with a coefficient of 0.92. For test–retest reliability, the difference in mean total score was 1.7%, which was not statistically significant. Amongst the subsections, only assessment of the educational meetings and the logbook showed reliability coefficients > 0.80. Conclusion This was the first attempt to develop a reliable, robust and feasible national portfolio assessment tool to assess postgraduate family medicine training in the South African context. The tool was reliable for the total score, but the low reliability of several sections in the PAT helped us to develop 12 recommendations regarding the use of the portfolio, the design of the PAT and the training of raters.
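The reliability analysis above rests on the intra-class correlation coefficient (ICC), with > 0.80 taken as acceptable. As a minimal sketch, the function below computes ICC(2,1), the two-way random-effects, absolute-agreement form commonly used for inter-rater reliability; the abstract does not state which ICC variant was used, so that choice, the function name and the toy rating matrices are all assumptions for illustration.

```python
import numpy as np

def icc2_1(x):
    """ICC(2,1): two-way random effects, absolute agreement, single rating.
    x: (n subjects, k raters) matrix of scores."""
    x = np.asarray(x, dtype=float)
    n, k = x.shape
    m = x.mean()                                     # grand mean
    r = x.mean(axis=1)                               # subject (row) means
    c = x.mean(axis=0)                               # rater (column) means
    msr = k * ((r - m) ** 2).sum() / (n - 1)         # between-subjects MS
    msc = n * ((c - m) ** 2).sum() / (k - 1)         # between-raters MS
    sse = ((x - r[:, None] - c[None, :] + m) ** 2).sum()
    mse = sse / ((n - 1) * (k - 1))                  # residual MS
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
```

Raters in perfect agreement yield an ICC of 1.0, and disagreement between raters pulls the coefficient down, which is why low section-level coefficients in a portfolio tool point to rater training or item redesign.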
Collapse
Affiliation(s)
- Louis Jenkins
- Division of Family Medicine and Primary Care, Faculty of Health Sciences, University of Stellenbosch, South Africa
- Western Cape Department of Health, Eden district, George Hospital, South Africa
- Bob Mash
- Division of Family Medicine and Primary Care, Faculty of Health Sciences, University of Stellenbosch, South Africa
- Anselme Derese
- Centre for Education Development, Faculty of Medicine and Health Sciences, Ghent University, Belgium
Collapse
|