1
Sekar DR, Ehrenberger KA, Dakroub A, Rothenberger S, Grau T, Carter AE. What/Why/When/Where/How Framework and Faculty Development Workshop to Improve the Utility of Narrative Evaluations for Assessing Internal Medicine Residents. MedEdPORTAL 2024;20:11420. PMID: 39081631; PMCID: PMC11286767; DOI: 10.15766/mep_2374-8265.11420.
Abstract
Introduction Clinical competency committees (CCCs) rely on narrative evaluations to assess resident competency. Despite the emphasis on these evaluations, their utility is frequently hindered by a lack of sufficient detail for use by CCCs. Prior resources have sought to improve the specificity of comments and the use of evaluations by residents, but not their utility for CCCs in assessing trainee performance. Methods We developed a 1-hour faculty development workshop, focused on a newly devised framework, for Department of Medicine faculty supervising internal medicine residents. The what/why/when/where/how framework highlighted key features of useful narrative evaluations: behaviors of strength and growth, contextualized observations, improvement over time, and actionable next steps. Workshop sessions were implemented at a large multisite internal medicine residency program. We assessed the workshop by measuring attendee confidence and skill in writing narrative evaluations useful for CCCs. Skill was assessed with a rubric adapted from the literature on the utility of narrative evaluations. Results Fifty-four participants started the presurvey, and 33 completed the workshop, for a response rate of 61%. Participant confidence improved across the preworkshop, postworkshop, and 3-month postworkshop surveys. Total utility scores improved in mock evaluations from 12.4 to 15.5 and in real evaluations from 13.7 to 15.0, but only some subcomponent scores improved, with fewer improving in the real evaluations. Discussion A short workshop focusing on our framework improved participants' confidence and the utility of their narrative evaluations of internal medicine residents for use by CCCs. Next steps should include developing more challenging components of narrative evaluations for continued improvement in trainee performance and faculty assessment.
Affiliation(s)
- Dheepa R. Sekar
- Assistant Professor, Division of General Internal Medicine and Geriatrics, Department of Medicine, Emory University School of Medicine
- Kristen Ann Ehrenberger
- Assistant Professor, Division of General Internal Medicine, Department of Medicine and Department of Pediatrics, University of Pittsburgh School of Medicine
- Allie Dakroub
- Assistant Professor, Division of General Internal Medicine, Department of Medicine and Department of Pediatrics, University of Pittsburgh School of Medicine
- Scott Rothenberger
- Assistant Professor, Division of General Internal Medicine, Department of Medicine, University of Pittsburgh School of Medicine
- Thomas Grau
- Associate Professor, Division of General Internal Medicine, Department of Medicine, University of Pittsburgh School of Medicine; Associate Chief of Staff of Education, VA Pittsburgh Healthcare System
- Andrea E. Carter
- Assistant Professor, Division of General Internal Medicine, Department of Medicine, University of Pittsburgh School of Medicine
2
Choo EK, Woods R, Walker ME, O'Brien JM, Chan TM. The Quality of Assessment for Learning score for evaluating written feedback in anesthesiology postgraduate medical education: a generalizability and decision study. Canadian Medical Education Journal 2023;14:78-85. PMID: 38226296; PMCID: PMC10787859; DOI: 10.36834/cmej.75876.
Abstract
Background Competency-based residency programs depend on high-quality feedback from the assessment of entrustable professional activities (EPAs). The Quality of Assessment for Learning (QuAL) score is a tool developed to rate the quality of narrative comments in workplace-based assessments. It has validity evidence for scoring the quality of narrative feedback provided to emergency medicine residents, but it is unknown whether the QuAL score is reliable for assessing narrative feedback in other postgraduate programs. Methods Fifty sets of EPA narratives from a single academic year at our competency-based medical education postgraduate anesthesia program were selected by stratified sampling within defined parameters (e.g., resident gender and stage of training, assessor gender, Competency By Design training level, and word count ≥17 or <17 words). Two competency committee members and two medical students rated the quality of narrative feedback using a utility score and the QuAL score. We used Kendall's tau-b coefficient to compare the perceived utility of the written feedback with the quality assessed by the QuAL score. The authors used generalizability and decision studies to estimate the reliability and generalizability coefficients. Results Both the faculty's utility and QuAL scores (r = 0.646, p < 0.001) and the trainees' utility and QuAL scores (r = 0.667, p < 0.001) were moderately correlated. Results from the generalizability studies showed that utility scores were reliable with two raters for both faculty (Epsilon = 0.87, Phi = 0.86) and trainees (Epsilon = 0.88, Phi = 0.88). Conclusions The QuAL score is correlated with faculty- and trainee-rated utility of anesthesia EPA feedback, and both faculty and trainees can reliably apply the QuAL score to anesthesia EPA narrative feedback. This tool has the potential to be used for faculty development and program evaluation in competency-based medical education. Other programs could consider replicating our study in their specialty.
Affiliation(s)
- Eugene K Choo
- Department of Anesthesiology, College of Medicine, University of Saskatchewan, Saskatchewan, Canada
- Rob Woods
- Department of Emergency Medicine, College of Medicine, University of Saskatchewan, Saskatchewan, Canada
- Mary Ellen Walker
- Department of Anesthesiology, College of Medicine, University of Saskatchewan, Saskatchewan, Canada
- Jennifer M O'Brien
- Department of Anesthesiology, College of Medicine, University of Saskatchewan, Saskatchewan, Canada
- Teresa M Chan
- Department of Medicine (Division of Emergency Medicine; Division of Education & Innovation), Michael G. DeGroote School of Medicine, Faculty of Health Sciences, McMaster University, and Office of Continuing Professional Development & McMaster Education Research, Innovation, and Theory (MERIT) Program, Faculty of Health Sciences, McMaster University, Ontario, Canada
3
Kogan JR, Dine CJ, Conforti LN, Holmboe ES. Can Rater Training Improve the Quality and Accuracy of Workplace-Based Assessment Narrative Comments and Entrustment Ratings? A Randomized Controlled Trial. Academic Medicine 2023;98:237-247. PMID: 35857396; DOI: 10.1097/acm.0000000000004819.
Abstract
PURPOSE Prior research evaluating workplace-based assessment (WBA) rater training effectiveness has not measured improvement in the quality and accuracy of narrative comments, nor the accuracy of prospective entrustment-supervision ratings. The purpose of this study was to determine whether rater training, using performance dimension and frame-of-reference training, could improve WBA narrative comment quality and accuracy. A secondary aim was to assess the impact on entrustment rating accuracy. METHOD This single-blind, multi-institution, randomized controlled trial of a multifaceted, longitudinal rater training intervention consisted of in-person training followed by asynchronous online spaced learning. In 2018, investigators randomized 94 internal medicine and family medicine physicians involved with resident education. Participants assessed 10 scripted standardized resident-patient videos at baseline and follow-up. Differences in holistic assessment of narrative comment accuracy and specificity, accuracy of individual scenario observations, and entrustment rating accuracy were evaluated with t tests. Linear regression assessed the impact of participant demographics and baseline performance. RESULTS Seventy-seven participants completed the study. At follow-up, the intervention group (n = 41), compared with the control group (n = 36), had higher scores for narrative holistic specificity (2.76 vs 2.31, P < .001, Cohen V = .25), accuracy (2.37 vs 2.06, P < .001, Cohen V = .20), and mean quantity of accurate (6.14 vs 4.33, P < .001), inaccurate (3.53 vs 2.41, P < .001), and overall observations (2.61 vs 1.92, P = .002, Cohen V = .47). In aggregate, the intervention group had more accurate entrustment ratings (58.1% vs 49.7%, P = .006, Phi = .30). Baseline performance was significantly associated with performance on the final assessments. CONCLUSIONS The quality and specificity of narrative comments improved with rater training; the effect was mitigated by inappropriate stringency. Training improved the accuracy of prospective entrustment-supervision ratings, but the effect was more limited. Participants with lower baseline rating skill may benefit most from training.
Affiliation(s)
- Jennifer R Kogan
- J.R. Kogan is associate dean, Student Success and Professional Development, and professor of medicine, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania; ORCID: https://orcid.org/0000-0001-8426-9506
- C Jessica Dine
- C.J. Dine is associate dean, Evaluation and Assessment, and associate professor of medicine, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania; ORCID: https://orcid.org/0000-0001-5894-0861
- Lisa N Conforti
- L.N. Conforti is research associate for milestones evaluation, Accreditation Council for Graduate Medical Education, Chicago, Illinois; ORCID: https://orcid.org/0000-0002-7317-6221
- Eric S Holmboe
- E.S. Holmboe is chief, research, milestones development and evaluation, Accreditation Council for Graduate Medical Education, Chicago, Illinois; ORCID: https://orcid.org/0000-0003-0108-6021
4
Mooney CJ, Pascoe JM, Blatt AE, Lang VJ, Kelly MS, Braun MK, Burch JE, Stone RT. Predictors of faculty narrative evaluation quality in medical school clerkships. Medical Education 2022;56:1223-1231. PMID: 35950329; DOI: 10.1111/medu.14911.
Abstract
INTRODUCTION Narrative approaches to assessment provide meaningful and valid representations of trainee performance. Yet narratives are frequently perceived as vague, nonspecific, and low quality. To date, there is little research examining factors associated with narrative evaluation quality, particularly in undergraduate medical education. The purpose of this study was to examine associations of faculty- and student-level characteristics with the quality of faculty members' narrative evaluations of clerkship students. METHODS The authors reviewed faculty narrative evaluations of 50 students' clinical performance in their inpatient medicine and neurology clerkships, resulting in 165 and 87 unique evaluations in the respective clerkships. The authors evaluated narrative quality using the Narrative Evaluation Quality Instrument (NEQI) and used linear mixed-effects modelling to predict total NEQI score. Explanatory covariates included time to evaluation completion, number of weeks spent with the student, faculty total weeks on service per year, total faculty years in clinical education, student gender, faculty gender, and an interaction term between student and faculty gender. RESULTS Significantly higher narrative evaluation quality was associated with a shorter time to evaluation completion, with NEQI scores decreasing by approximately 0.3 points every 10 days following students' rotations (p = .004). Additionally, women faculty had statistically higher-quality narrative evaluations, with NEQI scores 1.92 points greater than men faculty (p = .012). All other covariates were not significant. CONCLUSIONS The quality of faculty members' narrative evaluations of medical students was associated with time to evaluation completion and faculty gender, but not with faculty experience in clinical education, faculty weeks on service, or the amount of time spent with students. These findings advance understanding of how to improve the quality of narrative evaluations, which is imperative given assessment models that will increase the volume of, and reliance on, narratives.
Affiliation(s)
- Christopher J Mooney
- School of Medicine and Dentistry, University of Rochester, Rochester, New York, USA
- Jennifer M Pascoe
- School of Medicine and Dentistry, University of Rochester, Rochester, New York, USA
- Amy E Blatt
- School of Medicine and Dentistry, University of Rochester, Rochester, New York, USA
- Valerie J Lang
- School of Medicine and Dentistry, University of Rochester, Rochester, New York, USA
- Melanie K Braun
- School of Medicine and Dentistry, University of Rochester, Rochester, New York, USA
- Jaclyn E Burch
- School of Medicine and Dentistry, University of Rochester, Rochester, New York, USA
5
Chakroun M, Dion VR, Ouellet K, Graillon A, Désilets V, Xhignesse M, St-Onge C. Narrative Assessments in Higher Education: A Scoping Review to Identify Evidence-Based Quality Indicators. Academic Medicine 2022;97:1699-1706. PMID: 35612917; DOI: 10.1097/acm.0000000000004755.
Abstract
PURPOSE Narrative comments are increasingly used in assessment to document trainees' performance and to make important decisions about academic progress. However, little is known about how to document the quality of narrative comments, since traditional psychometric analysis cannot be applied. The authors aimed to generate a list of quality indicators for narrative comments, to identify recommendations for writing high-quality narrative comments, and to document factors that influence the quality of narrative comments used in assessments in higher education. METHOD The authors conducted a scoping review according to Arksey & O'Malley's framework. The search strategy yielded 690 articles from 6 databases. Team members screened abstracts for inclusion and exclusion, then extracted numerical and qualitative data based on predetermined categories. Numerical data were used for descriptive analysis. The authors completed the thematic analysis of qualitative data with iterative discussions until they achieved consensus for the interpretation of the results. RESULTS After the full-text review of 213 selected articles, 47 were included. Through the thematic analysis, the authors identified 7 quality indicators, 12 recommendations for writing quality narratives, and 3 factors that influence the quality of narrative comments used in assessment. The 7 quality indicators are (1) describes performance with a focus on particular elements (attitudes, knowledge, skills); (2) provides a balanced message between positive elements and elements needing improvement; (3) provides recommendations to learners on how to improve their performance; (4) compares the observed performance with an expected standard of performance; (5) provides justification for the mark/score given; (6) uses language that is clear and easily understood; and (7) uses a nonjudgmental style. 
CONCLUSIONS Assessors can use these quality indicators and recommendations to write high-quality narrative comments, thus reinforcing the appropriate documentation of trainees' performance, facilitating solid decision making about trainees' progression, and enhancing the impact of narrative feedback for both learners and programs.
Affiliation(s)
- Molk Chakroun
- M. Chakroun is a PhD student, Faculty of Medicine and Health Sciences, Université de Sherbrooke, Sherbrooke, Québec, Canada; ORCID: https://orcid.org/0000-0002-0518-1782
- Vincent R Dion
- V.R. Dion was research assistant, Paul Grand'Maison de la Société des médecins de l'Université de Sherbrooke Research Chair in Medical Education, at the time of this work, and is now a first-year medical student, Université de Sherbrooke, Sherbrooke, Québec, Canada
- Kathleen Ouellet
- K. Ouellet is research coordinator, Centre de pédagogie et des sciences de la santé, Faculty of Medicine and Health Sciences, Université de Sherbrooke, Sherbrooke, Québec, Canada; ORCID: https://orcid.org/0000-0001-9829-151X
- Ann Graillon
- A. Graillon is associate professor, Department of Pediatrics, Faculty of Medicine and Health Sciences, Université de Sherbrooke, Sherbrooke, Québec, Canada; ORCID: https://orcid.org/0000-0003-3677-7113
- Valérie Désilets
- V. Désilets is associate professor, Department of Pediatrics, Faculty of Medicine and Health Sciences, Université de Sherbrooke, Sherbrooke, Québec, Canada; ORCID: https://orcid.org/0000-0001-7399-119X
- Marianne Xhignesse
- M. Xhignesse is full professor, Department of Family and Emergency Medicine, Faculty of Medicine and Health Sciences, Université de Sherbrooke, Sherbrooke, Québec, Canada; ORCID: https://orcid.org/0000-0002-3257-5912
- Christina St-Onge
- C. St-Onge is full professor, Department of Medicine, Faculty of Medicine and Health Sciences, Université de Sherbrooke, and holds the Paul Grand'Maison de la Société des médecins de l'Université de Sherbrooke Research Chair in Medical Education, Sherbrooke, Québec, Canada; ORCID: https://orcid.org/0000-0001-5313-0456
6
Anderson HL, Kurtz J, West DC. Implementation and Use of Workplace-Based Assessment in Clinical Learning Environments: A Scoping Review. Academic Medicine 2021;96:S164-S174. PMID: 34406132; DOI: 10.1097/acm.0000000000004366.
Abstract
PURPOSE Workplace-based assessment (WBA) serves a critical role in supporting competency-based medical education (CBME) by providing assessment data to inform competency decisions and support learning. Many WBA systems have been developed, but little is known about how to effectively implement WBA. Filling this gap is important for creating suitable and beneficial assessment processes that support large-scale use of CBME. As a step toward filling this gap, the authors describe what is known about WBA implementation and use to identify knowledge gaps and future directions. METHOD The authors used Arksey and O'Malley's 6-stage scoping review framework to conduct the review, including: (1) identifying the research question; (2) identifying relevant studies; (3) study selection; (4) charting the data; (5) collating, summarizing, and reporting the results; and (6) consulting with relevant stakeholders. RESULTS In 2019-2020, the authors searched and screened 726 papers for eligibility using defined inclusion and exclusion criteria. One hundred sixty-three met inclusion criteria. The authors identified 5 themes in their analysis: (1) Many WBA tools and programs have been implemented, and barriers are common across fields and specialties; (2) Theoretical perspectives emphasize the need for data-driven implementation strategies; (3) User perceptions of WBA vary and are often dependent on implementation factors; (4) Technology solutions could provide useful tools to support WBA; and (5) Many areas of future research and innovation remain. CONCLUSIONS Knowledge of WBA as an implemented practice to support CBME remains constrained. To remove these constraints, future research should aim to generate generalizable knowledge on WBA implementation and use, address implementation factors, and investigate remaining knowledge gaps.
Affiliation(s)
- Hannah L Anderson
- H.L. Anderson is research associate, Department of Pediatrics, The Children's Hospital of Philadelphia, Philadelphia, Pennsylvania; ORCID: http://orcid.org/0000-0002-9435-1535
- Joshua Kurtz
- J. Kurtz is a first-year resident, Department of Pediatrics, The Children's Hospital of Philadelphia, Philadelphia, Pennsylvania
- Daniel C West
- D.C. West is professor of pediatrics, The Perelman School of Medicine at the University of Pennsylvania, and associate chair for education and senior director of medical education, Department of Pediatrics, The Children's Hospital of Philadelphia, Philadelphia, Pennsylvania; ORCID: http://orcid.org/0000-0002-0909-4213
7
Affiliation(s)
- Liana Puscas
- Liana Puscas, MD, MHS, MA, is Associate Professor, Department of Head and Neck Surgery & Communication Sciences, Duke University School of Medicine
- Jennifer R. Kogan
- Jennifer R. Kogan, MD, is Professor of Medicine, Department of Medicine, University of Pennsylvania Perelman School of Medicine
- Eric S. Holmboe
- Eric S. Holmboe, MD, MACP, FRCP, is Chief Research, Milestone Development, and Evaluation Officer, Accreditation Council for Graduate Medical Education
8
Abstract
Introduction: Faculty development has played a significant role in health professions education over the last 40 years. The goal of this perspective is to present a portrait of faculty development in Medical Teacher since its inception and to highlight emerging trends moving forward. Method: All issues of Medical Teacher were reviewed, using the search terms faculty development, staff development, professional development, or in-service training for faculty. The search yielded 286 results, of which 145 focused specifically on faculty development initiatives, reviews, or frameworks. Findings: This review demonstrated a significant growth in publications related to faculty development in Medical Teacher over the last 40 years, with a primary focus on teaching improvement and traditional approaches to faculty development, including workshops, short courses, and other structured group activities. The international nature of faculty development was also highlighted. Recommendations: Moving forward, it is suggested that we: broaden the scope of faculty development from teaching to academic development; expand our approaches to faculty development to include peer coaching, workplace learning, and communities of practice; utilize a competency-based framework to guide the development of faculty development curricula; support teachers' professional identities through faculty development; focus on organizational development and change; and rigorously promote research and scholarship in faculty development.
Affiliation(s)
- Yvonne Steinert
- Institute of Health Sciences Education, Faculty of Medicine, McGill University, Montreal, Canada
9
Jasemi M, Ahangarzadeh Rezaie S, Hemmati Maslakpak M, Parizad N. Are workplace-based assessment methods (DOPS and Mini-CEX) effective in nursing students' clinical skills? A single-blind randomized, parallel group, controlled trial. Contemporary Nurse 2020;55:565-575. PMID: 32107975; DOI: 10.1080/10376178.2020.1735941.
Abstract
Background: Evaluation of clinical skills is critically important for nursing students. However, the quality of evaluation tools is poor. Objectives: To evaluate the effectiveness of Direct Observation of Procedural Skills (DOPS) and the Mini-Clinical Evaluation Exercise (Mini-CEX) on the clinical skills of nursing students. Methods: This study was conducted among 108 senior nursing students. Mini-CEX and DOPS were utilized to evaluate clinical skills in the intervention group. Results: The mean of students' scores in all five procedures was significantly higher in the intervention group than in the control group. Students' scores for the procedures rose significantly from the first stage of DOPS and Mini-CEX to the third stage. Conclusions: Utilizing DOPS and Mini-CEX to evaluate the clinical skills of nursing students effectively enhances their learning. Implementing such assessment methods promotes students' clinical skills, which ultimately helps them provide high-quality care for their patients.
Affiliation(s)
- Madineh Jasemi
- Faculty of Nursing and Midwifery, Urmia University of Medical Sciences, Urmia, Iran
- Masumeh Hemmati Maslakpak
- Faculty of Nursing and Midwifery, Urmia University of Medical Sciences, Urmia, Iran; Maternal and Childhood Obesity Research Center, Urmia University of Medical Sciences, Urmia, Iran
- Naser Parizad
- Faculty of Nursing and Midwifery, Urmia University of Medical Sciences, Urmia, Iran; Patient Safety Research Center, Urmia University of Medical Sciences, Urmia, Iran
10
Wilbur K. Does faculty development influence the quality of in-training evaluation reports in pharmacy? BMC Medical Education 2017;17:222. PMID: 29157239; PMCID: PMC5697106; DOI: 10.1186/s12909-017-1054-5.
Abstract
BACKGROUND In-training evaluation reports (ITERs) of student workplace-based learning are completed by clinical supervisors across various health disciplines. However, outside of medicine, the quality of submitted workplace-based assessments is largely uninvestigated. This study assessed the quality of ITERs in pharmacy and whether clinical supervisors could be trained to complete higher-quality reports. METHODS A random sample of ITERs submitted in a pharmacy program during 2013-2014 was evaluated. These ITERs served as a historical control (control group 1) for comparison with ITERs submitted in 2015-2016 by clinical supervisors who participated in an interactive faculty development workshop (intervention group) and those who did not (control group 2). Two trained independent raters scored the ITERs using a previously validated nine-item scale assessing report quality, the Completed Clinical Evaluation Report Rating (CCERR). The scoring scale for each item is anchored at 1 ("not at all") and 5 ("exemplary"), with 3 categorized as "acceptable". RESULTS The mean CCERR score for reports completed after the workshop (22.9 ± 3.39) did not significantly improve compared with prospective control group 2 (22.7 ± 3.63, p = 0.84) and was worse than that of historical control group 1 (37.9 ± 8.21, p = 0.001). Mean scores for individual CCERR items were below acceptable thresholds for 5 of the 9 domains in control group 1, including supervisor-documented evidence of specific examples to clearly explain weaknesses and concrete recommendations for student improvement. Mean scores for individual CCERR items were below acceptable thresholds for 6 and 7 of the 9 domains in control group 2 and the intervention group, respectively. CONCLUSIONS This study is the first to use the CCERR to evaluate ITER quality outside of medicine. The findings demonstrate low baseline CCERR scores in a pharmacy program that were not demonstrably changed by a faculty development workshop; strategies are identified to augment future rater training.
Affiliation(s)
- Kerry Wilbur
- College of Pharmacy, Qatar University, PO Box 2713, Doha, Qatar
11
Hauer KE, Nishimura H, Dubon D, Teherani A, Boscardin C. Competency assessment form to improve feedback. The Clinical Teacher 2017;15:472-477. PMID: 29045060; DOI: 10.1111/tct.12726.
Abstract
BACKGROUND In-training evaluation reports are a commonly used assessment method for clinical learners that can characterise the development of competence in essential domains of practice. Strategies to increase the usefulness and specificity of written narrative comments about learner performance in these reports are needed to guide learning. Soliciting narrative comments by competency domain from supervising doctors on in-training evaluation reports could improve the quality of written feedback to students. METHODS This is a pre-post study examining narrative comments derived from assessments of core clerkship students by faculty members and resident supervisors in seven clerkships using two assessment forms in academic years 2013/14 (pre; two comments fields: summative and constructive) and 2014/15 (post; seven comments fields: six competency domains and constructive comments). Using a purposive sample of 60 students based on overall clerkship performance, we conducted content analysis of written comments to compare comment quality based on word count, competencies addressed, and reinforcing or constructive content. Differences between the two forms across these three components of quality were compared using Student's t-tests. RESULTS The revised form elicited more narrative comments in all seven clerkships, with more competencies addressed. The revised form led to a decrease in the proportion of constructive comments about the students' performances. DISCUSSION Structural changes to a medical student assessment form to elicit narrative comments by competency improved some measures of the quality of narrative comments provided by faculty members and residents. Additional study is needed to determine how learners use this information to improve their clinical practice.
Affiliation(s)
- Karen E Hauer
- University of California at San Francisco, San Francisco, California, USA
- Holly Nishimura
- University of California at San Francisco, San Francisco, California, USA
- Diego Dubon
- University of California at Berkeley, Berkeley, California, USA
- Arianne Teherani
- University of California at San Francisco, San Francisco, California, USA
- Christy Boscardin
- University of California at San Francisco, San Francisco, California, USA
12
Mak-van der Vossen M, van Mook W, van der Burgt S, Kors J, Ket JC, Croiset G, Kusurkar R. Descriptors for unprofessional behaviours of medical students: a systematic review and categorisation. BMC Medical Education 2017;17:164. PMID: 28915870; PMCID: PMC5603020; DOI: 10.1186/s12909-017-0997-x.
Abstract
BACKGROUND Developing professionalism is a core task in medical education. Unfortunately, it has remained difficult for educators to identify medical students' unprofessionalism because, among other reasons, there are no commonly adopted descriptors that can be used to document students' unprofessional behaviour. This study aimed to generate an overview of descriptors for unprofessional behaviour based on research evidence of real-life unprofessional behaviours of medical students. METHODS A systematic review was conducted searching PubMed, Ebsco/ERIC, Ebsco/PsycINFO, and Embase.com from inception to 2016. Articles were reviewed for admitted or witnessed unprofessional behaviours of undergraduate medical students. RESULTS The search yielded 11,963 different studies, of which 46 met all inclusion criteria. We found 205 different descriptions of unprofessional behaviours, which were coded into 30 different descriptors and subsequently classified into four behavioural themes: failure to engage, dishonest behaviour, disrespectful behaviour, and poor self-awareness. CONCLUSIONS This overview provides a common language to describe medical students' unprofessional behaviour. The framework of descriptors is proposed as a tool for educators to denominate students' unprofessional behaviours. The behaviours found can have various causes, which should be explored in a discussion with the student about the personal, interpersonal, and/or institutional circumstances in which the behaviour occurred. Explicitly denominating unprofessional behaviour serves two goals: (i) creating a culture in which unprofessional behaviour is acknowledged, and (ii) targeting students who need extra guidance. Both are important to avoid unprofessional behaviour among future doctors.
13. Fielding DW, Regehr G. A Call for an Integrated Program of Assessment. AMERICAN JOURNAL OF PHARMACEUTICAL EDUCATION 2017; 81:77. [PMID: 28630518] [PMCID: PMC5468715] [DOI: 10.5688/ajpe81477]
Abstract
An integrated curriculum that does not incorporate equally integrated assessment strategies is likely to prove ineffective in achieving the desired educational outcomes. We suggest it is time for colleges and schools of pharmacy to re-engineer their approach to assessment. To build the case, we first discuss the challenges leading to the need for curricular developments in pharmacy education. We then turn to the literature that informs how assessment can influence learning, introduce an approach to learning assessment that is being used by several medical education programs, and provide some examples of this approach in operation. Finally, we identify some of the challenges faced in adopting such an integrated approach to assessment and suggest that this is an area ripe with research opportunities for pharmacy educators.
14. Wilbur K, Mousa Bacha R, Abdelaziz S. How does culture affect experiential training feedback in exported Canadian health professional curricula? INTERNATIONAL JOURNAL OF MEDICAL EDUCATION 2017; 8:91-98. [PMID: 28315858] [PMCID: PMC5376492] [DOI: 10.5116/ijme.58ba.7c68]
Abstract
OBJECTIVES To explore feedback processes of Western-based health professional student training curricula conducted in an Arab clinical teaching setting. METHODS This qualitative study employed document analysis of the in-training evaluation reports (ITERs) used by Canadian nursing, pharmacy, respiratory therapy, paramedic, dental hygiene, and pharmacy technician programs established in Qatar. Six experiential training program coordinators were interviewed between February and May 2016 to explore how national cultural differences are perceived to affect feedback processes between students and clinical supervisors. Interviews were recorded, transcribed, and coded according to a priori cultural themes. RESULTS Document analysis found that all programs' ITERs outlined competency items for students to achieve. Clinical supervisors choose a response option corresponding to their judgment of student performance and may provide additional written feedback in the spaces provided. Only one program required formal face-to-face feedback exchange between students and clinical supervisors. Experiential training program coordinators reported that no ITER was expressly culturally adapted, although in some instances modifications were made for differences in scopes of practice between Canada and Qatar. Power distance was recognized by all coordinators, who also identified both student and supervisor reluctance to document potentially negative feedback in ITERs. Collectivism was described as manifesting in more lenient student assessment by clinical supervisors of the same cultural background. Uncertainty avoidance did not appear to affect feedback processes. CONCLUSIONS Our findings suggest that differences in specific cultural dimensions between Qatar and Canada have implications for the feedback process in experiential training, which may be addressed through simple measures to accommodate communication preferences.
15. Yepes-Rios M, Dudek N, Duboyce R, Curtis J, Allard RJ, Varpio L. The failure to fail underperforming trainees in health professions education: A BEME systematic review: BEME Guide No. 42. MEDICAL TEACHER 2016; 38:1092-1099. [PMID: 27602533] [DOI: 10.1080/0142159x.2016.1215414]
Abstract
BACKGROUND Many clinical educators feel unprepared and/or unwilling to report unsatisfactory trainee performance. This systematic review consolidates knowledge from the medical, nursing, and dental literature on the experiences and perceptions of evaluators or assessors with this "failure to fail" phenomenon. METHODS We searched the English-language literature in CINAHL, EMBASE, and MEDLINE from January 2005 to January 2015. Qualitative and quantitative studies were included. Following our review protocol, registered with BEME, reviewers worked in pairs to identify relevant articles. The investigators conducted thematic analysis of the qualitative data reported in these studies. Through several cycles of analysis, discussion, and reflection, the team identified the barriers and enablers to failing a trainee. RESULTS From 5,330 articles, we included 28 publications in the review. The barriers identified were (1) the assessor's professional considerations, (2) the assessor's personal considerations, (3) trainee-related considerations, (4) unsatisfactory evaluator development and evaluation tools, (5) institutional culture, and (6) consideration of available remediation for the trainee. The enablers identified were (1) duty to patients, to society, and to the profession; (2) institutional support, such as backing a failing evaluation, support from colleagues, evaluator development, and strong assessment systems; and (3) opportunities for students after failing. DISCUSSION/CONCLUSIONS The inhibiting and enabling factors in failing an underperforming trainee were common across the professions included in this study, across the 10 years of data, and across the educational continuum. We suggest that these results can inform efforts aimed at addressing the failure-to-fail problem.
16. Apramian T, Cristancho S, Watling C, Ott M, Lingard L. Thresholds of Principle and Preference: Exploring Procedural Variation in Postgraduate Surgical Education. ACADEMIC MEDICINE: JOURNAL OF THE ASSOCIATION OF AMERICAN MEDICAL COLLEGES 2015; 90:S70-6. [PMID: 26505105] [PMCID: PMC5578750] [DOI: 10.1097/acm.0000000000000909]
Abstract
BACKGROUND Expert physicians develop their own ways of doing things. The influence of such practice variation on clinical learning is insufficiently understood. Our grounded theory study explored how residents make sense of, and behave in relation to, the procedural variations of faculty surgeons. METHOD Using a constructivist grounded theory approach, we sampled senior postgraduate surgical residents and combined marginal participant observation in the operating room across 56 surgical cases (146 hours), field interviews (38), and formal interviews (6) to develop a theoretical framework for how residents make sense of, and deal with, procedural variations. Data analysis used constant comparison to iteratively refine the framework and data collection until theoretical saturation was reached. RESULTS The core category of the constructed theory, thresholds of principle and preference, captured how faculty members position some procedural variations as negotiable and others as non-negotiable. The term thresholding was coined to describe residents' daily experience of spotting, mapping, and negotiating their faculty members' thresholds and defending their own emerging thresholds. CONCLUSIONS Thresholds of principle and preference play a key role in workplace-based medical education. Postgraduate medical learners are occupied day to day with thresholding and with attempting to make sense of the procedural variations of faculty. Workplace-based teaching and assessment should account for the integral role of thresholding in shaping learners' development. Future research should explore the nature and impact of thresholding in workplace-based learning beyond the surgical context.
17. May SA, Silva-Fletcher A. Scaffolded Active Learning: Nine Pedagogical Principles for Building a Modern Veterinary Curriculum. JOURNAL OF VETERINARY MEDICAL EDUCATION 2015; 42:332-339. [PMID: 26421513] [DOI: 10.3138/jvme.0415-063r]
Abstract
Veterinary discipline experts unfamiliar with the broader educational literature can find the adoption of an evidence-based approach to curriculum development challenging. However, greater societal and professional demands for achieving and verifying Day One knowledge and skills, together with continued progress in information generation and technology, make it all the more important that the defined period of initial professional training be well used. This article presents and discusses nine pedagogical principles that have been used in modern curricular development in Australia, the United Kingdom, and the United States: (1) outcomes-based curriculum design; (2) valid and reliable assessments; (3) active learning; (4) integrated knowledge for action; (5) a tightly controlled core curriculum; (6) "just-in-time" rather than "just-in-case" knowledge; (7) vertical integration, the spiral curriculum, and sequential skills development; (8) learning skills support; and (9) bridges from classroom to workplace. Crucial to effective educational progress is active learning that embraces the skills required by the modern professional, made possible by tight control of curricular content. In this information age, professionals' ability to source information on a "just-in-time" basis to support high-quality reasoning and decision making is far more important than the memorization of large bodies of increasingly redundant information on a "just-in-case" basis. It is important that those with responsibility for veterinary curriculum design ensure that their programs fully equip the modern veterinary professional for confident entry into the variety of roles in which society needs their skills.