1
Saberzadeh-Ardestani B, Sima AR, Khosravi B, Young M, Mortaz Hejri S. The impact of prior performance information on subsequent assessment: is there evidence of retaliation in an anonymous multisource assessment system? Advances in Health Sciences Education: Theory and Practice 2024; 29:531-550. [PMID: 37488326] [DOI: 10.1007/s10459-023-10267-2]
Abstract
Few studies have engaged in data-driven investigations of the presence, or frequency, of what could be considered retaliatory assessor behaviour in multi-source feedback (MSF) systems. In this study, the authors explored how assessors scored others if, before assessing others, they received their own assessment score. The authors examined assessments from an established MSF system in which all clinical team members - medical students, interns, residents, fellows, and supervisors - anonymously assessed each other. The authors identified assessments in which an assessor (i.e., any team member providing a score to another) gave an aberrant score to another individual. An aberrant score was defined as one that was more than two standard deviations from the assessment receiver's average score. Assessors who gave aberrant scores were categorized according to (1) whether they had received a score from another individual in the MSF system before giving theirs, and (2) whether the score they received was aberrant. The authors used a multivariable logistic regression model to investigate the association between the type of score received and the type of score given by that same individual. In total, 367 unique assessors provided 6091 scores on the performance of 484 unique individuals. Aberrant scores were identified in 250 forms (4.1%). The odds of giving an aberrant score were 2.3 times higher for those who had received a score than for those who had not (odds ratio 2.30, 95% CI: 1.54-3.44, P < 0.001). After adjusting for all other variables, individuals who had received an aberrant score were 2.17 times more likely to give an aberrant score to others than those who had received a non-aberrant score (odds ratio 2.17, 95% CI: 1.39-3.39, P < 0.005). This study documents an association between receiving scores within an anonymous MSF system and providing aberrant scores to team members. These findings suggest that care must be taken in designing MSF systems to protect against potential downstream consequences of providing and receiving anonymous feedback.
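To make the scoring logic concrete, the sketch below shows one way the abstract's definitions could be operationalized in Python: flag a given score as aberrant when it lies more than two standard deviations from the receiver's mean, then fit a multivariable logistic regression of aberrant-score giving on whether, and what kind of, score the assessor had received. This is a minimal sketch under assumed file and column names; it is not the authors' code, and the real model adjusted for additional covariates.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical MSF data: one row per assessment form.
# File and column names are illustrative; the study's actual variables are not published.
df = pd.read_csv("msf_scores.csv")  # columns: assessor_id, receiver_id, score,
                                    # received_score (0/1), received_aberrant (0/1)

# Flag "aberrant" scores given: more than two standard deviations away from the
# receiver's own mean score, mirroring the definition in the abstract.
receiver_stats = df.groupby("receiver_id")["score"].agg(["mean", "std"])
df = df.join(receiver_stats, on="receiver_id")
df["aberrant_given"] = ((df["score"] - df["mean"]).abs() > 2 * df["std"]).astype(int)

# Multivariable logistic regression: did receiving a score (or an aberrant score)
# before assessing others predict giving an aberrant score? The authors' additional
# covariates would be appended to this formula.
model = smf.logit("aberrant_given ~ received_score + received_aberrant", data=df).fit()

# Report odds ratios with 95% confidence intervals, the form used in the abstract.
or_table = np.exp(model.conf_int())
or_table.columns = ["2.5%", "97.5%"]
or_table["OR"] = np.exp(model.params)
print(or_table.round(2))
```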
Affiliation(s)
- Bahar Saberzadeh-Ardestani
- Digestive Disease Research Center, Digestive Disease Research Institute, Tehran University of Medical Sciences, Tehran, Iran
- Ali Reza Sima
- Digestive Disease Research Center, Digestive Disease Research Institute, Tehran University of Medical Sciences, Tehran, Iran
- Bardia Khosravi
- Digestive Disease Research Center, Digestive Disease Research Institute, Tehran University of Medical Sciences, Tehran, Iran
- Meredith Young
- Institute of Health Sciences Education, McGill University, Montreal, QC, Canada
- Sara Mortaz Hejri
- Department of Medical Education, School of Medicine, Tehran University of Medical Sciences, Tehran, Iran.
2
Swails JL, Gadgil MA, Goodrum H, Gupta R, Rahbar MH, Bernstam EV. Role of faculty characteristics in failing to fail in clinical clerkships. Medical Education 2022; 56:634-640. [PMID: 34983083] [DOI: 10.1111/medu.14725]
Abstract
INTRODUCTION In the context of competency-based medical education, poor student performance must be accurately documented to allow learners to improve and to protect the public. However, faculty may be reluctant to provide evaluations that could be perceived as negative, and clerkship directors report that some students pass who should have failed. Student perception of faculty may be considered in faculty promotion, teaching awards, and leadership positions. Faculty of lower academic rank may therefore perceive themselves to be more vulnerable and be less likely to document poor student performance. This study investigated faculty characteristics associated with low performance evaluations (LPEs). METHOD The authors analysed individual faculty evaluations of medical students who completed the third-year clerkships over 15 years, using a generalised mixed regression model to assess the association of evaluator academic rank with the likelihood of an LPE. Other available factors related to experience or academic vulnerability, including faculty age, race, ethnicity, and gender, were incorporated. RESULTS The authors identified 50 120 evaluations by 585 faculty on 3447 students between January 2007 and April 2021. Faculty were more likely to give LPEs at the midpoint evaluation (4.9%) than at the final evaluation (1.6%) (odds ratio [OR] = 4.004, 95% confidence interval [CI] [3.59, 4.53]; p < 0.001). The likelihood of an LPE decreased significantly during the 15-year study period (OR = 0.94 [0.90, 0.97]; p < 0.01). Full professors were significantly more likely to give an LPE than assistant professors (OR = 1.62 [1.08, 2.43]; p = 0.02). Women were more likely to give LPEs than men (OR = 1.88 [1.37, 2.58]; p < 0.01). Other faculty characteristics, including race and experience, were not associated with LPEs. CONCLUSIONS The number of LPEs decreased over time, and senior faculty were more likely than assistant professors to document poor medical student performance.
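The sketch below illustrates how a low performance evaluation could be coded as a binary outcome and how the reported odds ratios arise from a regression on faculty characteristics. The file name, column names, and LPE threshold are assumptions, and a plain logistic regression stands in for the generalised mixed regression model the authors actually used, so it ignores the repeated-measures structure of the data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Illustrative data: one row per faculty evaluation of a student.
# File, column names, and the LPE cutoff are assumptions for this sketch.
evals = pd.read_csv("clerkship_evals.csv")
LPE_CUTOFF = 2  # assumed threshold on the rating scale defining a "low" evaluation
evals["lpe"] = (evals["rating"] <= LPE_CUTOFF).astype(int)

# Simplified fixed-effects logistic regression (assistant professor as the
# reference rank); the study itself used a generalised mixed model.
fit = smf.logit(
    "lpe ~ C(rank, Treatment('assistant')) + C(gender) + C(timepoint) + year",
    data=evals,
).fit()

# Express coefficients as odds ratios with 95% CIs, the form reported in the abstract.
ors = pd.DataFrame({"OR": np.exp(fit.params)})
ci = np.exp(fit.conf_int())
ors["2.5%"], ors["97.5%"] = ci[0], ci[1]
print(ors.round(2))
```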
Affiliation(s)
- Jennifer L Swails
- Department of Internal Medicine, McGovern Medical School, University of Texas Health Science Center at Houston, Houston, Texas, USA
- Meghana A Gadgil
- Division of Hospital Medicine, San Francisco General Hospital, San Francisco, California, USA
- Division of Health Policy and Management, School of Public Health, University of California, Berkeley, Berkeley, California, USA
- Heath Goodrum
- School of Biomedical Informatics, University of Texas Health Science Center at Houston, Houston, Texas, USA
- Resmi Gupta
- Division of Clinical and Translational Sciences, Department of Internal Medicine, McGovern Medical School, Houston, Texas, USA
- Mohammad H Rahbar
- Division of Clinical and Translational Sciences, Department of Internal Medicine, McGovern Medical School, Houston, Texas, USA
- Department of Epidemiology, Human Genetics, and Environmental Sciences, School of Public Health, The University of Texas Health Science Center at Houston, Houston, Texas, USA
- Elmer V Bernstam
- Department of Internal Medicine, McGovern Medical School, University of Texas Health Science Center at Houston, Houston, Texas, USA
- School of Biomedical Informatics, University of Texas Health Science Center at Houston, Houston, Texas, USA
3
Carbajal MM, Dadiz R, Sawyer T, Kane S, Frost M, Angert R. Part 5: Essentials of Neonatal-Perinatal Medicine Fellowship: evaluation of competence and proficiency using Milestones. J Perinatol 2022; 42:809-814. [PMID: 35149835] [DOI: 10.1038/s41372-021-01306-0]
Abstract
The Accreditation Council for Graduate Medical Education (ACGME) Pediatric Subspecialty Milestone Project competencies are used to assess Neonatal-Perinatal Medicine (NPM) fellows. Milestones are longitudinal markers that range from novice to expert (levels 1-5). There is no standard approach to the required biannual evaluation of fellows by fellowship programs, resulting in significant variability among programs regarding procedural experience and exposure to pathology during clinical training. In this paper, we discuss the opportunities that Milestones provide, potential strategies to address challenges, and future directions.
Affiliation(s)
- Melissa M Carbajal
- Department of Pediatrics, Section of Neonatology, Baylor College of Medicine, Houston, TX, USA.
- Rita Dadiz
- Departments of Pediatrics, University of Rochester School of Medicine and Dentistry, Rochester, NY, USA
- Taylor Sawyer
- Department of Pediatrics, University of Washington School of Medicine, Seattle, WA, USA
- Sara Kane
- Department of Pediatrics, Indiana University School of Medicine, Indianapolis, IN, USA
- Mackenzie Frost
- Department of Pediatrics, Perelman School of Medicine at the University of Pennsylvania, Philadelphia, PA, USA
- Robert Angert
- Department of Pediatrics, New York University Grossman School of Medicine, New York, NY, USA
4
Thomas J, Sandefur B, Colletti J, Mullan A, Homme J. Integrating self-assessment into feedback for emergency medicine residents. AEM Education and Training 2022; 6:e10721. [PMID: 35155973] [PMCID: PMC8823156] [DOI: 10.1002/aet2.10721]
Abstract
BACKGROUND In 2013, the Accreditation Council for Graduate Medical Education (ACGME) introduced "Milestones" designed to nationally standardize the assessment of resident physicians. Previous studies have compared resident self-assessment on Milestones with faculty assessment, with varying degrees of agreement, but integration of self-assessment into the formative feedback process has not yet been directly studied. This study uses a conceptual framework of self-determination theory, integrated with concepts from adult learning theory, to compare the perceived quality of feedback given in semiannual reviews before and after the incorporation of resident self-assessment into the feedback process. METHODS This was an interventional study conducted in a single emergency medicine residency program at a major academic hospital over 1 calendar year. Residents first engaged in a semiannual review without self-assessment. At subsequent semiannual reviews, residents completed a Milestone-based self-assessment that was provided to the faculty member assigned to conduct their semiannual review. Residents and faculty completed surveys rating their perception of feedback quality. Two-sided Wilcoxon signed-rank tests were used for the comparison analysis. RESULTS One resident did not self-assess prior to the semiannual review and was excluded, leaving 25 paired surveys for analysis. Residents found feedback after the self-assessment more actionable (p = 0.013), more insightful (p = 0.010), and better overall (p = 0.025). Similarly, faculty felt that the feedback they provided was more actionable (p < 0.001), more insightful (p < 0.001), and better communicated (p < 0.001), and that it led to improved resident understanding of Milestones (p < 0.001); faculty were also more satisfied overall (p < 0.001). Free-text comments explore pre- and postintervention perceptions of feedback. CONCLUSIONS Integration of self-assessment into semiannual reviews improves the feedback given to residents, as perceived by both residents and faculty. Although limited by sample size, the results are promising for a simple, evidence-based intervention to improve feedback during an existing mandated feedback opportunity.
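As a minimal illustration of the comparison analysis, the sketch below runs two-sided Wilcoxon signed-rank tests on hypothetical paired pre/post survey ratings for each survey item. The data layout, file name, and column names are assumptions, not the study's instruments.

```python
import pandas as pd
from scipy import stats

# Paired survey ratings from the same respondents for a semiannual review without
# self-assessment ("pre") and one with it ("post"). File and column names are
# illustrative; they are not taken from the study.
paired = pd.read_csv("feedback_surveys.csv")  # columns: respondent_id, item, pre, post

for item, grp in paired.groupby("item"):
    # Two-sided Wilcoxon signed-rank test on the paired ratings, matching the
    # analysis described in the abstract (e.g., item = "actionable", "insightful").
    res = stats.wilcoxon(grp["pre"], grp["post"], alternative="two-sided")
    print(f"{item}: W = {res.statistic:.1f}, p = {res.pvalue:.3f}")
```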
Affiliation(s)
- Jenna Thomas
- Department of Emergency Medicine, University of Michigan, Ann Arbor, Michigan, USA
- Department of Emergency Medicine, Mayo Clinic, Rochester, Minnesota, USA
- James Colletti
- Department of Emergency Medicine, Mayo Clinic, Rochester, Minnesota, USA
- Aidan Mullan
- Department of Biostatistics and Informatics, Mayo Clinic, Rochester, Minnesota, USA
- James Homme
- Department of Emergency Medicine, Mayo Clinic, Rochester, Minnesota, USA
5
Anaesthesiology residency humanism education program: putting outcomes in context. Br J Anaesth 2020; 124:e211-e212. [DOI: 10.1016/j.bja.2019.12.033]
6
Zisblatt L, Chen F, Dillman D, DiLorenzo AN, MacEachern MP, Miller Juve A, Peoples EE, Grantham AE. Critical Appraisal of Anesthesiology Educational Research for 2017. Cureus 2019; 11:e4838. [PMID: 31410321] [PMCID: PMC6684110] [DOI: 10.7759/cureus.4838]
Abstract
Background Critical appraisals provide a method for establishing the status of an area of study or evaluating the effectiveness of literature within it. The purpose of this study was to review and appraise studies published in 2017 on medical education in anesthesiology and to provide summaries of the highest-quality medical education research articles in the field. Methods Three Ovid MEDLINE databases, Embase.com, the Education Resources Information Center (ERIC), and PsycINFO were searched, followed by a manual review of articles published in the highest-impact-factor journals in both anesthesiology and medical education. Abstracts were double-screened, and quantitative articles were subsequently scored by three randomly assigned raters; qualitative studies were scored by two raters. Two different rubrics were used for scoring quantitative and qualitative studies, and both allowed scores ranging from 1 to 25. Results A total of 864 unique citations were identified through the search criteria. Of those, 62 articles met the inclusion criteria: 59 quantitative and three qualitative. The top 10 papers with the highest scores were reported and summarized. Discussion As this is the first article to critically review the literature on education in anesthesiology, we hope that it will serve as the first manuscript in an annual series that helps individuals involved in anesthesiology education gain an understanding of the highest-quality research in the field. Once this process is repeated, trends can be tracked and serve as a resource for educators and researchers in anesthesiology for years to come.
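For readers curious how the rubric scores might be aggregated into the reported ranking, the short sketch below averages each article's ratings across its assigned raters and lists the ten highest-scoring papers. The file and column names are invented for illustration and are not part of the published methods.

```python
import pandas as pd

# Illustrative rubric scores: one row per (article, rater) pair on the 1-25 rubric
# described in the abstract. File and column names are assumptions.
scores = pd.read_csv("rubric_scores.csv")  # columns: article_id, rater_id, score

# Average each article's scores across its assigned raters (three for quantitative
# studies, two for qualitative) and list the ten highest-scoring papers.
summary = (
    scores.groupby("article_id")["score"]
    .agg(mean_score="mean", n_raters="count")
    .sort_values("mean_score", ascending=False)
)
print(summary.head(10))
```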
Affiliation(s)
- Lara Zisblatt
- Department of Anesthesiology, University of Michigan, Ann Arbor, USA
- Fei Chen
- Department of Anesthesiology, University of North Carolina School of Medicine, Chapel Hill, USA
- Dawn Dillman
- Department of Anesthesiology & Perioperative Medicine, Oregon Health & Science University, Portland, USA
- Amy N DiLorenzo
- Department of Anesthesiology, University of Kentucky, Lexington, USA
- Mark P MacEachern
- Taubman Health Sciences Library, University of Michigan, Ann Arbor, USA
- Amy Miller Juve
- Department of Anesthesiology & Perioperative Medicine, Oregon Health & Science University, Portland, USA
- Emily E Peoples
- Department of Anesthesiology, University of Michigan, Ann Arbor, USA
7
Chen F, Arora H, Zvara DA, Connolly A, Martinelli SM. Anesthesia myTIPreport: A Web-Based Tool for Real-Time Evaluation of Accreditation Council for Graduate Medical Education’s Milestone Competencies and Clinical Feedback to Residents. A A Pract 2019; 12:412-415. [DOI: 10.1213/xaa.0000000000000976]