1. Chiel LE, Marcus CH, Gomez AR, Williams K, Buraks O, Fishman M, Angerman J, Winn AS. Peer evaluations in an anonymous and open system: Intern reactions and evaluation analysis. MEDICAL TEACHER 2024; 46:919-924. [PMID: 38065679] [DOI: 10.1080/0142159x.2023.2287985]
Abstract
PURPOSE Feedback and evaluation from peers are fundamental to trainees' professional development but may be uncomfortable to provide non-anonymously. We aimed to understand resident perceptions of anonymous and open written evaluation systems and to analyze the evaluations written in each system. MATERIALS AND METHODS We compared two years of intern peer evaluations at a large United States-based pediatric residency program - the 2018-2019 and 2019-2020 years, during which intern peer evaluations were anonymous and open, respectively. We electronically surveyed interns about their perceptions of peer evaluations and analyzed four aspects of the evaluations themselves: (1) orientation, (2) caliber, (3) Likert-scale ratings, and (4) word count. RESULTS 40 (78%) and 38 (75%) interns participated in the survey in the anonymous and open years, respectively. Respondents reported being more likely to avoid writing constructive comments in the open year. There were more high-caliber comments in the open year. Likert-scale ratings of peers were lower in the open year. Word counts were longer in the open year. CONCLUSIONS While interns expressed more discomfort evaluating peers in an open evaluation system, they wrote longer and more high-caliber comments in an open system than in an anonymous system. Residency programs should consider professional development in writing peer evaluations.
Affiliation(s)
- Laura E Chiel
- Division of Pulmonary Medicine, Department of Pediatrics, Boston Children's Hospital and Harvard Medical School, Boston, Massachusetts, USA
- Carolyn H Marcus
- Division of General Pediatrics, Department of Pediatrics, Boston Children's Hospital and Harvard Medical School, Boston, Massachusetts, USA
- Amanda R Gomez
- Division of Gastroenterology, Hepatology, and Nutrition, Department of Pediatrics, Ann & Robert H. Lurie Children's Hospital of Chicago and Northwestern University Feinberg School of Medicine, Chicago, Illinois, USA
- Kathryn Williams
- Biostatistics and Research Design Center, Institutional Centers for Clinical and Translational Research, Boston Children's Hospital, Boston, Massachusetts, USA
- Michael Fishman
- Division of General Pediatrics, Department of Pediatrics, Boston Children's Hospital and Harvard Medical School, Boston, Massachusetts, USA
- Jessica Angerman
- Boston Combined Residency Program at Boston Children's Hospital and Health Informatics at Northwestern University, Evanston, Illinois, USA
- Ariel S Winn
- Division of General Pediatrics, Department of Pediatrics, Boston Children's Hospital and Harvard Medical School, Boston, Massachusetts, USA
2. Hanmore T, Moon CC, Curtis R, Hopman W, Baxter S. Is time really of the essence? Timeliness of narrative feedback in ophthalmology CBME assessments. MEDICAL TEACHER 2024; 46:705-710. [PMID: 37910021] [DOI: 10.1080/0142159x.2023.2274286]
Abstract
PURPOSE Competency-based medical education relies on a strong program of assessment, and quality comments play a vital role in ensuring its success. The goal of this study was to determine the effect of the timeliness of assessment completion on the quality of the feedback. MATERIALS AND METHODS Using the Quality of Assessment for Learning (QuAL) score, 2478 assessments were evaluated. The assessments included those completed between July 2017 and December 2020 for 18 ophthalmology residents. Spearman correlation, Mann-Whitney U, and Kruskal-Wallis tests were used to assess variations in QuAL scores based on the timeliness of assessment completion. RESULTS The timeliness of assessment completion ranged from 0 to 299 d, with a mean time to completion of 3 d. As the delay increased, the QuAL score decreased. Feedback provided 4, 5, and 14 d post-encounter demonstrated statistically significant differences in QuAL score. Additionally, there was a significant difference in the timeliness of feedback when there was no written comment. CONCLUSIONS This study demonstrates that the timeliness of assessment completion might have an effect on the quality of written feedback. Written feedback should be completed within 14 d of the encounter to optimize quantity and quality.
Affiliation(s)
- Tessa Hanmore
- Department of Ophthalmology, Queen's University, Kingston, Canada
- Department of Physical Medicine and Rehabilitation, Queen's University, Kingston, Canada
- Department of Psychiatry, Queen's University, Kingston, Canada
- Rachel Curtis
- Department of Ophthalmology, Queen's University, Kingston, Canada
- Wilma Hopman
- Department of Public Health Sciences, Queen's University, Kingston, Canada
- Stephanie Baxter
- Department of Ophthalmology, Queen's University, Kingston, Canada
3. Tanaka P, Soo Park Y, Chen CY, Yumul R, Macario A. Domains Influencing Faculty Decisions on the Level of Supervision Required for Anesthesiology EPAs with Analysis of Feedback Comments. JOURNAL OF SURGICAL EDUCATION 2024; 81:741-752. [PMID: 38553368] [DOI: 10.1016/j.jsurg.2024.02.003]
Abstract
OBJECTIVE The purpose of this qualitative study was to examine responses related to entrustment and feedback comments from an assessment tool. DESIGN Qualitative analyses using semi-structured interviews and analysis of narrative comments. SETTING Main hospital OR suite at a large academic medical center. PARTICIPANTS Faculty and residents who work in the OR suite. RESULTS Seven of the 14 domains of the Theoretical Domains Framework were identified as influencing faculty decisions on entrustment: knowledge; skills; intention; memory, attention, and decision processes; environmental context and resources; beliefs about capabilities; and reinforcement. The majority (651/1116; 58.4%) of faculty comments were critical/modest praise and relevant, a pattern consistent across all 6 EPAs. The written-in feedback comments for all 1,116 Web App EPA assessments yielded a total of 1,599 sub-competency-specific responses. These responses were mapped to the core competencies and, at least once, to 13 of the 23 ACGME subcompetencies. CONCLUSIONS The domains identified as influencing faculty decisions on entrustment were knowledge; skills; intention; memory, attention, and decision processes; environmental context and resources; beliefs about capabilities; and reinforcement. Most narrative feedback comments were critical/modest praise and relevant, consistent across each of the EPAs.
Affiliation(s)
- Pedro Tanaka
- Department of Anesthesiology, Perioperative and Pain Medicine, Stanford University School of Medicine, Stanford, California
- Yoon Soo Park
- Department of Medical Education, University of Illinois at Chicago College of Medicine, Chicago, Illinois
- Chien-Yu Chen
- Department of Anesthesiology, Taipei Medical University Hospital, Taipei, Taiwan; Department of Humanities in Medicine, School of Medicine, College of Medicine, Taipei
- Roya Yumul
- Cedars-Sinai Medical Center, Los Angeles, California
- Alex Macario
- Department of Anesthesiology, Perioperative and Pain Medicine, Stanford University School of Medicine, Stanford, California
4. Klein R, Snyder ED, Koch J, Volerman A, Alba-Nguyen S, Julian KA, Thompson V, Ufere NN, Burnett-Bowie SAM, Kumar A, White BAA, Park YS, Palamara K. Analysis of narrative assessments of internal medicine resident performance: are there differences associated with gender or race and ethnicity? BMC MEDICAL EDUCATION 2024; 24:72. [PMID: 38233807] [PMCID: PMC10795394] [DOI: 10.1186/s12909-023-04970-2]
Abstract
BACKGROUND Equitable assessment is critical in competency-based medical education. This study explores differences in key characteristics of qualitative assessments (i.e., narrative comments or assessment feedback) of internal medicine postgraduate resident performance associated with gender and with race and ethnicity. METHODS Narrative comments included in faculty assessments of resident performance from six internal medicine residency programs were analyzed. Content analysis was used to assess two key characteristics of comments - valence (overall positive or negative orientation) and specificity (detailed nature and actionability of comment) - via a blinded, multi-analyst approach. Differences in comment valence and specificity by gender and by race and ethnicity were assessed using multilevel regression, controlling for multiple covariates including quantitative competency ratings. RESULTS Data included 3,383 evaluations with narrative comments by 597 faculty of 698 residents; 45% of comments concerned women residents and 13.2% concerned residents who identified with races and ethnicities underrepresented in medicine. Most comments were moderately specific and positive. Comments about women residents were more positive (estimate 0.06, p = 0.045) but less specific (estimate -0.07, p = 0.002) compared to those about men. Women residents were more likely to receive non-specific, weakly specific, or no comments (adjusted OR 1.29, p = 0.012) and less likely to receive highly specific comments (adjusted OR 0.71, p = 0.003) or comments with specific examples of things done well or areas for growth (adjusted OR 0.74, p = 0.003) than men. Gendered differences in comment specificity and valence were most notable early in training. Comment specificity and valence did not differ with resident race and ethnicity (specificity: estimate 0.03, p = 0.32; valence: estimate -0.05, p = 0.26) or faculty gender (specificity: estimate 0.06, p = 0.15; valence: estimate 0.02, p = 0.54). CONCLUSION There were significant differences in the specificity and valence of qualitative assessments associated with resident gender, with women receiving more praising but less specific and actionable comments. This suggests a lost opportunity for well-rounded assessment feedback, to the disadvantage of women.
Affiliation(s)
- Robin Klein
- Department of Medicine, Division of General Internal Medicine, Emory University School of Medicine, 80 Jesse Hill Jr Dr SE, Atlanta, GA, 30303, USA
- Erin D Snyder
- Department of Medicine, Division of General Internal Medicine, University of Alabama Birmingham School of Medicine, Birmingham, AL, USA
- Jennifer Koch
- Department of Medicine, University of Louisville, Louisville, KY, USA
- Anna Volerman
- Departments of Medicine and Pediatrics, University of Chicago, Chicago, IL, USA
- Sarah Alba-Nguyen
- Department of Medicine, Division of Hospital Medicine, University of California, San Francisco, CA, USA
- Katherine A Julian
- Department of Medicine, Division of General Internal Medicine, University of California, San Francisco, CA, USA
- Vanessa Thompson
- Department of Medicine, Division of General Internal Medicine, University of California, San Francisco, CA, USA
- Nneka N Ufere
- Department of Medicine, Division of Gastroenterology, Massachusetts General Hospital, Boston, MA, USA
- Anshul Kumar
- Massachusetts General Hospital Institute of Health Professions, Boston, MA, USA
- Bobbie Ann A White
- Massachusetts General Hospital Institute of Health Professions, Boston, MA, USA
- Yoon Soo Park
- Department of Medical Education, University of Illinois Chicago, Chicago, IL, USA
- Kerri Palamara
- Department of Medicine, Massachusetts General Hospital, Boston, MA, USA
5. Anderson LM, Rowland K, Edberg D, Wright KM, Park YS, Tekian A. An Analysis of Written and Numeric Scores in End-of-Rotation Forms from Three Residency Programs. PERSPECTIVES ON MEDICAL EDUCATION 2023; 12:497-506. [PMID: 37929204] [PMCID: PMC10624145] [DOI: 10.5334/pme.41]
Abstract
Introduction End-of-Rotation Forms (EORFs) assess resident progress in graduate medical education and are a major component of Clinical Competency Committee (CCC) discussion. Single-institution studies suggest EORFs can detect deficiencies, but both grades and comments skew positive. In this study, we sought to determine whether the EORFs from three programs, spanning multiple specialties and institutions, produced useful information for residents, program directors, and CCCs. Methods Evaluations from three programs were included (Program 1, Institution A, Internal Medicine: n = 38; Program 2, Institution A, Anesthesia: n = 9; Program 3, Institution B, Anesthesia: n = 11). Two independent researchers coded each written comment for relevance (specificity and actionability) and orientation (praise or critical) using a standardized rubric. Numeric scores were analyzed using descriptive statistics. Results 4869 evaluations were collected from the three programs. Of the 77,434 discrete numeric scores, 691 (0.89%) were considered "below expected level." Of the total written comments, 71.2% (2683/3767) were scored as irrelevant, while 3217 (85.4%) were scored positive and 550 (14.6%) critical. When combined, 63.2% (n = 2379) of comments were scored positive and irrelevant, while 6.5% (n = 246) were scored critical and relevant. Discussion Fewer than 1% of numeric scores indicated below-expected performance, and more than 70% of comments were scored irrelevant. Critical, relevant comments were the least frequently observed, a finding consistent across all three programs. The low rate of constructive feedback and the high rate of irrelevant comments are inadequate for a CCC to make informed decisions. The consistency of these findings across programs, specialties, and institutions suggests both local and systemic changes should be considered.
Affiliation(s)
- Lauren M. Anderson
- Department of Family and Preventive Medicine, Rush University, Chicago, Illinois, US
- Kathleen Rowland
- Department of Family and Preventive Medicine, Rush University, Chicago, Illinois, US
- Deborah Edberg
- Department of Family and Preventive Medicine, Rush University, Chicago, Illinois, US
- Katherine M. Wright
- Department of Family & Community Medicine, Northwestern University Feinberg School of Medicine, Chicago, Illinois, US
- Yoon Soo Park
- Department of Medical Education, University of Illinois Chicago, Chicago, Illinois, US
- Ara Tekian
- Department of Medical Education, University of Illinois Chicago, Chicago, Illinois, US
6. Tanaka P, Marty A, Park YS, Kakazu C, Udani A, Pardo M, Sullivan K, Sandhu C, Turner J, Mitchell J, Macario A. Defining entrustable professional activities for first year anesthesiology residents: A Delphi study. J Clin Anesth 2023. [DOI: 10.1016/j.jclinane.2023.111116]
7. Gutierrez M, Wilson K, Bickford B, Yuhas J, Markert R, Burtson KM. Novel In-Training Evaluation Report in an Internal Medicine Residency Program: Improving the Quality of the Narrative Assessment. JOURNAL OF MEDICAL EDUCATION AND CURRICULAR DEVELOPMENT 2023; 10:23821205231206058. [PMID: 37822780] [PMCID: PMC10563452] [DOI: 10.1177/23821205231206058]
Abstract
OBJECTIVE To determine whether incorporating our novel in-training evaluation report (ITER), which prompts each resident to list at least three self-identified learning goals, improved the quality of narrative assessments as measured by the Narrative Evaluation Quality Instrument (NEQI). METHODS A total of 1468 narrative assessments from a single institution from 2017 to 2021 were deidentified, compiled, and sorted into the pre-intervention form arm and post-intervention form arm. Due to limitations in our residency management suite, incorporating learning goals required switching from an electronic form to a hand-delivered paper form. Comments were graded by two research personnel using the NEQI's scale of 0-12, with 12 representing the maximum quality of a comment. The primary outcome of the study was the mean difference in NEQI score between the electronic pre-intervention period and the paper post-intervention period. RESULTS The mean NEQI score for the pre-intervention period was 2.43 ± 3.34, and the mean NEQI score for the post-intervention period was 3.31 ± 1.71, a mean difference of 0.88 (p < 0.001). In the pre-intervention period, 46% of evaluations were submitted without a narrative assessment (scored as zero), while 1% of post-intervention evaluations had no narrative assessment. Internal consistency reliability, as measured by Ebel's intraclass correlation coefficient (ICC), showed high agreement between the two raters (ICC = 0.92). CONCLUSIONS Our findings suggest that implementing a timely, hand-delivered paper ITER that incorporates resident learning goals can lead to overall higher-quality narrative assessments.
Affiliation(s)
- Marc Gutierrez
- Internal Medicine Program, Affiliated with Wright Patterson AFB and Wright State University, Wright-Patterson AFB, OH, USA
- Kelsey Wilson
- Internal Medicine Program, Affiliated with Wright Patterson AFB and Wright State University, Wright-Patterson AFB, OH, USA
- Brant Bickford
- Internal Medicine Program, Affiliated with Wright Patterson AFB and Wright State University, Wright-Patterson AFB, OH, USA
- Joseph Yuhas
- Internal Medicine Program, Affiliated with Wright Patterson AFB and Wright State University, Wright-Patterson AFB, OH, USA
- Ronald Markert
- Department of Internal Medicine and Neurology, Affiliated with Wright State University, Dayton, OH, USA
- Kathryn M Burtson
- Internal Medicine Program, Affiliated with Wright Patterson AFB, Boonshoft School of Medicine and Wright State University, Wright-Patterson AFB, OH 45433, USA
8. The Lake Wobegon effect is real: All general surgery residents appear to be better than average. SURGERY IN PRACTICE AND SCIENCE 2022. [DOI: 10.1016/j.sipas.2022.100134]
9. Waheed S, Maursetter L. Evaluation Evolution: Designing Optimal Evaluations to Enhance Learning in Nephrology Fellowship. Adv Chronic Kidney Dis 2022; 29:526-533. [PMID: 36371117] [DOI: 10.1053/j.ackd.2022.06.006]
Abstract
Evaluations serve as the backbone of any educational program and can be broadly divided into formative and summative evaluations. Formative evaluations are "just in time" evaluations focused on informing the learning process, whereas summative evaluations compare fellows to a preset standard to determine their readiness for unsupervised practice. In nephrology fellowship programs, evaluations assess competence within the framework of the ACGME Milestones 2.0. A variety of learning venues, evaluators, and tools should be incorporated into the measurement process. It is important to determine which milestones can be best assessed in each educational venue to reduce assessment fatigue. Additionally, programs can diversify the evaluators to include nurses, medical students, peers, and program coordinators in addition to faculty, providing a well-rounded assessment of the fellows and sharing the assessment burden. Lastly, the evaluation data should be presented to fellows in a format that can inform goal setting. The evaluation system needs to evolve along with the changes being made in curriculum design. This will help to make fellowship learning effective and efficient.
Affiliation(s)
- Sana Waheed
- Piedmont Nephrology and Internal Medicine, Atlanta, GA
- Laura Maursetter
- Division of Nephrology, Department of Medicine, University of Wisconsin School of Medicine and Public Health, Madison, WI
10. Gordon LB, Zelaya-Floyd M, White P, Hallen S, Varaklis K, Tavakolikashi M. Interprofessional bedside rounding improves quality of feedback to resident physicians. MEDICAL TEACHER 2022; 44:907-913. [PMID: 35373712] [DOI: 10.1080/0142159x.2022.2049735]
Abstract
PURPOSE Obtaining high quality feedback in residency education is challenging, in part due to limited opportunities for faculty observation of authentic clinical work. This study reviewed the impact of interprofessional bedside rounds ('iPACE™') on the length and quality of faculty narrative evaluations of residents as compared to usual inpatient teaching rounds. METHODS Narrative comments from faculty evaluations of Internal Medicine (IM) residents both on usual teaching service as well as the iPACE™ service (spanning 2017-2020) were reviewed and coded using a deductive content analysis approach. RESULTS Six hundred ninety-two narrative evaluations by 63 attendings of 103 residents were included. Evaluations of iPACE™ residents were significantly longer than those of residents on usual teams (109 vs. 69 words, p < 0.001). iPACE™ evaluations contained a higher average occurrence of direct observations of patient/family interactions (0.72 vs. 0.32, p < 0.001), references to interprofessionalism (0.17 vs. 0.05, p < 0.001), as well as specific (3.21 vs. 2.26, p < 0.001), actionable (1.01 vs. 0.69, p < 0.001), and corrective feedback (1.2 vs. 0.88, p = 0.001) per evaluation. CONCLUSIONS This study suggests that the iPACE™ model, which prioritizes interprofessional bedside rounds, had a positive impact on the quantity and quality of feedback, as measured via narrative comments on weekly evaluations.
Affiliation(s)
- Lesley B Gordon
- Tufts University School of Medicine, Boston, MA, USA
- Department of Medicine, Maine Medical Center, Portland, ME, USA
- Patricia White
- Department of Medical Education, Maine Medical Center, Portland, ME, USA
- Sarah Hallen
- Tufts University School of Medicine, Boston, MA, USA
- Division of Geriatrics, Maine Medical Center, Portland, ME, USA
- Kalli Varaklis
- Tufts University School of Medicine, Boston, MA, USA
- Department of Medical Education, Maine Medical Center, Portland, ME, USA
- Department of Obstetrics and Gynecology, Maine Medical Center, Portland, ME, USA
- Motahareh Tavakolikashi
- Department of Medical Education, Maine Medical Center, Portland, ME, USA
- Department of System Science and Industrial Engineering, Binghamton University, Binghamton, NY, USA
11. Concordance of Narrative Comments with Supervision Ratings Provided During Entrustable Professional Activity Assessments. J Gen Intern Med 2022; 37:2200-2207. [PMID: 35710663] [PMCID: PMC9296736] [DOI: 10.1007/s11606-022-07509-1]
Abstract
BACKGROUND Use of EPA-based entrustment-supervision ratings to determine a learner's readiness to assume patient care responsibilities is expanding. OBJECTIVE In this study, we investigated the correlation between narrative comments and supervision ratings assigned during ad hoc assessments of medical students' performance of EPA tasks. DESIGN Data from assessments completed for students enrolled in the clerkship phase over 2 academic years were used to extract a stratified random sample of 100 narrative comments for review by an expert panel. PARTICIPANTS A review panel of faculty with specific expertise related to their roles within the EPA program provided a "gold standard" supervision rating using the comments provided by the original assessor. MAIN MEASURES Interrater reliability (IRR) among members of the review panel and correlation coefficients (CC) between expert ratings and supervision ratings from the original assessors. KEY RESULTS IRR among members of the expert panel ranged from .536 for comments associated with focused history taking to .833 for complete physical exam. CCs (Kendall's W) between panel members' supervision ratings and those of the original assessors for history taking, physical examination, and oral presentation comments were .668, .697, and .735, respectively. The expert panel's supervision ratings correlated most highly with ratings provided during assessments done by master assessors, faculty trained to assess students across clinical contexts. Correlation between supervision ratings provided with the narrative comments at the time of observation and supervision ratings assigned by the expert panel differed by clinical discipline, perhaps reflecting the value placed on, and comfort level with, assessment of the task in a given specialty. CONCLUSIONS To realize the full educational and catalytic effect of EPA assessments, assessors must apply established performance expectations and provide high-quality narrative comments aligned with those criteria.
12. Park YS, Hamstra SJ, Yamazaki K, Holmboe E. Longitudinal Reliability of Milestones-Based Learning Trajectories in Family Medicine Residents. JAMA Netw Open 2021; 4:e2137179. [PMID: 34874406] [PMCID: PMC8652607] [DOI: 10.1001/jamanetworkopen.2021.37179]
Abstract
IMPORTANCE Longitudinal Milestones data reported to the Accreditation Council for Graduate Medical Education (ACGME) can be used to measure the developmental and educational progression of learners. Learning trajectories illustrate the pattern and rate at which learners acquire competencies toward unsupervised practice. OBJECTIVE To investigate the reliability of learning trajectories and patterns of learning progression that can support meaningful intervention and remediation for residents. DESIGN, SETTING, AND PARTICIPANTS This national retrospective cohort study included Milestones data from residents in family medicine, representing 6 semi-annual reporting periods from July 2016 to June 2019. INTERVENTIONS Longitudinal formative assessment using the Milestones assessment system reported to the ACGME. MAIN OUTCOMES AND MEASURES To estimate longitudinal consistency, growth rate reliability (GRR) and growth curve reliability (GCR) for 22 subcompetencies in the ACGME family medicine Milestones were used, incorporating clustering effects at the program level. Latent class growth curve models were used to examine longitudinal learning trajectories. RESULTS This study included Milestones ratings from 3872 residents in 514 programs. The Milestones reporting system reliably differentiated individual longitudinal patterns for formative purposes (mean [SD] GRR, 0.63 [0.03]); there was also evidence of precision for model-based rates of change (mean [SD] GCR, 0.91 [0.02]). Milestones ratings increased significantly across training years and reporting periods (mean [SD] of 0.55 [0.04] Milestones units per reporting period; P < .001); patterns of developmental progress varied by subcompetency. There were 3 or 4 distinct patterns of learning trajectories for each of the 22 subcompetencies. For example, for the professionalism subcompetency, residents were classified into 4 groups of learning trajectories; during the 3-year family medicine training period, trajectories diverged further after postgraduate year (PGY) 1, indicating a potential remediation point between the end of PGY 1 and the beginning of PGY 2 for struggling learners, who represented 16% of learners (620 residents). Similar inferences for learning trajectories were found for practice-based learning and improvement, systems-based practice, and interpersonal and communication skills. Subcompetencies in medical knowledge and patient care demonstrated more consistent patterns of upward growth. CONCLUSIONS AND RELEVANCE These findings suggest that the Milestones reporting system provides reliable longitudinal data for individualized tracking of progress in all subcompetencies. Learning trajectories with supporting reliability evidence could be used to understand residents' developmental progress and tailored for individualized learning plans and remediation.
Affiliation(s)
- Yoon Soo Park
- Harvard Medical School, Boston, Massachusetts
- Massachusetts General Hospital, Boston
- University of Illinois at Chicago College of Medicine, Chicago
- Stanley J. Hamstra
- Accreditation Council for Graduate Medical Education, Chicago, Illinois
- Department of Surgery, University of Toronto, Toronto, Ontario, Canada
- Feinberg School of Medicine, Northwestern University, Chicago, Illinois
- Kenji Yamazaki
- Accreditation Council for Graduate Medical Education, Chicago, Illinois
- Eric Holmboe
- Accreditation Council for Graduate Medical Education, Chicago, Illinois
- Feinberg School of Medicine, Northwestern University, Chicago, Illinois
13. Roshan A, Wagner N, Acai A, Emmerton-Coughlin H, Sonnadara RR, Scott TM, Karimuddin AA. Comparing the Quality of Narrative Comments by Rotation Setting. JOURNAL OF SURGICAL EDUCATION 2021; 78:2070-2077. [PMID: 34301523] [DOI: 10.1016/j.jsurg.2021.06.012]
Abstract
OBJECTIVE To investigate the effect of rotation setting on trainee-directed narrative comments within a Canadian General Surgery residency program. The primary outcome was the quality of narrative comments, evaluated with the McMaster Narrative Comment Rating Scale (MNCRS) across five domains: valence of language, degree of correction versus reinforcement, specificity, actionability, and overall usefulness. As distributed medical education in the postgraduate training context becomes more prevalent, delineating differences in feedback between sites will be imperative, as these differences may affect how narrative comments are interpreted by clinical competency committee (CCC) members. DESIGN, SETTING, AND PARTICIPANTS A retrospective analysis of 2,469 assessments obtained between July 1, 2014 and May 5, 2019 from the General Surgery Residency Program at the University of British Columbia (UBC) was conducted. Narrative comments were rated using the MNCRS, a validated instrument for evaluating the quality of narrative comments. A repeated-measures analysis of variance (ANOVA) was conducted to explore the impact of rotation setting (academic, urban tertiary, distributed urban, and distributed rural) on the quality of narrative feedback. RESULTS Overall, the quality of the narrative comments varied substantially between and within rotation settings. Academic sites tended to provide more actionable comments (p = 0.01) and more corrective versus reinforcing comments compared with other sites (p's < 0.01). Comments produced in the urban tertiary rotation setting were consistently lower in quality across all scale categories compared with other settings (p's < 0.01). CONCLUSION The type of rotation setting has a significant effect on the quality of faculty feedback for trainees. Faculty development on the provision of feedback is necessary regardless of rotation setting and should combine rotation-specific needs with overarching program goals to ensure that trainees and CCCs receive high-quality narrative comments.
Affiliation(s)
- Aishwarya Roshan
- University of British Columbia, Vancouver, British Columbia, Canada.
- Natalie Wagner
- Office of Professional Development & Educational Scholarship, Queen's University, Kingston, Ontario, Canada
- Anita Acai
- Department of Psychology, Neuroscience & Behavior, McMaster University, Hamilton, Ontario, Canada; Department of Psychiatry and Behavioural Neurosciences, McMaster University, Hamilton, Ontario, Canada; Office of Education Science, Department of Surgery, McMaster University, Hamilton, Ontario, Canada
- Heather Emmerton-Coughlin
- Department of Surgery, University of British Columbia, Vancouver, British Columbia, Canada; Department of Surgery, Royal Jubilee Hospital, Victoria, British Columbia, Canada
- Ranil R Sonnadara
- Office of Education Science, Department of Surgery, McMaster University, Hamilton, Ontario, Canada; Department of Surgery, University of Toronto, Toronto, Ontario, Canada
- Tracy M Scott
- Department of Surgery, University of British Columbia, Vancouver, British Columbia, Canada; Department of Surgery, St. Paul's Hospital, Vancouver, British Columbia, Canada
- Ahmer A Karimuddin
- Department of Surgery, University of British Columbia, Vancouver, British Columbia, Canada; Department of Surgery, St. Paul's Hospital, Vancouver, British Columbia, Canada
14
Potisek NM, D'Alessandro DM, Huber JN, Wolbrink TA, Lockspeiser TM, Fromme HB, Ryan MS. An Annotated Bibliography of Key Studies in Medical Education in 2019: Applying the Current Literature to Educational Practice and Scholarship. Acad Pediatr 2021; 21:425-434. [PMID: 33524623 DOI: 10.1016/j.acap.2021.01.010] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 09/08/2020] [Revised: 01/13/2021] [Accepted: 01/24/2021] [Indexed: 12/31/2022]
Abstract
Pediatric medical educators face the dual challenge of remaining up-to-date in the field of pediatrics and in the field of medical education. Given the volume of information published in these two fields, it can be nearly impossible to remain current in both. To facilitate interpretation of the most recent medical education research, the authors compiled an annotated bibliography of medical education literature published in 2019. The purpose of this annotated bibliography was to identify manuscripts with the potential to significantly influence a pediatric educator's practice. Using a two-stage review process, the authors reviewed abstracts from 13 medical education and specialty journals. In both stages, all reviews were completed independently by two different reviewers for each journal. A total of 4,700 abstracts were reviewed and 17 key manuscripts were identified. The authors grouped the key manuscripts into six core themes: bedside teaching, learning climate, bias, learner autonomy, the learner in trouble, and resident competency. This annotated bibliography provides the authors' condensed summary of the medical education manuscripts most likely to influence educational practice for the busy pediatric medical educator.
Affiliation(s)
- Nicholas M Potisek
- Prisma Health Children's Hospital, University of South Carolina School of Medicine (NM Potisek), Greenville, SC; Department of Pediatrics, Wake Forest School of Medicine and Brenner Children's Hospital (NM Potisek), Winston-Salem, NC.
- Jody N Huber
- Department of Pediatrics, University of South Dakota Sanford School of Medicine (JN Huber), Sioux Falls, SD
- Traci A Wolbrink
- Department of Anesthesiology, Critical Care, and Pain Medicine, Boston Children's Hospital (TA Wolbrink), Boston, Mass
- Tai M Lockspeiser
- Department of Pediatrics, University of Colorado School of Medicine (TM Lockspeiser), Aurora, Colo
- H Barrett Fromme
- Department of Pediatrics, University of Chicago Pritzker School of Medicine (HB Fromme), Chicago, Ill
- Michael S Ryan
- Department of Pediatrics, Children's Hospital of Richmond at Virginia Commonwealth University School of Medicine (MS Ryan), Richmond, Va
15
Lin D. Hospitalist Readiness to Assess and Evaluate Resident Progress. South Med J 2021; 114:215-217. [PMID: 33787934 DOI: 10.14423/smj.0000000000001227] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/23/2022]
Affiliation(s)
- Doris Lin
- Department of Medicine, Baylor College of Medicine, Houston, Texas
16
Lloyd RB, Park YS, Tekian A, Marvin R. Understanding Assessment Systems for Clinical Competency Committee Decisions: Evidence from a Multisite Study of Psychiatry Residency Training Programs. ACADEMIC PSYCHIATRY : THE JOURNAL OF THE AMERICAN ASSOCIATION OF DIRECTORS OF PSYCHIATRIC RESIDENCY TRAINING AND THE ASSOCIATION FOR ACADEMIC PSYCHIATRY 2020; 44:734-740. [PMID: 31873920 DOI: 10.1007/s40596-019-01168-x] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/17/2019] [Accepted: 12/11/2019] [Indexed: 06/10/2023]
Abstract
OBJECTIVE This multisite study examines how clinical competency committees in psychiatry synthesize resident assessments to inform milestone decisions, with the aim of providing guidelines that support their use. METHODS The study convened training directors and associate training directors from three psychiatry residency programs to examine the decision-making processes of clinical competency committees. Annual assessments for one second-year and one third-year resident were used in a mock clinical competency committee format to assign milestones for two consecutive reporting periods. The committees reflected on the process, rated how the assessment tools informed the assignment of milestones, and evaluated the overall process. The authors compared reliability of assessment between the mock committees and examined both the reliability of end-of-rotation assessments and their composite scores when combined with clinical skills evaluations. RESULTS End-of-rotation evaluations were the most informative tool for assigning milestones and clarifying discrepancies in performance. In particular, the patient care and medical knowledge competencies were the easiest to rate, while systems-based practice and practice-based learning and improvement were the most difficult. Reliability between committees was low, although a higher number of available evaluations improved reliability in decision-making. CONCLUSIONS The results indicate that the medical knowledge and patient care competencies are the easiest to rate and are informed most by end-of-rotation evaluations and clinical skills examinations. Other evaluation tools may better capture performance on specific subcompetencies beyond workplace-based assessment, or it may be helpful to reconsider how individual subcompetencies are evaluated.
Affiliation(s)
- Ara Tekian
- University of Illinois - Chicago, Chicago, IL, USA