1. Finn KM, Healy MG, Petrusa ER, Borowsky LH, Begin AS. Providing Delayed, In-Person Collected Feedback From Residents to Teaching Faculty: Lessons Learned. J Grad Med Educ 2024; 16:564-571. doi:10.4300/jgme-d-24-00029.1. PMID: 39416410; PMCID: PMC11475427.
Abstract
Background Teaching faculty request timely feedback from residents to improve their skills. Yet even with anonymous processes, this upward feedback can be difficult to obtain as residents raise concerns about identification and repercussions. Objective To examine faculty perception of the quality and content of feedback from residents after increasing anonymity and sacrificing timeliness. Methods Between 2011 and 2017, an associate program director at a large internal medicine residency program met briefly with each resident individually to obtain feedback about their teaching faculty shortly after their rotation. To improve anonymity, residents were promised their feedback would not be released until they graduated. In 2019, all feedback was collated and released at one time to faculty. We administered 3 timed, voluntary, anonymous, 36-item closed-ended surveys to faculty asking about the content and value, and to self-identify whether the feedback was praise, constructive, or criticism. Results Exactly 189 faculty participated with 140 completing all 3 surveys (74.1% response rate). Faculty reported this feedback content to be of higher quality (81.0%, 81 of 100) and quantity (82.4%, 84 of 102) in contrast to prior feedback. More than 85.4% (88 of 103) of faculty agreed this feedback was more specific. Faculty identified less praise (median 35.0% vs median 50.0%, P<.001) and more negative constructive feedback (median 20.0% vs median 5.0%, P<.001) compared to prior feedback. More than 82% (116 of 140) of faculty reported it would change their behavior, but 3 months after receiving the feedback, only 63.6% (89 of 140) felt the same way (P<.001). Faculty were divided on the necessity of a time delay, with 41.4% (58 of 140) believing it reduced the feedback's value. Despite the delay, 32.1% (45 of 140) felt they could identify residents. Conclusions Offering a substantial delay in feedback delivery increased anonymity and enabled residents to furnish more nuanced and constructive comments; however, faculty opinions diverged on whether this postponement was valuable.
Affiliation(s)
- Kathleen M. Finn, MD, MPhil, is Internal Medicine Residency Program Director and Vice Chair of Education, Tufts Medical Center, and Associate Professor of Medicine, Tufts University School of Medicine, Boston, Massachusetts, USA
- Michael G. Healy, EdD, is Health Professions Education Researcher, Massachusetts General Hospital, and Instructor in Surgery, Harvard Medical School, Boston, Massachusetts, USA
- Emil R. Petrusa, PhD, is a Health Professions Education Researcher, Department of Surgery, Massachusetts General Hospital, and Professor, Harvard Medical School, Boston, Massachusetts, USA
- Leila H. Borowsky, MPH, is Senior Clinical Research Project Manager, Division of General Internal Medicine, Department of Medicine, Massachusetts General Hospital, Harvard Medical School, Boston, Massachusetts, USA
- Arabella S. Begin, MD, DPhil, is Director of Studies in Clinical Medicine, Lincoln College, University of Oxford, Oxford, United Kingdom, and Assistant Professor of Medicine, Harvard Medical School, Boston, Massachusetts, USA
2. Raikhel AV, Starks H, Berger G, Redinger J. Through the Looking Glass: Comparing Hospitalists' and Internal Medicine Residents' Perceptions of Feedback. Cureus 2024; 16:e63459. doi:10.7759/cureus.63459. PMID: 39077307; PMCID: PMC11285250.
Abstract
INTRODUCTION Feedback is critical for resident growth and is most effective when the relationship between residents and attendings is collaborative, with shared expectations for the purpose, timing, and manner of communication for feedback. Within internal medicine, there is limited work exploring the resident and hospitalist perspectives on whether key elements are included in feedback sessions. METHODS We surveyed internal medicine residents and supervising hospitalists at a large urban training program about their perspectives on four components of effective feedback: specificity, timeliness, respectful communication, and actionability. RESULTS We received surveys from 130/184 internal medicine residents and 74/129 hospitalists (71% and 57% response rate, respectively). Residents and hospitalists differed in their perspectives about specificity and timeliness: 54% (70/129) of residents reported they did not receive specific feedback while 90% (65/72) of hospitalists reported they delivered specific feedback (p<0.01), and 33% (43/129) of residents compared with 82% (59/72) of hospitalists perceived feedback as timely (p<0.01). Internal medicine residents and hospitalists reported concordant rates of feedback sessions consisting of a two-way conversation (84%, 109/129; 89%, 64/72, respectively, p=0.82) and that communication was delivered in a respectful manner (95%, 122/129; 97%, 70/72, respectively, p=0.57). CONCLUSIONS We observed discordance between internal medicine residents and supervising hospitalist perspectives on the inclusion of two critical components of feedback: specificity and timing. The hospitalist cohort reported delivering more components of effective feedback than the resident cohort reported receiving. The etiology of this discordance is likely multifactorial and requires further investigation.
Affiliation(s)
- Andrew V Raikhel: Department of Hospital Medicine, VA (Veterans Affairs) Puget Sound Healthcare System, Seattle Division, Seattle, USA; Department of General Internal Medicine, University of Washington, Seattle, USA
- Helene Starks: Department of Bioethics and Humanities, University of Washington, Seattle, USA
- Gabrielle Berger: Department of General Internal Medicine, University of Washington, Seattle, USA
- Jeffrey Redinger: Department of Medicine, University of Washington School of Medicine, Seattle, USA; Department of Hospital Medicine, VA (Veterans Affairs) Puget Sound Healthcare System, Seattle Division, Seattle, USA
3. Dujari S, Scott BJ, Gold CA, Weng Y, Kvam KA. Education Research: Educational Outcomes Associated With the Introduction of a Neurohospitalist Program. Neurol Educ 2024; 3:e200131. doi:10.1212/ne9.0000000000200131. PMID: 39359890; PMCID: PMC11441747.
Abstract
Background and Objectives As the prevalence of the neurohospitalist (NH) practice model grows, understanding its effect on trainee education is imperative. We sought to determine the impact of an academic NH program on neurology resident evaluations of faculty teaching. Methods We performed a retrospective study of faculty teaching evaluations before and after the implementation of a full-time NH service. Primary outcomes were neurology resident evaluations of faculty teaching, which were compared in the pre-NH period (August 2010-July 2014) vs the post-NH period (August 2016-July 2018). In a secondary analysis, we used the difference-in-difference approach to analyze the effect of introducing the NH service on resident evaluation of faculty teaching compared with stroke and neurocritical care faculty controls. We performed an additional descriptive analysis of medical student evaluation of faculty teaching and described Residency In-service Training Exam scores and Accreditation Council for Graduate Medical Education (ACGME) resident survey data before and after the intervention. Results There were 368 resident and 360 medical student evaluations of faculty teaching during the study period. Compared to the pre-NH period, the post-NH period had significantly higher resident evaluations of faculty teaching in 19 of 27 questions of faculty teaching, across 5 of the 6 ACGME core competencies. Within the competencies of patient care, practice-based learning and improvement, and systems-based practice, the NH teaching faculty were rated significantly higher across all questions. In the difference-in-difference model, resident evaluations of faculty teaching following the implementation of the NH service remained significantly improved compared with controls in teaching evidence-based medicine, teaching diagnostic algorithms, and explaining rationale for clinical decisions. Medical student ratings of faculty teaching were unchanged in the pre-NH and the post-NH period. Discussion Neurology residents may benefit from the clinical expertise of NHs and their ability to teach evidence-based practice and role model systems-based practice. Given the central role NHs may play in trainee education, additional focus on both the local and national levels should be dedicated to further developing the teaching skills of NHs.
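For readers unfamiliar with the difference-in-difference approach mentioned in this abstract, the sketch below shows the general form of such an estimate: an interaction between a post-period indicator and an exposure indicator in an ordinary least squares model. The data, variable names, and model specification are invented for illustration and are not taken from the study.

```python
# Hypothetical sketch of a difference-in-difference estimate, not the authors' analysis.
import pandas as pd
import statsmodels.formula.api as smf

# Invented example data: teaching-evaluation scores for NH-exposed and control faculty,
# before and after a neurohospitalist (NH) service is introduced.
df = pd.DataFrame({
    "score": [4.1, 4.0, 4.6, 4.5, 4.2, 4.1, 4.3, 4.2],
    "post":  [0,   0,   1,   1,   0,   0,   1,   1],   # 1 = post-NH period
    "nh":    [1,   1,   1,   1,   0,   0,   0,   0],   # 1 = NH teaching faculty
})

# The coefficient on the post:nh interaction is the difference-in-difference estimate.
model = smf.ols("score ~ post * nh", data=df).fit()
print(model.summary())
```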
Affiliation(s)
- Shefali Dujari, Brian J Scott, Carl A Gold, and Kathryn A Kvam: Department of Neurology & Neurological Sciences, Stanford University, CA
- Yingjie Weng: Quantitative Sciences Unit, Stanford University, CA
4. Tieken KR, Kelly G, Maxwell J, Visenio MR, Reynolds J, Fingeret AL. Feedback Versus Compliments Versus Both in Suturing and Knot Tying Simulation: A Randomized Controlled Trial. J Surg Res 2024; 294:99-105. doi:10.1016/j.jss.2023.09.037. PMID: 37866070.
Abstract
INTRODUCTION Suturing is an expected skill for students graduating from health professions programs. Previous studies investigated student experience with teaching sessions utilizing constructive feedback versus compliments but did not investigate the combination of both. METHODS In this parallel, randomized controlled trial, participants were divided into three groups: feedback (F), compliments (C), or feedback and compliments (FC). Participants received standardized instruction on simple interrupted suturing and two-handed knot-tying, and were videotaped performing this skill before and after the intervention. Performance was evaluated using a validated Objective Structured Assessment of Technical Skills (OSATS) instrument. Participants completed a preintervention and postintervention survey rating their task enjoyment and self-assessment of performance. Analysis was performed to determine differences between and within the groups using Kruskal-Wallis, Wilcoxon rank-sum, and Mann-Whitney U tests. RESULTS A total of 31 students participated: 11 in C, 10 in F, and 10 in FC. The groups had similar preintervention OSATS scores. The F and FC groups demonstrated significant improvement in OSATS score after intervention, whereas group C was not significantly different: F median of 11.25-19.75 points (P = 0.002); FC median of 11.75-21 points (P = 0.002); C median of 13-14 points (P = 0.2266). Between the groups, FC and F both had significant performance improvement compared with C (P < 0.001 and P = 0.001, respectively). The FC group had a significantly higher rating of their enjoyment of the task on the postintervention survey compared with both the C and F groups with a median rating of 10 compared with 8 and 8 (P = 0.0052 and P = 0.0126, respectively). CONCLUSIONS The combination of feedback and compliments was associated with improvement in performance on suturing and knot-tying similar to the feedback-only group. The FC group rated a higher level of enjoyment of the activity compared to feedback or compliments alone.
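As a generic illustration of the nonparametric tests named in this abstract (not the authors' actual analysis or data), a Mann-Whitney U test for a between-group comparison and a Wilcoxon signed-rank test for a paired pre/post comparison might look like the following sketch; all scores below are invented.

```python
# Hypothetical sketch of nonparametric group comparisons; the data are invented.
from scipy import stats

# Invented post-intervention OSATS scores for two groups.
feedback_group    = [19, 20, 21, 18, 22, 20, 19, 21, 20, 18]
compliments_group = [13, 14, 15, 13, 14, 12, 15, 14, 13, 14, 15]

# Two-sided Mann-Whitney U test comparing independent groups.
u_stat, p_value = stats.mannwhitneyu(feedback_group, compliments_group,
                                     alternative="two-sided")
print(f"U = {u_stat:.1f}, p = {p_value:.4f}")

# Wilcoxon signed-rank test for paired pre/post scores within one group (invented data).
pre  = [11, 12, 10, 13, 11, 12, 10, 11, 12, 11]
post = [19, 20, 18, 21, 20, 19, 18, 20, 21, 19]
w_stat, p_paired = stats.wilcoxon(pre, post)
print(f"W = {w_stat:.1f}, p = {p_paired:.4f}")
```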
Affiliation(s)
- Kelsey R Tieken: Department of Surgery, College of Medicine, University of Nebraska Medical Center, Omaha, Nebraska
- Grace Kelly: College of Medicine, University of Nebraska Medical Center, Omaha, Nebraska
- Jessica Maxwell: Department of Surgery, College of Medicine, University of Nebraska Medical Center, Omaha, Nebraska
- Michael R Visenio: Department of Surgery, College of Medicine, University of Nebraska Medical Center, Omaha, Nebraska
- Jannelle Reynolds: Department of Medical Sciences, College of Allied Health Professions, University of Nebraska Medical Center, Omaha, Nebraska
- Abbey L Fingeret: Department of Surgery, College of Medicine, University of Nebraska Medical Center, Omaha, Nebraska
5. Natesan S, Jordan J, Sheng A, Carmelli G, Barbas B, King A, Gore K, Estes M, Gottlieb M. Feedback in Medical Education: An Evidence-based Guide to Best Practices from the Council of Residency Directors in Emergency Medicine. West J Emerg Med 2023; 24:479-494. doi:10.5811/westjem.56544. PMID: 37278777; PMCID: PMC10284500.
Abstract
Within medical education, feedback is an invaluable tool to facilitate learning and growth throughout a physician's training and beyond. Despite the importance of feedback, variations in practice indicate the need for evidence-based guidelines to inform best practices. Additionally, time constraints, variable acuity, and workflow in the emergency department (ED) pose unique challenges to providing effective feedback. This paper outlines expert guidelines for feedback in the ED setting from members of the Council of Residency Directors in Emergency Medicine Best Practices Subcommittee, based on the best evidence available through a critical review of the literature. We provide guidance on the use of feedback in medical education, with a focus on instructor strategies for giving feedback and learner strategies for receiving feedback, and we offer suggestions for fostering a culture of feedback.
Affiliation(s)
- Sreeja Natesan: Duke University, Department of Emergency Medicine, Durham, North Carolina
- Jaime Jordan: David Geffen School of Medicine at UCLA, Department of Emergency Medicine, Los Angeles, California
- Alexander Sheng: Boston Medical Center, Department of Emergency Medicine, Boston, Massachusetts
- Guy Carmelli: University of Massachusetts, Department of Emergency Medicine, Worcester, Massachusetts
- Brian Barbas: Loyola University Chicago, Stritch School of Medicine, Loyola University Medical Center, Department of Emergency Medicine, Maywood, Illinois
- Andrew King: The Ohio State University Wexner Medical Center, Department of Emergency Medicine, Columbus, Ohio
- Katarzyna Gore: Rush University Medical Center, Department of Emergency Medicine, Chicago, Illinois
- Molly Estes: Loma Linda University, Department of Emergency Medicine, Loma Linda, California
- Michael Gottlieb: Rush University Medical Center, Department of Emergency Medicine, Chicago, Illinois
6. Natesan S, Todd B, Hsu RS, Ren RK, Clark R, Jara-Almonta G, Vissoci JRN, Narajeenron K. Novel tool for assessing the quality of feedback in the emergency room (FEED-ER). AEM Educ Train 2021; 5:e10698. doi:10.1002/aet2.10698. PMID: 34859168; PMCID: PMC8616187.
Abstract
BACKGROUND The Accreditation Council for Graduate Medical Education (ACGME) emphasizes constructive feedback as a critical component of residency training. Despite over a decade of using competency-based milestone evaluations, emergency medicine (EM) residency programs lack a standardized method for assessing the quality of feedback. We developed two novel EM-specific feedback surveys to assess the quality of feedback in the ER (FEED-ER) from both the resident and the faculty perspectives. This study aimed to evaluate the surveys' psychometric properties. METHODS We developed FEED-ER using a Likert scale with faculty and resident versions based on the ACGME framework and a literature review. The preliminary survey consisted of 25 questions involving the feedback domains of timeliness, respect/communication, specificity, action plan, and feedback culture. We conducted two modified Delphi rounds involving 17 content experts to ensure respondent understanding of the items, item coherence to corresponding feedback domains, thematic saturation of domain content, and time duration. A multicenter study was conducted at five university-based EDs in the United States and one in Thailand in 2019. We evaluated the descriptive statistics of the frequency of responses, validity evidence, and reliability of FEED-ER. RESULTS A total of 147 EM faculty and 126 EM residents completed the survey. Internal consistency was adequate (Cronbach's alpha > 0.70) and test-retest reliability showed adequate temporal stability (ICC > 0.80) for all dimensions. Content validity was deemed acceptable (CVC > 0.80) for all items. From the 25 items of FEED-ER, 23 loaded into the originally theorized dimensions (with factor loadings > 0.50). Additionally, the five feedback domains were found to be statistically distinct, with correlations between 0.40 and 0.60. The final survey has 23 items. CONCLUSIONS This is the first study to develop and provide validity evidence for an EM-specific feedback tool that has strong psychometric properties, is reproducible and reliable, and provides an objective measure for assessing the quality of feedback in the ED.
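For context on the internal-consistency statistic reported here, the following is a minimal sketch of how Cronbach's alpha can be computed from an item-response matrix; the respondents, items, and values are invented and do not come from the FEED-ER data.

```python
# Hypothetical sketch: computing Cronbach's alpha for a set of Likert items.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: 2-D array, rows = respondents, columns = scale items."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Invented responses: 6 respondents answering 4 items on a 1-5 scale.
responses = np.array([
    [4, 5, 4, 4],
    [3, 4, 3, 3],
    [5, 5, 4, 5],
    [2, 3, 2, 3],
    [4, 4, 5, 4],
    [3, 3, 3, 4],
])

print(f"Cronbach's alpha = {cronbach_alpha(responses):.2f}")
```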
Affiliation(s)
- Sreeja Natesan: Division of Emergency Medicine, Duke University, Durham, North Carolina, USA
- Brett Todd: Department of Emergency Medicine, Oakland University William Beaumont School of Medicine, Beaumont Health, Royal Oak, Michigan, USA
- Robert S. Hsu: Christiana Care Emergency Medicine Residency Program, Sidney Kimmel Medical College, Thomas Jefferson University, Cherry Hill, New Jersey, USA
- Ryan Clark: Department of Emergency Medicine, UMMS-Baystate, Williamsburg, Massachusetts, USA
- Geoff Jara-Almonta: Icahn School of Medicine at Mt Sinai, Department of Emergency Medicine, New York City Health and Hospitals, Elmhurst Hospital Center Department of Emergency Medicine, New York, New York, USA
- Khuansiri Narajeenron: Department of Emergency Medicine, Faculty of Medicine, Chulalongkorn University, King Chulalongkorn Memorial Hospital, The Thai Red Cross Society, Bangkok, Thailand
7. Modak MB, Gray AZ. Junior doctor perceptions of education and feedback on ward rounds. J Paediatr Child Health 2021; 57:96-102. doi:10.1111/jpc.15135. PMID: 32844558.
Abstract
AIM The literature suggests that feedback is wanted and needed in clinical medicine and specifically on ward rounds, yet it is often lacking. This study aimed to examine junior doctor perceptions of education and feedback on ward rounds in one clinical department at a tertiary paediatric hospital and the key influences on these perceptions. METHODS Six semi-structured focus groups were conducted over a period of 9 months, comprising 20 participants (post-graduate year 1-5) in a general medical department of a tertiary paediatric hospital. Qualitative analysis was performed on focus group transcripts using an inductive approach and codes and themes were generated in an iterative fashion with checking of themes between two researchers. RESULTS Feedback experiences were largely positive compared to previous rotations. Three overarching themes were identified which influenced trainee perceptions of education and feedback on ward rounds. These were: consultant influences (e.g. educational engagement), trainee influences (e.g. active seeking of feedback), and structural factors (e.g. organisational constraints). CONCLUSIONS Despite positive feedback experiences, the need to improve feedback for our junior doctors is clear, but how to do this remains challenging when navigating work-learning tensions. The notion of the educational alliance between the consultant and trainee is a potentially useful solution, but it requires deliberate effort and dedicated time to establish given our increasingly complex and busy clinical environments.
Affiliation(s)
- Maitreyi B Modak: Department of General Medicine, Royal Children's Hospital, Melbourne, Victoria, Australia
- Amy Z Gray: Department of General Medicine, Royal Children's Hospital, Melbourne, Victoria, Australia; Department of Paediatrics, The University of Melbourne, Melbourne, Victoria, Australia
8. Nable JV, Bhat R, Isserman J, Smereck J, Wilson M, Maloy K. Learner Perceptions of Electronic End-of-shift Evaluations on An Emergency Medicine Clerkship. AEM Educ Train 2021; 5:75-78. doi:10.1002/aet2.10448. PMID: 33521494; PMCID: PMC7821058.
Abstract
OBJECTIVES As students on an emergency medicine (EM) rotation work with different faculty on a daily basis, EM clerkships often incorporate an end-of-shift evaluation to capture sufficient student performance data. Electronic shift evaluations have been shown to increase faculty completion compliance. This study aimed to examine learner perceptions of their individualized feedback during an EM clerkship following the adoption of an electronic evaluation tool. METHODS This retrospective study examined end-of-rotation surveys that students complete at the conclusion of their EM rotation. Survey respondents used a standard Likert scale (1-5). This study examined responses to the question: "The feedback I received on this rotation was adequate." The study period included the 3 academic years prior to and subsequent to the adoption of an electronic evaluation system (replacing paper end-of-shift evaluations). The primary outcome was the mean Likert score and the secondary outcome was the percentage of students who rated their feedback a "5" or "strongly agree." RESULTS A total of 491 students responded (83.9% response rate) to the survey during the paper evaluation period, while 427 responded (80.7% response rate) in the electronic period. The mean response improved from 4.02 (paper evaluations) to 4.22 (electronic evaluations; mean difference = 0.20, p < 0.05). The percentage of students who responded with a 5 improved (31% with paper evaluations vs. 41% with electronic evaluations, p < 0.05). CONCLUSIONS The adoption of an electronic end-of-shift evaluation system was associated with improved learner perception of their feedback as compared to paper evaluations. Electronic evaluations are a useful tool to gather just-in-time data on learner performance.
Affiliation(s)
- Jose V. Nable: Department of Emergency Medicine, Georgetown University School of Medicine, Washington, DC; MedStar Georgetown University Hospital, Washington, DC
- Rahul Bhat: Department of Emergency Medicine, Georgetown University School of Medicine, Washington, DC; MedStar Georgetown University Hospital, Washington, DC; MedStar Washington Hospital Center, Washington, DC
- Jacob Isserman: Department of Emergency Medicine, Georgetown University School of Medicine, Washington, DC; MedStar Washington Hospital Center, Washington, DC
- Janet Smereck: Department of Emergency Medicine, Georgetown University School of Medicine, Washington, DC; MedStar Georgetown University Hospital, Washington, DC
- Matthew Wilson: Department of Emergency Medicine, Georgetown University School of Medicine, Washington, DC; MedStar Georgetown University Hospital, Washington, DC; MedStar Washington Hospital Center, Washington, DC
- Kevin Maloy: Department of Emergency Medicine, Georgetown University School of Medicine, Washington, DC; MedStar Washington Hospital Center, Washington, DC
9. Innovating Pediatric Emergency Care and Learning Through Interprofessional Briefing and Workplace-Based Assessment: A Qualitative Study. Pediatr Emerg Care 2020; 36:575-581. doi:10.1097/pec.0000000000002218. PMID: 32868619; PMCID: PMC7709919.
Abstract
BACKGROUND Managing pediatric emergencies can be both clinically and educationally challenging with little existing research on how to improve resident involvement. Moreover, nursing input is frequently ignored. We report here on an innovation using interprofessional briefing (iB) and workplace-based assessment (iWBA) to improve the delivery of care, the involvement of residents, and their assessment. METHODS Over a period of 3 months, we implemented an innovation using iB and iWBA for residents providing emergency pediatric care. A constructivist thematic analysis approach was used to collect and analyze data from 4 focus groups (N = 18) with nurses (4), supervisors (5), and 2 groups of residents (4 + 5). RESULTS Residents, supervisors, and nurses all felt that iB had positive impacts on learning, teamwork, and patient care. Moreover, when used, iB seemed to play an important role in enhancing the impact of iWBA. Although iB and iWBA seemed to be accepted and participants described important impacts on emergency department culture, conducting both iB and iWBA could sometimes be challenging as opposed to iB alone, mainly because of time constraints. CONCLUSIONS Interprofessional briefing and iWBA are promising approaches for not only resident involvement and learning during pediatric emergencies but also enhancing team function and patient care. Nursing involvement was pivotal in the success of the innovation, enhancing both care and resident learning.
10. Buckley C, Natesan S, Breslin A, Gottlieb M. Finessing Feedback: Recommendations for Effective Feedback in the Emergency Department. Ann Emerg Med 2020; 75:445-451. doi:10.1016/j.annemergmed.2019.05.016.
11. Chaou CH, Chang YC, Yu SR, Tseng HM, Hsiao CT, Wu KH, Monrouxe LV, Ling RNY. Clinical learning in the context of uncertainty: a multi-center survey of emergency department residents' and attending physicians' perceptions of clinical feedback. BMC Med Educ 2019; 19:174. doi:10.1186/s12909-019-1597-8. PMID: 31142306; PMCID: PMC6542138.
Abstract
BACKGROUND Feedback is an essential part of clinical teaching and learning, yet it is often perceived as unsatisfactory in busy clinical settings. Clinical teachers need to balance the competing demands of clinical duty and feedback provision. The influence of the clinical environment and the mutual relationship between feedback giving and seeking has been inadequately investigated. This study therefore aimed to quantify the adequacy, perceptions, and influential factors of feedback provision during resident training in emergency departments (EDs). METHODS A multicenter online questionnaire study was undertaken. The respondents comprised ED residents and clinical teachers from four teaching hospitals in Taiwan. The questionnaire was developed via an expert panel, and a pilot study ensured validity. Ninety clinical teachers and 54 residents participated. RESULTS The respondents reported that the majority of feedback, which usually lasted 1-5 min, was initiated by the clinical teachers. Feedback satisfaction was significantly lower for the clinical teachers than for the residents (clinical teachers M = 13.8, SD = 1.83; residents M = 15.3, SD = 2.14; p < 0.0001), and positive feedback was provided infrequently in clinical settings (31.1%). Both groups of participants admitted hesitating between providing/seeking feedback and completing clinical work. Being busy, the teachers' clinical abilities, the learners' attitudes, and the relationship between both parties were reported as the most influential factors in feedback provision. CONCLUSION ED clinical feedback provision is often short, circumstantial, and initiated by clinical teachers. Providing or seeking feedback appears to be an important part of clinical learning in the context of uncertainty. The importance of the relationship between the feedback seeker and the provider highlights the interactive, reciprocal nature of clinical feedback provision.
Affiliation(s)
- Chung-Hsien Chaou: Chang-Gung Medical Education Research Centre, Chang Gung Memorial Hospital, Taoyuan, Taiwan; Department of Emergency Medicine, Chang Gung Memorial Hospital, Linkou and Chang Gung University College of Medicine, Taoyuan, Taiwan
- Yu-Che Chang: Chang-Gung Medical Education Research Centre, Chang Gung Memorial Hospital, Taoyuan, Taiwan; Department of Emergency Medicine, Chang Gung Memorial Hospital, Linkou and Chang Gung University College of Medicine, Taoyuan, Taiwan
- Shiuan-Ruey Yu: Chang-Gung Medical Education Research Centre, Chang Gung Memorial Hospital, Taoyuan, Taiwan
- Hsu-Min Tseng: Chang-Gung Medical Education Research Centre, Chang Gung Memorial Hospital, Taoyuan, Taiwan; Department of Health Care Management, Chang Gung University, Taoyuan, Taiwan
- Cheng-Ting Hsiao: Chang-Gung Medical Education Research Centre, Chang Gung Memorial Hospital, Taoyuan, Taiwan; Department of Emergency Medicine, Chang Gung Memorial Hospital, Chiayi and Chang Gung University College of Medicine, Taoyuan, Taiwan
- Kuan-Han Wu: Department of Emergency Medicine, Chang Gung Memorial Hospital, Kaohsiung and Chang Gung University College of Medicine, Taoyuan, Taiwan
- Lynn Valerie Monrouxe: Chang-Gung Medical Education Research Centre, Chang Gung Memorial Hospital, Taoyuan, Taiwan
- Roy Ngerng Yi Ling: Chang-Gung Medical Education Research Centre, Chang Gung Memorial Hospital, Taoyuan, Taiwan
12. Calvillo-Ortiz R, Raven KE, Castillo-Angeles M, Watkins AA, Barrows CE, James BC, Boyd CG, Critchlow JF, Kent TS. Using Individual Clinical Evaluations to Assess Residents' Clinical Judgment; Feasibility and Residents' Perception. J Surg Educ 2018; 75:e31-e37. doi:10.1016/j.jsurg.2018.06.023. PMID: 30292453.
Abstract
OBJECTIVE In surgical training, most assessment tools focus on advanced clinical decision-making or operative skill. Available tools often require significant investment of resources and time. A high stakes oral examination is also required to become board-certified in surgery. We developed Individual Clinical Evaluation (ICE) to evaluate intern-level clinical decision-making in a time- and cost-efficient manner, and to introduce the face-to-face evaluation setting. DESIGN Intern-level ICE consists of 3 clinical scenarios commonly encountered by surgical trainees. Each scenario was developed to be presented in a step-by-step manner to an intern by an attending physician or chief resident. The interns had 17 minutes to complete the face-to-face evaluation and 3 minutes to receive feedback on their performance. The feedback was transcribed and sent to the interns along with incorrect answers. Eighty percent correct was set as a minimum to pass each scenario and continue with the next one. Interns who failed were retested until they passed. Frequency of incorrect response was tracked by question/content area. After passing the 3 scenarios, interns completed a survey about their experience with ICE. SETTING Beth Israel Deaconess Medical Center, an academic tertiary care facility located in Boston, Massachusetts. PARTICIPANTS All first-year surgery residents in our institution (n = 17) were invited to complete a survey. RESULTS All 2016-2017 surgical interns (17) completed the ICEs. A total of $171 (US) was spent conducting the ICEs, and an average of 17 minutes was used to complete each evaluation. In total, 5 different residents failed 1 scenario, with the most common mistake being: failing to stabilize respiration before starting management. After completing the 3 clinical scenarios, more than 90% of respondents agreed or strongly agreed that the evaluations were appropriately challenging for training level, and that the evaluations helped to identify personal strengths and weaknesses in skill and knowledge. The majority believed their knowledge improved as a result of the ICE and felt better prepared to manage these scenarios (88% and 76%, respectively). CONCLUSIONS The ICE is an inexpensive and time efficient way to introduce interns to board type examinations and assess their preparedness for perioperative patient care issues. Common errors were identified which were able to inform educational efforts. ICEs were well accepted by residents. Next steps include extension of the ICE to PGY2 and PGY3 residents.
Affiliation(s)
- Kristin E Raven, Ammara A Watkins, Courtney E Barrows, Benjamin C James, Christopher G Boyd, Jonathan F Critchlow, and Tara S Kent: Department of Surgery, Beth Israel Deaconess Medical Center, Boston, Massachusetts
13. Rees CA, Keating EM, Lukolyo H, Swamy P, Turner TL, Marton S, Sanders J, Mohapi EQ, Kazembe PN, Schutze GE. Host clinical preceptors' perceptions of professionalism among learners completing global health electives. Int J Med Educ 2018; 9:206-212. doi:10.5116/ijme.5b40.6e4b. PMID: 30055101; PMCID: PMC6129158.
Abstract
OBJECTIVES This study aims to gain an understanding of the perceptions of host clinical preceptors in Malawi and Lesotho of the professionalism exhibited by short-term learners from the United States and Canada during short-term global health electives. METHODS Focus group discussions were conducted with 11 host clinical preceptors at two outpatient pediatric HIV clinics in sub-Saharan Africa (Malawi and Lesotho). These clinics host approximately 50 short-term global health learners from the United States and Canada each year. Focus group moderators used open-ended discussion guides to explore host clinical preceptors' perceptions of the professionalism of short-term global health learners. Thematic analysis with an inductive approach was used to identify salient themes from these focus group discussions. RESULTS Eleven of the 18 possible respondents participated in two focus group discussions. Adaptability, eagerness to learn, active listening, gratitude, initiative, and punctuality were cited as professional behaviors among short-term global health learners. Cited unprofessional behaviors included disregard of local clinicians' expertise and unresponsiveness to feedback. Host clinical preceptors described difficulty providing feedback to short-term global health learners and discrepancies between what may be considered professional in their home setting versus in the study settings. Respondents requested pre-departure orientation for learners and their own orientation before hosting learners. CONCLUSIONS Both host clinical preceptors and short-term global health learners should be aware that behaviors that may be considered best practice in one clinical setting may be perceived as unprofessional in another. Future studies to develop a common definition of professionalism during short-term global health electives are merited.
Affiliation(s)
- Chris A. Rees: Division of Emergency Medicine, Boston Children's Hospital, Harvard Medical School, USA
- Elizabeth M. Keating: University of Utah, Department of Pediatric Emergency Medicine, Salt Lake City, UT, USA
- Heather Lukolyo: Department of Pediatrics, Baylor College of Medicine, Houston, TX, USA
- Padma Swamy: Department of Pediatrics, Baylor College of Medicine, Houston, TX, USA
- Teri L. Turner: Department of Pediatrics, Baylor College of Medicine, Houston, TX, USA
- Stephanie Marton: Department of Pediatrics, Baylor College of Medicine, Houston, TX, USA
- Jill Sanders: Baylor College of Medicine Children's Foundation Lesotho, Maseru, Lesotho
- Edith Q. Mohapi: Baylor College of Medicine Children's Foundation Lesotho, Maseru, Lesotho
- Peter N. Kazembe: Baylor College of Medicine Children's Foundation Malawi, Lilongwe, Malawi
- Gordon E. Schutze: Department of Pediatrics, Baylor College of Medicine, Houston, TX, USA
14. Church B, Corser WD, Harrison A. Effectiveness of a Faculty Development Course on Delivering Learner-Centered Feedback Utilizing the Flipped Training Model. Spartan Med Res J 2018; 3:6514. PMID: 33655131; PMCID: PMC7746067.
Abstract
CONTEXT Effective feedback is an important step in the acquisition of residents' clinical skills and a key component of most adult learning strategies. Faculty-resident feedback discussions can facilitate resident self-assessment and reflection on their performance and motivate them to study and ask questions in areas where their knowledge may be evaluated as deficient. The flipped training model approach, a type of blended learning that reverses the traditional learning environment by delivering instructional content outside of the classroom, has garnered increased support within both graduate medical education (GME) and other healthcare disciplines. METHODS The overall purpose of this exploratory pilot project was to examine the pre-post impact of a faculty feedback flipped training model course provided to a convenience sample of community-based faculty learners. After receiving campus IRB approval, the authors developed a set of five primary course goals and objectives. A convenience sample of n = 17 community-based faculty who had completed the entire course were administered a pair of pre and post-course surveys regarding their overall feedback satisfaction and comfort levels for supervising residents. RESULTS In summary, five of the 13 total survey items increased at statistically significant levels from pre-course levels. The majority of qualitative faculty comments also positively evaluated the flipped training model approach. CONCLUSIONS These promising pilot findings suggest that a flipped GME faculty feedback skills training model can help improve faculty learners' satisfaction and confidence as they supervise residents and/or medical students. The impact of these types of flipped training models for GME faculty needs to be more rigorously examined in project settings with larger samples to identify what specific types of curricular activities might prove to be most effective for diverse faculty learners in GME programs across the nation.
Affiliation(s)
- Brandy Church, William D Corser, and Angela Harrison: Michigan State University Statewide Campus System, College of Osteopathic Medicine, East Lansing, MI 48824
15. Bui AH, Guerrier S, Feldman DL, Kischak P, Mudiraj S, Somerville D, Shebeen M, Girdusky C, Leitman IM. Is video observation as effective as live observation in improving teamwork in the operating room? Surgery 2018; 163:1191-1196. doi:10.1016/j.surg.2018.01.019. PMID: 29625708.
Abstract
BACKGROUND Teamwork in the operating room decreases the risk of preventable patient harm. Observation in the operating room allows for evaluation of compliance with best-practice surgical guidelines. This study examines the relative ability of video and live observation to promote operating room teamwork. METHODS Video and audio cameras were installed in 2014 into all operating rooms at an 875-bed, urban teaching hospital. Recordings were chosen at random for review by an internal quality improvement team. Concurrently, live observers were deployed into a random selection of operations. A customized tool was used to evaluate compliance to TeamSTEPPS skills during surgical briefs and debriefs. RESULTS A total of 1,410 briefs were evaluated: 325 (23%) through live observation and 1,085 (77%) through video; 1,398 debriefs were evaluated: 166 (12%) live and 1,232 (88%) video. For briefs, greater compliance was observed under live observation compared to video for recognition of team membership (87% vs 44%, P<.001), anticipation of complex procedural events (61% vs 45%, P<.001), and monitoring of resources (58% vs 42%, P<.001). For debriefs, greater compliance was observed under live observation for determination of team structure (90% vs 60%, P<.001), establishment of a leader (70% vs 51%, P<.001), postoperative planning (77% vs 48%, P<.001), case review and feedback (49% vs 33%, P<.001), team engagement (64% vs 41%, P<.001), and check back (61% vs 46%, P<.001) compared to video. CONCLUSION Video observations may not be as effective as evaluating live performance in promoting teamwork in the OR. Live observation enables immediate feedback, which may improve behavior and decrease barriers to compliance with surgical safety practices.
Affiliation(s)
- Anthony H Bui: Department of Surgery, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Shanice Guerrier: Department of Surgery, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- David L Feldman: Department of Surgery, Icahn School of Medicine at Mount Sinai, New York, NY, USA; Hospitals Insurance Company, New York, NY, USA
- Minimole Shebeen: Department of Surgery, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Cynthia Girdusky: Department of Surgery, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- I Michael Leitman: Department of Surgery, Icahn School of Medicine at Mount Sinai, New York, NY, USA
16. Kohno T, Kohsaka S, Takei Y, Fukuda K, Ozaki Y, Yamashina A. Time Trend in Interest and Satisfaction Towards Clinical Training and Academic Activities Among Early-Career Cardiologists - The Japanese Circulation Society Post-Graduate Training Survey. Circ J 2018; 82:423-429. doi:10.1253/circj.cj-17-0398. PMID: 28883224.
Abstract
BACKGROUND Satisfaction among early-career cardiologists is a key performance metric for cardiovascular (CV) educational programs. To assess the time trend in the interest and activities of early-career cardiologists regarding their training, we conducted web-based surveys in 2011 and 2015. Methods and Results: Early-career cardiologists were defined as physicians who planned to attend Japanese Circulation Society (JCS) annual meetings within 10 years of graduation. A total of 272 and 177 participants completed the survey for the years 2011 and 2015, respectively. Survey questions were designed to obtain core insights into the workplace, research interests, and demographic profile of respondents. Main outcome measures were satisfaction levels with their training program. The overall satisfaction rate for training was lower in 2015 than 2011; this was largely affected by decreases in the rates of satisfaction for valvular heart disease, ischemic heart disease, advanced heart failure, and congenital heart disease. Moreover, satisfaction with CV training was associated with the volume of invasive procedures such as coronary angiography and percutaneous coronary interventions in 2011 but not 2015. CONCLUSIONS Early-career cardiologists' satisfaction with their training decreased during the study period, especially in the field of evolving subspecialties (e.g., valvular heart disease or advanced heart failure), suggesting that prompt reevaluation of the current educational curriculum is needed to properly adapt to progress in cardiology.
Affiliation(s)
- Takashi Kohno: Department of Cardiology, Keio University School of Medicine
- Shun Kohsaka: Department of Cardiology, Keio University School of Medicine
- Keiichi Fukuda: Department of Cardiology, Keio University School of Medicine
- Yukio Ozaki: Department of Cardiology, Fujita Health University Hospital
17. Kornegay JG, Kraut A, Manthey D, Omron R, Caretta-Weyer H, Kuhn G, Martin S, Yarris LM. Feedback in Medical Education: A Critical Appraisal. AEM Educ Train 2017; 1:98-109. doi:10.1002/aet2.10024. PMID: 30051017; PMCID: PMC6001508.
Abstract
OBJECTIVE The objective was to review and critically appraise the medical education literature pertaining to feedback and highlight influential papers that inform our current understanding of the role of feedback in medical education. METHODS A search of the English-language literature querying Education Resources Information Center (ERIC), PsycINFO, PubMed, and Scopus identified 327 feedback-related papers using either quantitative methods (hypothesis-testing or observational investigations of educational interventions), qualitative methods (exploring important phenomena in emergency medicine [EM] education), or review methods. Two reviewers independently screened each category of publications using previously established exclusion criteria. Six reviewers then independently scored the remaining 54 publications using a qualitative, quantitative, or review paper scoring system. Each scoring system consisted of nine criteria and used parallel scoring metrics that have been previously used in critical appraisals of education research. RESULTS Fifty-four feedback papers (25 quantitative studies, 24 qualitative studies, five review papers) met the a priori criteria for inclusion and were reviewed. Eight quantitative studies, nine qualitative studies, and three review papers were ranked highly by the reviewers and are summarized in this article. CONCLUSIONS This inaugural Council of Emergency Medicine Residency Directors Academy critical appraisal highlights 20 feedback in medical education papers that describe the current state of the feedback literature. A summary of current factors that influence feedback effectiveness is discussed, along with practical implications for EM educators and the next steps for research.
Affiliation(s)
- Joshua G. Kornegay: Department of Emergency Medicine, Oregon Health & Science University, Portland, OR
- Aaron Kraut: BerbeeWalsh Department of Emergency Medicine, University of Wisconsin School of Medicine and Public Health, Madison, WI
- David Manthey: Department of Emergency Medicine, Wake Forest University Baptist Health, Winston-Salem, NC
- Rodney Omron: Department of Emergency Medicine, Johns Hopkins School of Medicine, Baltimore, MD
- Holly Caretta-Weyer: Department of Emergency Medicine, Oregon Health & Science University, Portland, OR
- Gloria Kuhn: Department of Emergency Medicine, Wayne State University, Detroit, MI
- Sandra Martin: Department of Emergency Medicine, Wayne State University, Detroit, MI
- Lalena M. Yarris: Department of Emergency Medicine, Oregon Health & Science University, Portland, OR
18. Blankush JM, Shah BJ, Barnett SH, Badran G, Mercado A, Karani R, Muller D, Leitman IM. What are the associations between the quantity of faculty evaluations and residents' perception of quality feedback? Ann Med Surg (Lond) 2017; 16:40-43. doi:10.1016/j.amsu.2017.03.001. PMID: 28386393; PMCID: PMC5369264.
Abstract
Objectives To determine if there is a correlation between the numbers of evaluations submitted by faculty and the perception of the quality of feedback reported by trainees on a yearly survey. Method 147 ACGME-accredited training programs sponsored by a single medical school were included in the analysis. Eighty-seven programs (49 core residency programs and 38 advanced training programs) with 4 or more trainees received ACGME survey summary data for academic year 2013–2014. Resident ratings of satisfaction with feedback were analyzed against the number of evaluations completed per resident during the same period. R-squared correlation analysis was calculated using a Pearson correlation coefficient. Results 177,096 evaluations were distributed to the 87 programs, of which 117,452 were completed (66%). On average, faculty submitted 33.9 evaluations per resident. Core residency programs had a greater number of evaluations per resident than fellowship programs (39.2 vs. 27.1, respectively, p = 0.15). The average score for the "satisfied with feedback after assignment" survey questions was 4.2 (range 2.2–5.0). There was no overall correlation between the number of evaluations per resident and the residents' perception of feedback from faculty based on medical, surgical or hospital-based programs. Conclusions Resident perception of feedback is not correlated with number of faculty evaluations. An emphasis on faculty summative evaluation of resident performance is important but appears to miss the mark as a replacement for on-going, data-driven, structured resident feedback. Understanding the difference between evaluation and feedback is a global concept that is important for all medical educators and learners. Residents and fellows do not perceive that regular evaluations are the same as feedback. The quantity of faculty evaluations does not correlate with the resident perception of quality feedback. A greater emphasis is necessary to instruct faculty on providing regular, timely and data-driven feedback to residents and fellows with specific comments on performance. Faculty summative evaluation of resident performance is important but this is not a replacement for structured feedback.
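As a generic illustration of the correlation analysis this abstract describes (not the study's data), a Pearson correlation coefficient and its R-squared could be computed as in the sketch below; the program-level numbers and variable names are invented.

```python
# Hypothetical sketch of a Pearson correlation / R-squared check, with invented data.
from scipy import stats

# Invented program-level values: mean evaluations per resident and mean
# "satisfied with feedback" survey score for a handful of programs.
evals_per_resident = [12.0, 25.5, 33.9, 41.2, 55.0, 60.3, 28.7, 47.1]
feedback_score     = [4.1,  3.9,  4.3,  4.2,  4.0,  4.4,  4.2,  4.1]

r, p_value = stats.pearsonr(evals_per_resident, feedback_score)
print(f"r = {r:.2f}, R-squared = {r**2:.2f}, p = {p_value:.3f}")
```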
Affiliation(s)
- Joseph M Blankush, Brijen J Shah, Scott H Barnett, Gaber Badran, Amanda Mercado, Reena Karani, David Muller, and I Michael Leitman: Department of Medical Education, Icahn School of Medicine at Mount Sinai, USA
19. Lefroy J, Hawarden A, Gay S, McKinley R. Does formal workplace based assessment add value to informal feedback? MedEdPublish 2017; 6:27. doi:10.15694/mep.2017.000027. PMID: 38406469; PMCID: PMC10885233.
Abstract
Feedback is a key component of learning but effective feedback is a complex process with many aspects. One aspect may be a written summary which is passed to the learner but this may not be valued by learners. We examined the role of written feedback in the feedback process to determine whether it does more than provide a simple summary of the interaction. We conducted a secondary analysis of data gathered for a study of formative workplace based assessment. Interview data from 24 interviews with students and written summaries of workplace based assessments for 23 of them were reanalysed by two researchers who were already immersed in the data and examined all references to verbal, informal feedback and written, formal feedback or the assessment tool used. We found that students valued the verbal feedback discussion highly and that they often considered the written summaries superfluous. We also found that the act of preparing written feedback augmented the feedback discussion and tutors had adopted the language of the formal instrument in the verbal feedback and free text written feedback. What this study adds to existing research is evidence that there may be a secondary faculty development effect of requiring the preparation of written feedback which has served to enhance the educational content of feedback. Although this is not proof of causality (the requirement to provide written feedback alone producing the positive effects), we consider that the likelihood is sufficiently strong to continue the practice.
20
McGhee J, Crowe C, Kraut A, Pierce A, Porat A, Schnapp B, Laurie A, Fu R, Yarris L. Do Emergency Medicine Residents Prefer Resident-initiated or Attending-initiated Feedback? AEM EDUCATION AND TRAINING 2017; 1:15-20. [PMID: 30051003 PMCID: PMC6001489 DOI: 10.1002/aet2.10006] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/09/2016] [Revised: 10/19/2016] [Accepted: 10/28/2016] [Indexed: 06/07/2023]
Abstract
BACKGROUND Real-time feedback is crucial to improving physician performance. Emerging theory suggests that learner-initiated feedback may be more effective in changing performance than attending-initiated feedback, but little is known about how residents perceive resident- versus attending-initiated feedback. OBJECTIVES The primary aim was to determine whether residents' satisfaction varied by learner- versus attending-initiated feedback encounters. We hypothesized that residents would be more satisfied with resident-initiated feedback. METHODS This was a multicenter study of five emergency medicine residency programs. We developed a milestones-based, real-time feedback intervention that provided behavioral anchors for ED subcompetencies and prompted a feedback discussion. The intervention was implemented at all sites for a 3-month period between March and November 2014. Residents were asked to initiate one card per shift; attendings were also invited to initiate encounters and, in either instance, asked to provide one specific suggestion for improvement. Residents confidentially rated their satisfaction with feedback on a 10-point scale. Reported satisfaction was categorized as "very satisfied" (score of 10) versus "less than very satisfied" (score < 10). Logistic regression was used to assess the difference in satisfaction between resident- and attending-initiated feedback, and random effects were used to account for the clustering of repeated ratings within resident and by site. RESULTS A total of 785 cards were collected from the five sites. Participation varied by site (range = 21-487 cards per site). Of the 587 cards with both feedback initiator and satisfaction data, 67% (396/587) were resident-initiated, and the median satisfaction score was 10 (range = 4-10). There was no difference in the odds of being "very satisfied" between resident- and attending-initiated encounters (odds ratio = 1.08, 95% confidence interval = 0.41 to 2.83). CONCLUSIONS Our results suggest that residents are likely to be as satisfied with self-initiated feedback as with attending-initiated feedback. Further research is needed to determine whether resident-initiated feedback is more likely to be incorporated into practice and to result in objective performance improvements.
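As a rough illustration of this kind of analysis, the sketch below fits a logistic model of "very satisfied" on feedback initiator with cluster-robust standard errors by resident, a simpler stand-in for the random-effects model the abstract describes; the data, variable names, and cluster structure are all invented, so the output will not match the study's results.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic encounter-level data: ~67% resident-initiated cards, a binary
# "very satisfied" outcome, and a resident identifier used for clustering.
rng = np.random.default_rng(1)
n = 587
df = pd.DataFrame({
    "resident_id": rng.integers(0, 80, size=n),
    "resident_initiated": rng.binomial(1, 0.67, size=n),
    "very_satisfied": rng.binomial(1, 0.6, size=n),
})

fit = smf.logit("very_satisfied ~ resident_initiated", data=df).fit(
    disp=False, cov_type="cluster", cov_kwds={"groups": df["resident_id"]}
)
odds_ratio = np.exp(fit.params["resident_initiated"])
ci_low, ci_high = np.exp(fit.conf_int().loc["resident_initiated"])
print(f"OR = {odds_ratio:.2f}, 95% CI {ci_low:.2f} to {ci_high:.2f}")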
Affiliation(s)
- Ava Pierce, University of Texas Southwestern, Dallas, TX
- Avital Porat, Icahn School of Medicine at Mount Sinai, New York, NY
- Rongwei Fu, Oregon Health & Science University, Portland, OR
21
Bentley S, Hu K, Messman A, Moadel T, Khandelwal S, Streich H, Noelker J. Are All Competencies Equal in the Eyes of Residents? A Multicenter Study of Emergency Medicine Residents' Interest in Feedback. West J Emerg Med 2016; 18:76-81. [PMID: 28116012 PMCID: PMC5226767 DOI: 10.5811/westjem.2016.11.32626] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/23/2016] [Accepted: 11/30/2016] [Indexed: 11/11/2022] Open
Abstract
Introduction Feedback, particularly real-time feedback, is critical to resident education. The emergency medicine (EM) milestones were developed in 2012 to enhance resident assessment, and many programs use them to provide focused resident feedback. The purpose of this study was to evaluate EM residents' level of interest in receiving real-time feedback on each of the 23 competencies/sub-competencies. Methods This was a multicenter cross-sectional study of EM residents. We surveyed participants on their level of interest in receiving real-time, on-shift feedback on each of the 23 competencies/sub-competencies. Anonymous paper or computerized surveys were distributed to residents at three four-year and three three-year training programs, yielding a total of 223 resident respondents. Residents rated their level of interest in each milestone on a six-point Likert-type response scale. We calculated the average level of interest for each of the 23 sub-competencies, both for all 223 respondents and separately by postgraduate year (PGY) level of training. One-way analyses of variance were performed to determine whether ratings differed by level of training. Results The overall survey response rate across all institutions was 82%. Emergency stabilization had the highest mean rating (5.47/6), while technology had the lowest (3.24/6). However, we observed no differences between levels of training on any of the 23 competencies/sub-competencies. Conclusion Residents appear to ascribe much more value to receiving feedback on domains involving high-risk, challenging procedural skills than to low-risk technical and communication skills. Further studies are necessary to determine whether residents' perceived importance of competencies/sub-competencies should be considered when developing an assessment or feedback program based on these 23 EM competencies/sub-competencies.
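A minimal sketch of the comparison-by-training-level step is shown below: a one-way ANOVA on interest ratings for a single sub-competency, grouped by PGY level. The ratings and group sizes are synthetic and purely illustrative, not drawn from the study.

import numpy as np
from scipy import stats

# Hypothetical 1-6 interest ratings for one sub-competency, by PGY level.
rng = np.random.default_rng(2)
pgy1 = rng.integers(1, 7, size=80)
pgy2 = rng.integers(1, 7, size=75)
pgy3 = rng.integers(1, 7, size=68)

f_stat, p_value = stats.f_oneway(pgy1, pgy2, pgy3)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")  # p >= 0.05 suggests no difference by level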
Affiliation(s)
- Suzanne Bentley, Icahn School of Medicine at Mount Sinai, Elmhurst Hospital Center, Department of Emergency Medicine, Department of Medical Education, New York, New York
- Kevin Hu, Icahn School of Medicine at Mount Sinai, Department of Emergency Medicine, New York, New York
- Anne Messman, Wayne State University School of Medicine, Department of Emergency Medicine, Detroit, Michigan
- Tiffany Moadel, Yale School of Medicine, Department of Emergency Medicine, New Haven, Connecticut
- Sorabh Khandelwal, The Ohio State University, Department of Emergency Medicine, Columbus, Ohio
- Heather Streich, University of Virginia, Department of Emergency Medicine, Charlottesville, Virginia
- Joan Noelker, Washington University in St. Louis, Department of Medicine, Division of Emergency Medicine, St. Louis, Missouri
22
Nadir NA, Bentley S, Papanagnou D, Bajaj K, Rinnert S, Sinert R. Characteristics of Real-Time, Non-Critical Incident Debriefing Practices in the Emergency Department. West J Emerg Med 2016; 18:146-151. [PMID: 28116028 PMCID: PMC5226751 DOI: 10.5811/westjem.2016.10.31467] [Citation(s) in RCA: 19] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/08/2016] [Revised: 10/03/2016] [Accepted: 10/27/2016] [Indexed: 11/11/2022] Open
Abstract
INTRODUCTION Benefits of post-simulation debriefings as an educational and feedback tool have been widely accepted for nearly a decade. Real-time, non-critical incident debriefing is similar to post-simulation debriefing; however, data on its practice in academic emergency departments (EDs) is limited. Although tools such as TeamSTEPPS® (Team Strategies and Tools to Enhance Performance and Patient Safety) suggest debriefing after complicated medical situations, they do not teach debriefing skills suited to this purpose. Anecdotal evidence suggests that real-time debriefings (or non-critical incident debriefings) do in fact occur in academic EDs; however, limited research has been performed on this subject. The objective of this study was to characterize real-time, non-critical incident debriefing practices in emergency medicine (EM). METHODS We conducted this multicenter cross-sectional study of EM attendings and residents at four large, high-volume, academic EM residency programs in New York City. Questionnaire design was based on a Delphi panel and pilot testing with an expert panel. We sought a convenience sample from a potential pool of approximately 300 physicians across the four sites with the goal of obtaining >100 responses. The survey was sent electronically to the four residency listservs with a total of six monthly completion reminder emails. We collected all data electronically and anonymously using SurveyMonkey.com; the data were then entered into and analyzed with Microsoft Excel. RESULTS The data elucidate various characteristics of current real-time debriefing trends in EM, including its definition, perceived benefits and barriers, and the variety of debriefing formats currently being used. CONCLUSION This survey regarding the practice of real-time, non-critical incident debriefings in four major academic EM programs within New York City sheds light on three major, pertinent points: 1) real-time, non-critical incident debriefing definitely occurs in academic emergency practice; 2) in general, real-time debriefing is perceived to be of some value with respect to education, systems, and performance improvement; and 3) although it is practiced by clinicians, most report no formal training in actual debriefing techniques. Further study is needed to clarify the actual benefits of real-time, non-critical incident debriefing, its potential pitfalls, and best practices for its use.
Affiliation(s)
- Nur-Ain Nadir, OSF St. Francis Medical Center, University of Illinois College of Medicine at Peoria, Department of Emergency Medicine, Peoria, Illinois; Kings County Hospital and SUNY Downstate Medical Center, Department of Emergency Medicine, New York, New York
- Suzanne Bentley, Elmhurst Hospital Center, Icahn School of Medicine at Mount Sinai, Department of Emergency Medicine and Department of Medical Education, Elmhurst, New York
- Dimitrios Papanagnou, Thomas Jefferson University Hospital, Department of Emergency Medicine, Philadelphia, Pennsylvania
- Komal Bajaj, Jacobi Medical Center, Department of Obstetrics and Gynecology, New York, New York
- Stephan Rinnert, Kings County Hospital and SUNY Downstate Medical Center, Department of Emergency Medicine, New York, New York
- Richard Sinert, Kings County Hospital and SUNY Downstate Medical Center, Department of Emergency Medicine, New York, New York
23
Perinatal Disparities Between American Indians and Alaska Natives and Other US Populations: Comparative Changes in Fetal and First Day Mortality, 1995-2008. Matern Child Health J 2016; 19:1802-12. [PMID: 25663653 DOI: 10.1007/s10995-015-1694-1] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/24/2022]
Abstract
To compare fetal and first day outcomes of American Indians and Alaska Natives (AIAN) with non-AIAN populations. Singleton deliveries to AIAN and non-AIAN populations were selected from live birth-infant death cohort and fetal death files from 1995-1998 and 2005-2008. We examined changes over time in maternal characteristics of deliveries, as well as disparities and changes in risks of fetal, first day (<24 h), and cause-specific deaths. We calculated descriptive statistics, odds ratios and confidence intervals, and ratios of odds ratios (RORs) to indicate changes in disparities. Along with black mothers, AIANs exhibited the highest proportion of risk factors, including the highest proportion of diabetes in both time periods (4.6% and 6.5%). Over time, late fetal death for AIANs decreased 17% (aOR = 0.83, 95% CI 0.72-0.97), but we noted a 47% increased risk over time for Hispanics (aOR = 1.47, 95% CI 1.40-1.55). Our data indicated no change over time among AIANs for first day death. For AIANs compared to whites, increased risks and disparities persisted for mortality due to congenital anomalies (ROR = 1.28, 95% CI 1.03-1.60). For blacks compared to AIANs, the increased risks of fetal death (2005-2008: aOR = 0.60, 95% CI 0.53-0.68) persisted. For Hispanics, lower risks compared to AIANs persisted, but the protective effect declined over time. Disparities between AIAN and other groups persist, but there is variability by race/ethnicity in improvement of perinatal outcomes over time. Variability in access to care and pregnancy management should be considered in relation to health equity for fetal and early infant deaths.
24
Askew KL, O'Neill JC, Hiestand B, Manthey DE. Combined Versus Detailed Evaluation Components in Medical Student Global Rating Indexes. West J Emerg Med 2015; 16:885-8. [PMID: 26594284 PMCID: PMC4651588 DOI: 10.5811/westjem.2015.9.27257] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/30/2015] [Revised: 09/04/2015] [Accepted: 09/26/2015] [Indexed: 11/24/2022] Open
Abstract
Introduction To determine whether there is any correlation among the 10 individual components of a global rating index on an emergency medicine (EM) student clerkship evaluation form and, if so, whether a weighted average of highly correlated components loses predictive value for the final clerkship grade. Methods This study reviewed medical student evaluations collected over two years of a required fourth-year rotation in EM. Evaluation cards, comprising a detailed 10-part evaluation, were completed after each shift. We used a correlation matrix of evaluation category average scores, calculated with Spearman's rho, to determine whether grades on any of the 10 items were correlated. Results A total of 233 students completed the rotation over the two-year study period. There were strong correlations (>0.80) among the assessment components of medical knowledge, history taking, physical exam, and differential diagnosis. There were also strong correlations among the assessment components of team rapport, patient rapport, and motivation. When these highly correlated components were combined to produce a four-component model, linear regression demonstrated similar predictive power for the final clerkship grade (R2=0.71, CI95=0.65–0.77 and R2=0.69, CI95=0.63–0.76 for the full and reduced models, respectively). Conclusion This study revealed that several components of the evaluation card had a high degree of correlation. Combining the correlated items, a reduced model containing four items (clinical skills, interpersonal skills, procedural skills, and documentation) was as predictive of the student's clinical grade as the full 10-item evaluation. Clerkship directors should be aware of the performance of their individual global rating scales when assessing medical student performance, especially if attempting to measure more than four components.
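The sketch below mirrors this two-step approach on synthetic data: a Spearman correlation matrix across per-shift rating items, then a comparison of R-squared between a full 10-item linear model and a reduced 4-item model predicting the final grade. All column names and values are invented for illustration, so the synthetic correlations and R-squared values will not reproduce those reported above.

import numpy as np
import pandas as pd
import statsmodels.api as sm

# Synthetic per-student average ratings on 10 hypothetical items, plus a
# final grade loosely derived from them.
rng = np.random.default_rng(3)
n = 233
items = ["knowledge", "history", "exam", "ddx", "team_rapport",
         "patient_rapport", "motivation", "procedures", "documentation", "efficiency"]
ratings = pd.DataFrame(rng.normal(4.0, 0.5, size=(n, len(items))), columns=items)
final_grade = ratings.mean(axis=1) + rng.normal(0, 0.2, size=n)

spearman = ratings.corr(method="spearman")  # inspect for pairs with rho > 0.80
print(spearman.round(2))

full = sm.OLS(final_grade, sm.add_constant(ratings)).fit()
reduced = sm.OLS(final_grade,
                 sm.add_constant(ratings[["knowledge", "team_rapport",
                                          "procedures", "documentation"]])).fit()
print(f"full R^2 = {full.rsquared:.2f}, reduced R^2 = {reduced.rsquared:.2f}")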
Affiliation(s)
- Kim L Askew, Wake Forest School of Medicine, Department of Emergency Medicine, Winston-Salem, North Carolina
- James C O'Neill, Wake Forest School of Medicine, Department of Emergency Medicine, Winston-Salem, North Carolina
- Brian Hiestand, Wake Forest School of Medicine, Department of Emergency Medicine, Winston-Salem, North Carolina
- David E Manthey, Wake Forest School of Medicine, Department of Emergency Medicine, Winston-Salem, North Carolina
25
Abstract
BACKGROUND Peer feedback is increasingly being used by residency programs to provide an added dimension to the assessment process. Studies show that peer feedback is useful, uniquely informative, and reliable compared to other types of assessments. Potential barriers to implementation include insufficient training/preparation, negative consequences for working relationships, and a perceived lack of benefit. OBJECTIVE We explored the perceptions of residents involved in peer-to-peer feedback, focusing on factors that influence accuracy, usefulness, and application of the information. METHODS Family medicine residents at the University of Michigan who were piloting an online peer assessment tool completed a brief survey to offer researchers insight into the peer feedback process. Focus groups were conducted to explore residents' perceptions that are most likely to affect giving and receiving peer feedback. RESULTS Survey responses were provided by 28 of 30 residents (93%). Responses showed that peer feedback provided useful (89%, 25 of 28) and unique (89%, 24 of 27) information, yet only 59% (16 of 27) reported that it benefited their training. Focus group participants included 21 of 29 eligible residents (72%). Approaches to improve residents' ability to give and accept feedback included preparatory training, clearly defined goals, standardization, fewer and more qualitatively oriented encounters, 1-on-1 delivery, immediacy of timing, and cultivation of a feedback culture. CONCLUSIONS Residents perceived feedback as important and offered actionable suggestions to enhance accuracy, usefulness, and application of the information shared. The findings can be used to inform residency programs that are interested in creating a meaningful peer feedback process.
26
Educating the next generation of pulmonary fellows in transbronchial needle aspiration. Leading the blind to see. Ann Am Thorac Soc 2015; 11:828-32. [PMID: 24762085 DOI: 10.1513/annalsats.201403-112oi] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/25/2022] Open
Abstract
Transbronchial needle aspiration (TBNA) remains an invaluable diagnostic tool in the evaluation of mediastinal and hilar abnormalities, specifically in the evaluation of patients with lung cancer. Training in TBNA has remained integral in pulmonary fellowship programs, but unfortunately the training methods, volumes, and outcomes have been variable. This has subsequently led to wide variations in practice patterns, diagnostic yield, and operator confidence. The introduction of endobronchial ultrasound-guided TBNA appears to have stimulated a resurgence in training and performance of TBNA. However, with this new technology, many questions have surfaced regarding training methods, volumes, and who should receive training. Within this context, we describe the history, current state, and future directions of the education of TBNA during pulmonary fellowship training.
27
Yarris LM, Jones D, Kornegay JG, Hansen M. The Milestones Passport: A Learner-Centered Application of the Milestone Framework to Prompt Real-Time Feedback in the Emergency Department. J Grad Med Educ 2014; 6:555-60. [PMID: 26279784 PMCID: PMC4535223 DOI: 10.4300/jgme-d-13-00409.1] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 11/07/2013] [Revised: 01/23/2014] [Accepted: 04/14/2014] [Indexed: 11/06/2022] Open
Abstract
BACKGROUND In July 2013, emergency medicine residency programs implemented the Milestone assessment as part of the Next Accreditation System. OBJECTIVE We hypothesized that applying the Milestone framework to real-time feedback in the emergency department (ED) could affect current feedback processes and culture. We describe the development and implementation of a Milestone-based, learner-centered intervention designed to prompt real-time feedback in the ED. METHODS We developed and implemented the Milestones Passport, a feedback intervention incorporating subcompetencies, in our residency program in July 2013. Our primary outcomes were feasibility, including faculty and staff time and costs, number of documented feedback encounters in the first 2 months of implementation, and user-reported time required to complete the intervention. We also assessed learner and faculty acceptability. RESULTS Development and implementation of the Milestones Passport required 10 hours of program coordinator time, 120 hours of software developer time, and 20 hours of faculty time. Twenty-eight residents and 34 faculty members generated 257 Milestones Passport feedback encounters. Most residents and faculty reported that the encounters required fewer than 5 minutes to complete, and 48% (12 of 25) of the residents and 68% (19 of 28) of faculty reported satisfaction with the Milestones Passport intervention. Faculty satisfaction with overall feedback in the ED improved after the intervention (93% versus 54%, P = .003), whereas resident satisfaction with feedback did not change significantly. CONCLUSIONS The Milestones Passport feedback intervention was feasible and acceptable to users; however, learner satisfaction with the Milestone assessment in the ED was modest.
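The headline before/after comparison (faculty satisfaction of 93% versus 54%) can be illustrated with a two-proportion test. The abstract does not state which test the authors used, and the counts below are approximations back-calculated from the reported percentages and the roughly 28 faculty survey completions, so treat this purely as a sketch of the style of comparison.

from statsmodels.stats.proportion import proportions_ztest

# Assumed counts chosen to roughly match the reported 93% vs 54% among ~28
# faculty respondents; invented for illustration, not taken from the study.
satisfied = [26, 15]     # post-intervention vs pre-intervention "satisfied" counts
respondents = [28, 28]

z_stat, p_value = proportions_ztest(count=satisfied, nobs=respondents)
print(f"z = {z_stat:.2f}, p = {p_value:.4f}")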
28
Kohno T, Kohsaka S, Ohshima K, Takei Y, Yamashina A, Fukuda K. Attitudes of early-career cardiologists in Japan about their cardiovascular training programs. Am J Cardiol 2014; 114:629-34. [PMID: 24998089 DOI: 10.1016/j.amjcard.2014.05.046] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 03/17/2014] [Revised: 05/27/2014] [Accepted: 05/27/2014] [Indexed: 11/18/2022]
Abstract
Understanding the perspective of early-career cardiologists is important to design effective responses to the challenges in modern cardiovascular (CV) training programs. We conducted a web-based survey of 272 early-career cardiologists (within 10 postgraduate years) who registered for the 2011 annual Japanese Circulation Society Meeting. Main outcome measures were satisfaction with their training, confidence in their clinical skills, and professional expectations, scaled from 0 to 10. The median training time was 6 years, with 2 years for internal medicine and 4 years for CV disease. Most received their training in university hospitals at some point during their career (79.5%) and were interested in subspecialty training, such as interventional cardiology (38.6%), electrophysiology (15.1%), and advanced heart failure (10.3%); only 9.6% showed interest in general cardiology. The respondents felt comfortable managing common CV conditions such as coronary artery disease (average score 6.3 ± 2.4 on an 11-point Likert scale) but less so for peripheral arterial disease (3.8 ± 2.8), arrhythmias (3.7 ± 2.3), and congenital heart disease (2.9 ± 2.2). Their satisfaction with CV training correlated positively with their clinical proficiency level and was associated with the volume of coronary angiograms, percutaneous coronary interventions, and echocardiograms completed. In conclusion, current young cardiologists have a positive perception of and interest in procedure-based subspecialty training, and their training satisfaction was related to the volume of cardiac procedures. Additional effort is needed to reinforce training in underappreciated subspecialty areas.
Affiliation(s)
- Takashi Kohno, Division of Cardiology, Department of Medicine, Keio University School of Medicine, Tokyo, Japan
- Shun Kohsaka, Division of Cardiology, Department of Medicine, Keio University School of Medicine, Tokyo, Japan
- Kazuki Ohshima, Division of Cardiology, Department of Medicine, Keio University School of Medicine, Tokyo, Japan
- Yasuyoshi Takei, Department of Cardiology, Tokyo Medical University, Tokyo, Japan
- Akira Yamashina, Department of Cardiology, Tokyo Medical University, Tokyo, Japan
- Keiichi Fukuda, Division of Cardiology, Department of Medicine, Keio University School of Medicine, Tokyo, Japan
29
Bounds R, Bush C, Aghera A, Rodriguez N, Stansfield RB, Santen SA. Emergency medicine residents' self-assessments play a critical role when receiving feedback. Acad Emerg Med 2013; 20:1055-61. [PMID: 24127710 DOI: 10.1111/acem.12231] [Citation(s) in RCA: 26] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/25/2013] [Revised: 05/10/2013] [Accepted: 05/17/2013] [Indexed: 11/27/2022]
Abstract
OBJECTIVES Emergency medicine (EM) faculty often aim to improve resident performance by enhancing the quality and delivery of feedback. The acceptance and integration of external feedback is influenced by multiple factors. However, it is interpreted through the "lens" of the learner's own self-assessment. Ideally, following an educational activity with feedback, a learner should be able to generate and act upon specific learning goals to improve performance. Examining the source of generated learning goals, whether from one's self-assessment or from external feedback, might shed light on the factors that lead to improvement and guide educational initiatives. The objective of this study was to determine, using a standardized oral board scenario, the effects that residents' self-assessment and specific faculty feedback have not only on the generation of learning goals but also on the execution of these goals for performance improvement. METHODS In this cross-sectional educational study at four academic programs, 72 senior EM residents participated in a standardized oral board scenario. Following the scenario, residents completed a self-assessment form. Next, examiners used a standardized checklist to provide both positive and negative feedback. Subsequently, residents were asked to generate "SMART" learning goals (specific, measurable, attainable, realistic, and time-bound). The investigators categorized the learning goals as stemming from the residents' self-assessments, feedback, or both. Within 4 weeks, the residents were asked to recall their learning goals and describe any actions taken to achieve those goals. These were grouped into similar categories. Descriptive statistics were used to summarize the data. RESULTS A total of 226 learning goals were initially generated (mean ± SD = 3.1 ± 1.3 per resident). Forty-seven percent of the learning goals were generated by the residents' self-assessments only, while 27% were generated from the feedback alone. Residents who performed poorly on the case incorporated feedback more often than high performers when generating learning goals. Follow-up data collection showed that 62 residents recalled 89 learning goals, of which 52 were acted upon. On follow-up, the numbers of learning goals from self-assessment and feedback were equal (25% each, 13 of 52), while the greatest number of reportedly executed learning goals came from self-assessments and feedback in agreement (40%). CONCLUSIONS Following feedback on an oral board scenario, residents generated the majority of their learning goals from their own self-assessments. At follow-up, however, they recalled more learning goals stemming from feedback, while the largest proportion of learning goals acted upon stemmed from feedback and self-assessments in agreement. This suggests that educators need to incorporate residents' self-assessments into any delivered feedback to have the greatest influence on future learning goals and actions taken to improve performance.
Affiliation(s)
- Richard Bounds, Department of Emergency Medicine, Christiana Care Health System, Newark, DE
- Colleen Bush, Department of Emergency Medicine, Michigan State University, East Lansing, MI
- Amish Aghera, Department of Emergency Medicine, Maimonides Medical Center, New York, NY
- Nestor Rodriguez, Department of Emergency Medicine, University of Wisconsin School of Medicine and Public Health, Madison, WI
- R. Brent Stansfield, Department of Emergency Medicine, University of Michigan Medical School, Ann Arbor, MI
- Sally A. Santen, Department of Emergency Medicine, University of Michigan Medical School, Ann Arbor, MI
30
A Needs Assessment of Musculoskeletal Fellowship Training: A Survey of Practicing Musculoskeletal Radiologists. AJR Am J Roentgenol 2013; 200:732-40. [DOI: 10.2214/ajr.12.9105] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022]
31
Fisher J, Lin M, Coates WC, Kuhn GJ, Farrell SE, Maggio LA, Shayne P. Critical appraisal of emergency medicine educational research: the best publications of 2011. Acad Emerg Med 2013; 20:200-8. [PMID: 23406080 DOI: 10.1111/acem.12070] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/26/2012] [Revised: 08/21/2012] [Accepted: 08/30/2012] [Indexed: 11/29/2022]
Abstract
OBJECTIVES The objective was to critically appraise and highlight medical education research studies published in 2011 that were methodologically superior and whose outcomes were pertinent to teaching and education in emergency medicine (EM). METHODS A search of the English language literature in 2011 querying PubMed, Scopus, Education Resources Information Center (ERIC), and PsycINFO identified EM studies that used hypothesis-testing or observational investigations of educational interventions. Six reviewers independently ranked all publications based on 10 criteria, including four related to methodology, that were chosen a priori to standardize evaluation by reviewers. This method was used previously to appraise medical education research published in 2008, 2009, and 2010. RESULTS Forty-eight educational research papers were identified. Compared with 2008 through 2010, the number of published educational research papers meeting the criteria has increased steadily, from 30 to 36 to 41, and now to 48. Five medical education research studies met the a priori criteria for inclusion as exemplary and are reviewed and summarized in this article. The number of funded studies remained fairly stable across these years, at 13 (2008), 16 (2009), 9 (2010), and 13 (2011). As in past years, research involving the use of technology accounted for almost half (n = 22) of the publications. Observational study designs accounted for 28 of the papers, while nine studies featured an experimental design. CONCLUSIONS Forty-eight EM educational studies published in 2011 met the criteria and were identified. This critical appraisal reviews and highlights five studies that met a priori quality indicators. Current trends and common methodologic pitfalls in the 2011 papers are noted.
Affiliation(s)
- Jonathan Fisher, Department of Emergency Medicine, Beth Israel Deaconess Medical Center, Boston, MA
- Michelle Lin, Department of Emergency Medicine, University of California at San Francisco, San Francisco, CA
- Wendy C. Coates, Department of Emergency Medicine, Harbor-UCLA Medical Center, University of California, Los Angeles-David Geffen School of Medicine, and Los Angeles Biomedical Research Institute at Harbor-UCLA, Los Angeles, CA
- Gloria J. Kuhn, Department of Emergency Medicine, Wayne State University, Farmington Hills, MI
- Susan E. Farrell, Office of Graduate Medical Education, Partners Healthcare System; Center for Teaching and Learning, Harvard Medical School; and Department of Emergency Medicine, Brigham and Women's Hospital, Boston, MA
- Lauren A. Maggio, Lane Medical Library, Stanford University School of Medicine, Stanford, CA
- Philip Shayne, Department of Emergency Medicine, Emory University School of Medicine, Atlanta, GA
32
Newgard CD, Beeson MS, Kessler CS, Kuppermann N, Linden JA, Gallahue F, Wolf S, Hatten B, Akhtar S, Dooley-Hash SL, Yarris L. Establishing an emergency medicine education research network. Acad Emerg Med 2012; 19:1468-75. [PMID: 23279253 DOI: 10.1111/acem.12028] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/02/2012] [Accepted: 07/03/2012] [Indexed: 10/27/2022]
Abstract
This project was developed from the research network track at the 2012 Academic Emergency Medicine consensus conference on education research in emergency medicine (EM). Using a combination of consensus techniques, the modified Delphi method, and qualitative research methods, the authors describe multiple aspects of developing, implementing, managing, and growing an EM education research network. A total of 175 conference attendees and 24 small-group participants contributed to discussions regarding an education research network; participants were experts in research networks, education, and education research. This article summarizes relevant conference discussions and expert opinion for recommendations on the structure of an education research network, basic operational framework, site selection, leadership, subcommittees, guidelines for authorship, logistics, and measuring success while growing and maintaining the network.
Affiliation(s)
- Craig D. Newgard, Center for Policy and Research in Emergency Medicine and the Department of Emergency Medicine, Oregon Health & Science University, Portland, OR
- Michael S. Beeson, Department of Emergency Medicine, Akron General Medical Center, Akron, OH
- Chad S. Kessler, Departments of Emergency Medicine & Internal Medicine, Jesse Brown Veterans Administration Medical Center, Chicago, IL
- Nathan Kuppermann, Department of Emergency Medicine, University of California at Davis, Sacramento, CA
- Judith A. Linden, Department of Emergency Medicine, Boston University School of Medicine, Boston, MA
- Fiona Gallahue, Division of Emergency Medicine, University of Washington, Seattle, WA
- Stephen Wolf, Department of Emergency Medicine, Denver Health Medical Center, Denver, CO
- Saadia Akhtar, Department of Emergency Medicine, Beth Israel Medical Center, Albert Einstein College of Medicine, New York City, NY
- Lalena Yarris, Center for Policy and Research in Emergency Medicine and the Department of Emergency Medicine, Oregon Health & Science University, Portland, OR
33
Wingate MS, Barfield WD, Petrini J, Smith R. Disparities in fetal death and first day death: the influence of risk factors in 2 time periods. Am J Public Health 2012; 102:e68-73. [PMID: 22698022 DOI: 10.2105/ajph.2012.300790] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
OBJECTIVES We examined how changes in risk factors over time influence fetal, first day, and combined fetal-first day mortality and subsequent racial/ethnic disparities. METHODS We selected deliveries to US resident non-Hispanic White and Black mothers from the linked live birth-infant death cohort and fetal deaths files (1995-1996; 2001-2002) and calculated changes over time of mortality rates, odds, and relative odds ratios (RORs) overall and among mothers with modifiable risk factors (smoking, diabetes, or hypertensive disorders). RESULTS Adjusted odds ratios (AORs) for fetal mortality overall (AOR=0.99; 95% confidence interval [CI]=0.96, 1.01) and among Blacks (AOR=0.98; 95% CI=0.93, 1.03) indicated no change over time. Among women with modifiable risk factors, the RORs indicated no change in disparities. The ROR was not significant for fetal mortality (ROR=0.96; 95% CI=0.83, 1.01) among smokers, but there was evidence of some decline. There was evidence of increase in RORs in fetal death among mothers with diabetes and hypertensive disorders, but differences were not significant. CONCLUSIONS Disparities in fetal, first day, and combined fetal-first day mortality have persisted and reflect discrepancies in care provision or other factors more challenging to measure.
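As a worked toy example of the ratio-of-odds-ratios (ROR) idea used above, the sketch below computes a Black/White odds ratio for fetal death in each period and then the ROR comparing the two periods. The counts are invented purely to show the arithmetic and do not come from the study data.

# Illustrative ROR arithmetic with made-up counts per 100,000 deliveries.
def odds_ratio(cases_a, total_a, cases_b, total_b):
    """Odds of the event in group A relative to group B."""
    odds_a = cases_a / (total_a - cases_a)
    odds_b = cases_b / (total_b - cases_b)
    return odds_a / odds_b

or_1995 = odds_ratio(cases_a=620, total_a=100_000, cases_b=310, total_b=100_000)
or_2001 = odds_ratio(cases_a=580, total_a=100_000, cases_b=300, total_b=100_000)
ror = or_2001 / or_1995   # ROR near 1 means the disparity did not change over time

print(f"OR 1995-1996 = {or_1995:.2f}, OR 2001-2002 = {or_2001:.2f}, ROR = {ror:.2f}")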
Affiliation(s)
- Martha S Wingate, Department of Health Care Organization and Policy, University of Alabama at Birmingham, Birmingham, AL 35294-0022, USA