1. Hearn SL, Yang J, Chapman C, DiPonio LA. What would you like to ask your audience? Enhancing feedback value in PM&R grand rounds through customizable evaluation forms with presenter-generated questions. PM R 2023;15:1574-1579. PMID: 37366308. DOI: 10.1002/pmrj.12989.
Abstract
BACKGROUND Feedback and evaluation are important in the professional development of academic physiatrists. Yet physical medicine and rehabilitation (PM&R) learners giving academic presentations receive limited narrative feedback through generic evaluation forms. OBJECTIVE To assess whether customizable evaluation forms that integrate a presenter's specific questions would be associated with an increase in the quantity and quality of narrative feedback received from the audience. DESIGN Separate-samples pre-post intervention study. SETTING A large academic PM&R department's grand rounds. PARTICIPANTS PM&R faculty and trainees attending grand rounds (10-50 attendees with one presenter per session). The study included 20 presentations preintervention (across 1 year) and 38 presentations postintervention (across about 3 years). INTERVENTION A customizable evaluation form that integrates a presenter's own questions into a tailored evaluation form comprising both standardized and presenter-built questions. MAIN OUTCOME MEASURES Narrative feedback quantity was defined as the mean percentage and number of evaluation forms per presentation with at least one comment. Narrative feedback quality included three metrics: mean percentage and number of evaluation forms per presentation with comments that (1) contained ≥8 words, (2) referenced something specific, and (3) offered an actionable suggestion. RESULTS Compared to preintervention, presentations in the postintervention period had a greater mean percentage of evaluation forms containing at least one comment (pre = 33.4%, post = 74.7%, p < .001), a comment that contained ≥8 words (pre = 20.2%, post = 44.2%, p < .001), a comment that referenced something specific (pre = 19.6%, post = 55.1%, p < .001), and a comment that offered an actionable suggestion (pre = 10.2%, post = 22.2%, p < .001).
CONCLUSIONS Use of a customizable evaluation form in PM&R grand rounds that integrates a presenter's own questions was associated with a greater mean percentage of evaluation forms containing comments as well as comments meeting quality metrics related to length, specificity, and actionability.
Affiliation(s)
- Sandra L Hearn
  - Department of Physical Medicine and Rehabilitation, University of Michigan, Ann Arbor, Michigan, USA
- Jun Yang
  - Research Innovation Scholarship Education (RISE), University of Michigan, Ann Arbor, Michigan, USA
- Chris Chapman
  - Health Information Technology and Services, University of Michigan, Ann Arbor, Michigan, USA
- Lisa A DiPonio
  - Department of Physical Medicine and Rehabilitation, University of Michigan, Ann Arbor, Michigan, USA
  - Department of Physical Medicine and Rehabilitation, Ann Arbor Veterans Affairs Medical Center, Ann Arbor, Michigan, USA
2. Barbato KBG, Carvalho LSD, Barreira Marangoni V, Souza FD, Vaena MMDV. Core Competencies Self-Assessment and Patient-Practitioner Orientation during the First Year of a Brazilian Orthopedic Residency. Rev Bras Ortop 2023;58:e742-e749. PMID: 37908538. PMCID: PMC10615602. DOI: 10.1055/s-0043-1768621.
Abstract
Objective Training competent physicians requires directing graduate residents' profiles toward practice activities. We sought to identify residents' doctor-patient relationship orientation and their self-assessment of the core competencies they felt needed development. Methods All 56 orthopedic residents admitted between 2016 and 2019 participated in this prospective observational study. The Patient-Practitioner Orientation Scale (PPOS) and a self-assessment questionnaire were answered at the beginning and end of the first year of residency (R1) in Orthopedics and Traumatology. We calculated means and standard deviations for PPOS items and scores and analyzed them with the paired t-test. Self-assessment questionnaire answer options were "yes" or "I need to improve it", and skills were ranked in decreasing order of the frequency of "I need to improve it" responses, reported as absolute numbers and percentages. Frequencies were compared using Fisher's exact test. P-values < 0.05 were considered statistically significant. GraphPad Prism 8.4.3 (GraphPad Software, San Diego, CA, USA) and Microsoft Excel (Microsoft Corporation, Redmond, WA, USA) were used for statistical analysis. Results Between the beginning and the end of R1, the total PPOS mean score decreased significantly from 4.63 to 4.50 (p = 0.024), indicating a more biomedical-focused orientation. Around one-third of the residents identified the competencies of patient care, practice-based learning and improvement, and interpersonal and communication skills as needing improvement. Conclusions The PPOS and self-assessment activities could promote reflective practice and are possible tools for learner-centered competency assessment. Biomedical orientation tends to prevail as the training of physicians progresses, and periodic self-assessments can be used to build a growth mindset.
Affiliation(s)
- Kelly Biancardini Gomes Barbato
  - Área de Medicina Interna, Instituto Nacional de Traumatologia e Ortopedia, Rio de Janeiro, RJ, Brasil
  - Divisão de Ensino e Pesquisa, Instituto Nacional de Traumatologia e Ortopedia, Rio de Janeiro, RJ, Brasil
  - Escola de Medicina Souza Marques, Fundação Técnico-Educacional Souza Marques, Rio de Janeiro, RJ, Brasil
- Luciana Santos de Carvalho
  - Unidade de Educação Permanente, Instituto Nacional de Traumatologia e Ortopedia, Rio de Janeiro, RJ, Brasil
- Viviani Barreira Marangoni
  - Unidade de Educação Permanente, Instituto Nacional de Traumatologia e Ortopedia, Rio de Janeiro, RJ, Brasil
  - Atividade Integradora – Ciclo Básico, Centro Universitário Arthur Sa Earp Neto, Rio de Janeiro, RJ, Brasil
- Fábio de Souza
  - Área de Medicina Interna, Instituto Nacional de Traumatologia e Ortopedia, Rio de Janeiro, RJ, Brasil
  - Departamento de Cardiologia, Universidade Federal do Estado do Rio de Janeiro, Rio de Janeiro, RJ, Brasil
- Marcella Martins de Vasconcelos Vaena
  - Coordenação Diagnóstica e Terapêutica de Hemoterapia, Instituto Nacional de Saúde da Mulher, da Criança e do Adolescente Fernandes Figueira, Fundação Oswaldo Cruz, Rio de Janeiro, RJ, Brasil
  - Departamento de Hemoterapia, Instituto Nacional de Câncer, Rio de Janeiro, RJ, Brasil
  - Divisão Campus Cittá, Instituto de Educação Médica Estácio de Sá, Universidade Estácio de Sá, Rio de Janeiro, RJ, Brasil
3. Dory V, Wagner M, Cruess R, Cruess S, Young M. If we assess, will they learn? Students' perspectives on the complexities of assessment-for-learning. Can Med Educ J 2023;14:94-104. PMID: 37719398. PMCID: PMC10500400. DOI: 10.36834/cmej.73875.
Abstract
Introduction Assessment can positively influence learning; however, designing effective assessment-for-learning interventions has proved challenging. We implemented a mandatory assessment-for-learning system comprising a workplace-based assessment of non-medical-expert competencies and a progress test in undergraduate medical education and evaluated its impact. Methods We conducted semi-structured interviews with year-3 and year-4 medical students at McGill University to explore how the assessment system had influenced their learning in year 3. We conducted a theory-informed thematic analysis of the data. Results Eleven students participated, revealing that the assessment influenced learning through several mechanisms. Some required little student engagement (e.g., feed-up, test-enhanced learning, looking things up after an exam). Others required substantial engagement (e.g., studying for tests, selecting raters for quality feedback, using feedback). Student engagement was moderated by the perceived credibility of the system and of the costs and benefits of engagement. Credibility was shaped by students' goals-in-context: becoming a good doctor, contributing to the healthcare team, and succeeding in assessments. Discussion Our assessment system failed to engage students enough to leverage its full potential. We discuss the inherent flaws and external factors that hindered student engagement. Assessment designers should leverage easy-to-control mechanisms to support assessment-for-learning and anticipate significant collaborative work to modify learning cultures.
Affiliation(s)
- Valérie Dory
  - Department of General Practice, Faculty of Medicine, Université de Liège, Liège, Belgium
  - Department of Medicine and Centre for Medical Education, Faculty of Medicine, McGill University, Quebec, Canada
  - Institute of Health Sciences Education and Academic Centre of General Practice, Université catholique de Louvain, Brussels, Belgium
- Maryam Wagner
  - Institute of Health Sciences Education, Faculty of Medicine and Health Sciences, McGill University, Quebec, Canada
- Richard Cruess
  - Institute of Health Sciences Education, Faculty of Medicine and Health Sciences, McGill University, Quebec, Canada
- Sylvia Cruess
  - Institute of Health Sciences Education, Faculty of Medicine and Health Sciences, McGill University, Quebec, Canada
- Meredith Young
  - Institute of Health Sciences Education, Faculty of Medicine and Health Sciences, McGill University, Quebec, Canada
4. Alfakhry G, Mustafa K, Ybrode K, Jazayerli B, Milly H, Abohajar S, Hassan H, Alhomsi K, Jamous I. Evaluation of a workplace assessment method designed to improve self-assessment in operative dentistry: a quasi-experiment. BMC Med Educ 2023;23:491. PMID: 37400864. DOI: 10.1186/s12909-023-04474-z.
Abstract
BACKGROUND Dental education has placed continued emphasis on self-regulated learning (SRL) and its subprocess, self-assessment. This study set out to evaluate the effectiveness of a novel workplace assessment method in developing trainees' self-assessment of operative procedures. METHODS A Direct Observation of Procedural Skills (DOPS) form was modified for the use and measurement of self-assessment. Participants were trained to conduct self-assessment using the designed assessment form and its grading rubric. Feedback and feedforward sessions were given to address self-assessment and performance issues. A P-value less than 0.10 was considered significant, and the confidence level was set at 90%. RESULTS Thirty-two Year 5 dental students with a mean age of 22.45 years (SD = 0.8) completed five self-DOPS encounters during the clinical operative dentistry module in 2022. The aggregated total deviation (absolute difference) between self-assessment and teacher assessment decreased consistently across the five assessment encounters, with a significant mean difference and a medium effect size (P = 0.064, partial eta squared = 0.069). Participants' self-assessment accuracy differed from one skill to another, and their ability to identify areas of improvement as perceived by teachers improved significantly (P = 0.011, partial eta squared = 0.099). Participants' attitudes towards the assessment method were positive. CONCLUSIONS The findings suggest that the self-DOPS method was effective in developing participants' ability to self-assess. Future research should explore the effectiveness of this assessment method across a wider range of clinical procedures.
Affiliation(s)
- Ghaith Alfakhry
  - Program of Medical Education, Syrian Virtual University, Damascus, Syria
  - Education Quality and Scientific Research Office, Al-Sham Private University, Baramekeh, City Center, Damascus Governorate, Syria
  - Faculty of Dental Medicine, Damascus University, Damascus, Syria
- Khattab Mustafa
  - Program of Medical Education, Syrian Virtual University, Damascus, Syria
  - Department of Endodontics and Operative Dentistry, Faculty of Dental Medicine, Damascus University, Damascus, Syria
- Kamal Ybrode
  - Department of Endodontics and Operative Dentistry, Faculty of Dental Medicine, Damascus University, Damascus, Syria
- Bashar Jazayerli
  - Program of Medical Education, Syrian Virtual University, Damascus, Syria
  - Faculty of Dental Medicine, Damascus University, Damascus, Syria
- Hussam Milly
  - Department of Endodontics and Operative Dentistry, Faculty of Dental Medicine, Damascus University, Damascus, Syria
- Salam Abohajar
  - Faculty of Dental Medicine, Damascus University, Damascus, Syria
  - Department of Fixed Prosthodontics, Faculty of Dental Medicine, Damascus University, Damascus, Syria
- Hussam Hassan
  - Department of Periodontology, Faculty of Dental Medicine, Damascus University, Damascus, Syria
- Khaled Alhomsi
  - Department of Biomedical Sciences, Al-Sham Private University, Damascus, Syria
- Issam Jamous
  - Program of Medical Education, Syrian Virtual University, Damascus, Syria
  - Department of Fixed Prosthodontics, Faculty of Dental Medicine, Damascus University, Damascus, Syria
5. Robb KA, Rosenbaum ME, Peters L, Lenoch S, Lancianese D, Miller JL. Self-Assessment in Feedback Conversations: A Complicated Balance. Acad Med 2023;98:248-254. PMID: 35947481. DOI: 10.1097/acm.0000000000004917.
Abstract
PURPOSE Learner-centered feedback models encourage educators to ask learners to self-assess at the start of feedback conversations. This study examines how learners perceive and respond to self-assessment prompts during feedback conversations and assesses medical students' perceptions of and approach to self-assessment used as the basis for these conversations. METHOD All rising second-, third-, and fourth-year medical students at a midwestern U.S. medical school were invited to participate in this study. Students participated in 1-on-1 interviews between June and August 2019 during which they were asked open-ended questions about their experiences with self-assessment and feedback during medical school. The interviews were audio recorded and transcribed, and comments related to self-assessment in feedback conversations were extracted. Thematic analysis was used to identify recurrent ideas and patterns within the transcripts, and all excerpts were reviewed and coded to ensure that the identified themes adequately captured the range of student responses. RESULTS A total of 25 students participated in the study. Although some students noted improvement in their self-assessment abilities with increasing experience, no consistent gender, race, or training-level differences were found in reported attitudes or preferences. Students identified many benefits of self-assessment and generally appreciated being asked to self-assess before receiving feedback. Students had varied responses to specific self-assessment prompts, with no clear preferences for any particular self-assessment questions. Students described weighing multiple factors, such as image concerns and worries about impact on subsequent evaluations, when deciding how to respond to self-assessment prompts. CONCLUSIONS The process by which learners formulate and share self-assessments in feedback conversations is not straightforward. 
Although educators should continue to elicit self-assessments in feedback discussions, they should recognize the limitations of these self-assessments and strive to create a safe environment in which learners feel empowered to share their true impressions.
Affiliation(s)
- Katharine A Robb
  - K.A. Robb is clinical assistant professor, Department of Pediatrics, Division of Critical Care, University of Iowa Carver College of Medicine, Iowa City, Iowa; ORCID: http://orcid.org/0000-0002-3071-3429
- Marcy E Rosenbaum
  - M.E. Rosenbaum is professor, Department of Family Medicine, University of Iowa Carver College of Medicine, Iowa City, Iowa; ORCID: http://orcid.org/0000-0002-8000-5711
- Lauren Peters
  - L. Peters is a PhD candidate, Department of Communication Studies, University of Iowa, Iowa City, Iowa
- Susan Lenoch
  - S. Lenoch is instructional services manager, Office of Consultation and Research in Medical Education, University of Iowa, Iowa City, Iowa; ORCID: http://orcid.org/0000-0001-6069-6650
- Donna Lancianese
  - D. Lancianese is program coordinator, Office of Consultation and Research in Medical Education, University of Iowa, Iowa City, Iowa
- Jane L Miller
  - J.L. Miller is clinical associate professor, Department of Family Medicine, University of Iowa Carver College of Medicine, Iowa City, Iowa; ORCID: http://orcid.org/0000-0001-9518-3396
6. "Maybe it's the first time somebody's been honest with you": exploring how residents reconcile feedback variability. Can J Emerg Med 2023;25:143-149. PMID: 36580210. DOI: 10.1007/s43678-022-00435-5.
Abstract
BACKGROUND Supervisors in postgraduate medical education may deliver different feedback for the same quality of performance. Residents may struggle to make sense of inconsistent and sometimes contradictory information. We sought to explore how residents experience feedback from different supervisors, how they process inconsistent information, and what factors influence their experiences. METHODS Eighteen residents participated in semi-structured interviews to explore their perspectives on feedback. Using a constructivist grounded theory approach, we engaged in iterative cycles of data collection and analysis, sampling until theoretical sufficiency was reached. Constant comparative analysis was used to identify and define themes. RESULTS We identified a central theme of reconciliation, which we defined as the act of processing inconsistent feedback and determining how to engage with it. This reconciliation was informed by the credibility of, and residents' relationship with, supervisors and was achieved through conversations with peers and mentors, observation of other supervisors' behavior toward their performance, and reflection on their own performance. Participants expressed a reluctance to discard feedback, even if they felt it was incongruent with previous feedback or their own self-concept and self-assessment. CONCLUSION The findings of this study show that while residents are regular consumers of feedback, not all feedback is used equally. Residents actively reconcile sometimes-contradictory feedback and must work to balance a general reluctance to discard feedback, while developing an understanding of its credibility. This work reinforces the importance of pedagogical relationships and identifies that facilitated reflection that explicitly acknowledges feedback inconsistencies may be important in the reconciliation process.
7. Muacevic A, Adler JR, Agboola O, Vajta Gomez JP, Alapati A, Alston S. Partnering With Residents on the Redesign of the Internal Medicine Resident Self-Evaluation Form. Cureus 2023;15:e33304. PMID: 36741634. PMCID: PMC9894636. DOI: 10.7759/cureus.33304.
Abstract
INTRODUCTION Resident-driven synthesis of assessment data has been associated with increased intrinsic motivation to learn and with the creation of individualized strategies to improve performance. The objective of the study was to incorporate residents' recommendations for restructuring the self-assessment metric into a tool that would promote a well-organized and effective self-improvement plan. MATERIALS AND METHODS Residents and faculty collaborated on pre- and post-intervention questionnaires to assess the barriers to timely completion of the current self-evaluation form and to gather information on the tool's ability to stimulate the formation of concrete goals. The residents were also invited to provide recommendations on the structure of the new tool and on the educational domains it assessed. The post-survey also evaluated the capacity of the proposed tool to guide residents in establishing specific goals. RESULTS The new form is concise and more precise in assisting the learner in developing short-term and long-term goals and the strategies and resources to achieve them. DISCUSSION Collaborating with the learners created an opportunity to address the faculty's and residents' most important concerns about the effectiveness of the metric. CONCLUSION In a learner-centered model, resident participation is critical in designing or redesigning a practical self-assessment tool for residents in Internal Medicine.
8. Carbajal MM, Dadiz R, Sawyer T, Kane S, Frost M, Angert R. Part 5: Essentials of Neonatal-Perinatal Medicine Fellowship: evaluation of competence and proficiency using Milestones. J Perinatol 2022;42:809-814. PMID: 35149835. DOI: 10.1038/s41372-021-01306-0.
Abstract
The Accreditation Council for Graduate Medical Education (ACGME) Pediatric Subspecialty Milestone Project competencies are used for Neonatal-Perinatal Medicine (NPM) fellows. Milestones are longitudinal markers that range from novice to expert (levels 1-5). There is no standard approach to the required biannual evaluation of fellows by fellowship programs, resulting in significant variability among programs regarding procedural experience and exposure to pathology during clinical training. In this paper, we discuss the opportunities that Milestones provide, potential strategies to address challenges, and future directions.
Affiliation(s)
- Melissa M Carbajal
  - Department of Pediatrics, Section of Neonatology, Baylor College of Medicine, Houston, TX, USA
- Rita Dadiz
  - Department of Pediatrics, University of Rochester School of Medicine and Dentistry, Rochester, NY, USA
- Taylor Sawyer
  - Department of Pediatrics, University of Washington School of Medicine, Seattle, WA, USA
- Sara Kane
  - Department of Pediatrics, Indiana University School of Medicine, Indianapolis, IN, USA
- Mackenzie Frost
  - Department of Pediatrics, Perelman School of Medicine at the University of Pennsylvania, Philadelphia, PA, USA
- Robert Angert
  - Department of Pediatrics, New York University Grossman School of Medicine, New York, NY, USA
9. Thomas J, Sandefur B, Colletti J, Mullan A, Homme J. Integrating self-assessment into feedback for emergency medicine residents. AEM Educ Train 2022;6:e10721. PMID: 35155973. PMCID: PMC8823156. DOI: 10.1002/aet2.10721.
Abstract
BACKGROUND In 2013, the Accreditation Council for Graduate Medical Education (ACGME) introduced "Milestones" designed to nationally standardize the assessment of resident physicians. Previous studies have compared resident self-assessment on Milestones to faculty assessment, with varying degrees of agreement, but integration of self-assessment into the formative feedback process has not been directly studied. This study uses a conceptual framework of self-determination theory, integrated with concepts from adult learning theory, to compare the perceived quality of feedback given in semiannual reviews before and after the incorporation of resident self-assessment into the feedback process. METHODS This was an interventional study conducted in a single emergency medicine residency program at a major academic hospital over 1 calendar year. Residents first engaged in a semiannual review without self-assessment. At subsequent semiannual reviews, residents completed a Milestone-based self-assessment that was provided to the faculty member assigned to conduct their semiannual review. Residents and faculty completed surveys rating their perception of feedback quality. Two-sided Wilcoxon signed-rank tests were used for comparison analysis. RESULTS One resident did not self-assess prior to the semiannual review and was excluded, leaving 25 paired surveys for analysis. Residents found feedback after the self-assessment more actionable (p = 0.013), more insightful (p = 0.010), and better overall (p = 0.025). Similarly, faculty felt the feedback they provided was more actionable (p < 0.001), more insightful (p < 0.001), and better communicated (p < 0.001); led to improved resident understanding of Milestones (p < 0.001); and were overall more satisfied (p < 0.001). Free-text comments explore pre- and postintervention perceptions of feedback.
CONCLUSIONS Integration of self-assessment into semiannual reviews improves perception of feedback given to residents as perceived by both residents and faculty. Although limited by sample size, the results are promising for a simple, evidence-based intervention to improve feedback during an existing mandated feedback opportunity.
Affiliation(s)
- Jenna Thomas
  - Department of Emergency Medicine, University of Michigan, Ann Arbor, Michigan, USA
  - Department of Emergency Medicine, Mayo Clinic, Rochester, Minnesota, USA
- James Colletti
  - Department of Emergency Medicine, Mayo Clinic, Rochester, Minnesota, USA
- Aidan Mullan
  - Department of Biostatistics and Informatics, Mayo Clinic, Rochester, Minnesota, USA
- James Homme
  - Department of Emergency Medicine, Mayo Clinic, Rochester, Minnesota, USA
10. Esteves A, McConnell M, Ferretti E, Garber A, Fung-Kee-Fung K. "When in Doubt, Ask the Patient": A Quantitative, Patient-Oriented Approach to Formative Assessment of CanMEDS Roles. MedEdPORTAL 2021;17:11169. PMID: 34368437. PMCID: PMC8292435. DOI: 10.15766/mep_2374-8265.11169.
Abstract
INTRODUCTION Since the introduction of competency-based frameworks into postgraduate medical curricula, educators have struggled to implement robust assessment tools that document the progression of necessary skills. The global movement towards competency-based medical education demands validated assessment tools. Our objective was to provide validity evidence for the Ottawa CanMEDS Competency Assessment Tool (OCCAT), designed to assess clinical performance in the communicator, professional, and health advocate CanMEDS roles. METHODS We developed the OCCAT, a 29-item questionnaire informed by specialty-specific Entrustable Professional Activities and consultation with stakeholders, including patients. Our sample included nine neonatal-perinatal medicine and maternal fetal medicine fellows rotating through antenatal high-risk clinics at the Ottawa Hospital. Following 70 unique encounters, the OCCAT was completed by patients and learners. Generalizability theory was used to determine overall reliability of scores. Differences in self and patient ratings were assessed using analyses of variance. RESULTS Generalizability analysis demonstrated that both questionnaires produced reliable scores (G-coefficient > 0.9). Self-scores were significantly lower than patient scores across all competencies, F(1, 6) = 13.9, p = .007. Variability analysis demonstrated that trainee scores varied across all competencies, suggesting both groups were able to recognize competencies as distinct and discriminate favorable behaviors belonging to each. DISCUSSION Our findings lend support to the movement to integrate self-assessment and patient feedback in formal evaluations for the purpose of enriched learner experiences and improved patient outcomes. We anticipate that the OCCAT will facilitate bridging to competency-based medical education.
Affiliation(s)
- Ashley Esteves
  - Senior Medical Student, University of Ottawa Faculty of Medicine
- Meghan McConnell
  - Associate Professor, Department of Innovation in Medical Education and Department of Anesthesiology and Pain Medicine, University of Ottawa Faculty of Medicine
- Emanuela Ferretti
  - Neonatologist and Associate Professor, Division of Neonatology, Department of Pediatrics, Children's Hospital of Eastern Ontario and University of Ottawa Faculty of Medicine
- Adam Garber
  - Associate Program Director and Associate Professor, Department of Obstetrics and Gynecology, University of Ottawa Faculty of Medicine
- Karen Fung-Kee-Fung
  - Professor, Division of Maternal Fetal Medicine, Department of Obstetrics and Gynecology, University of Ottawa Faculty of Medicine
11. Dai CM, Bertram K, Chahine S. Feedback Credibility in Healthcare Education: a Systematic Review and Synthesis. Med Sci Educ 2021;31:923-933. PMID: 34457934. PMCID: PMC8368112. DOI: 10.1007/s40670-020-01167-w.
Abstract
PURPOSE The purpose of this study was to systematically review and synthesize factors that influence learners' perceptions of credibility when feedback is provided by an authority figure in a healthcare environment. METHODS This study reviewed literature from medicine, psychology, and education using systematic review and qualitative synthesis methods. In a multi-step process, major electronic bibliographic databases were searched for relevant studies until October 2020. RESULTS The search identified 9216 articles. A total of 134 abstracts underwent full-text review. Of these, 22 articles met inclusion criteria. The studies were heterogeneous; the majority used a qualitative design with interviews and focus groups, two employed mixed methodology, and two used a quantitative design. Four main themes were identified: feedback characteristics, context of feedback, source credibility, and recipient characteristics. CONCLUSION As programs implement major educational change initiatives to create more formative assessment practices, feedback will become even more crucial. The four main themes identified are important factors that contribute to the perception of feedback credibility. While these factors are described independently, they may be viewed as interrelated, and the association between them and feedback may be driven more by learning culture than by each characteristic alone. SUPPLEMENTARY INFORMATION The online version contains supplementary material available at 10.1007/s40670-020-01167-w.
Affiliation(s)
- Cecilia M. Dai
- Schulich School of Medicine & Dentistry, Western University, London, Ontario Canada
- Health Sciences Addition, Western University, Room H110B, London, Ontario N6A 5C1 Canada
- Kaitlyn Bertram
- Royal Victoria Regional Health Centre (RVH), University of Toronto Department of Family and Community Medicine, Toronto, Ontario Canada
- Saad Chahine
- Centre for Education Research & Innovation, Schulich School of Medicine & Dentistry, Western University, London, Ontario Canada
12
13
Geranmayeh M, Khakbazan Z, Azizi F, Mehran A. Effects of feedback on midwifery students' self-assessed performance and their self-assessment ability: a quasi-experimental study. Int Q Community Health Educ 2019; 40:299-305. [PMID: 31652075 DOI: 10.1177/0272684x19885512]
Abstract
This study aimed to evaluate the effects of verbal and written feedback during clinical midwifery placement on students' self-assessed performance and their self-assessment ability. This three-group quasi-experimental study was conducted on 120 students. Participants in the control group received clinical education through the routine method, while those in the feedback groups received either verbal or written feedback based on the sandwich feedback model. On the last day of clinical education, a checklist was filled out simultaneously by participants and a second instructor. There was a significant direct correlation between the performance scores assigned by the second instructor and the students' self-assessed scores in the control group (r = .38, p = .01), the verbal feedback group (r = .63, p < .001), and the written feedback group (r = .74, p < .001). The rates of student-instructor agreement in the control, verbal feedback, and written feedback groups were 32.5%, 70%, and 77.5%, respectively. Feedback significantly improves students' self-assessment ability.
Affiliation(s)
- Mehrnaz Geranmayeh
- Department of Reproductive Health and Midwifery, School of Nursing and Midwifery, Tehran University of Medical Sciences, Iran
- Zohre Khakbazan
- Department of Reproductive Health and Midwifery, School of Nursing and Midwifery, Tehran University of Medical Sciences, Iran
- Farahnaz Azizi
- School of Nursing and Midwifery, Astara Azad University, Gilan, Iran
- Abbas Mehran
- Department of Midwifery, School of Nursing and Midwifery, Tehran University of Medical Sciences, Iran
14
Scott I, Gingerich A, Eva KW. Twelve tips for clinicians dealing with uncertainty when assessing learners. Med Teach 2019; 41:888-894. [PMID: 30299204 DOI: 10.1080/0142159x.2018.1494381]
Abstract
Clinician educators often experience distress caused by uncertainty about how to participate effectively in assessment practices in a way that supports both their programs and their students. Uncertainty is a common state for clinicians, particularly those who see patients with early or ill-defined illness presentations. While clinicians often feel ill at ease when facing uncertainty in the clinical realm, becoming comfortable with uncertainty and learning to manage such states are now recognized as vital components of clinical practice. Clinicians, as a result, have adopted a series of strategies to lessen the unease that uncertainty can create. Although similar experiences plague clinician educators placed in assessment roles, much less attention has been given to how we can support individuals in the education setting, where the distress of uncertainty may be greater because clinician educators have less experience with assessment practices. Fortunately, strategies that are effective in the clinical domain can be translated into the assessment realm to accommodate uncertainty when assessing learners. In this twelve tips article we offer guidance on the translation of such strategies.
Affiliation(s)
- I Scott
- Centre for Health Education Scholarship (CHES), Department of Family Science, Faculty of Medicine, University of British Columbia, Canada
- A Gingerich
- University Hospital of Northern British Columbia, Prince George, British Columbia, Canada
- K W Eva
- Centre for Health Education Scholarship (CHES), Department of Family Science, Faculty of Medicine, University of British Columbia, Canada
15
Moroz A, King A, Kim B, Fusco H, Carmody K. Constructing a shared mental model for feedback conversations: faculty workshop using video vignettes developed by residents. MedEdPORTAL 2019; 15:10821. [PMID: 31139740 PMCID: PMC6519682 DOI: 10.15766/mep_2374-8265.10821]
Abstract
INTRODUCTION Providing feedback is a fundamental principle in medical education; however, as educators, our community lacks the necessary skills to give meaningful, impactful feedback to those under our supervision. By improving our feedback-giving skills, we provide concrete ways for trainees to optimize their performance, ultimately leading to better patient care. METHODS In this faculty development workshop, faculty groups used six feedback video vignettes scripted, enacted, and produced by residents to arrive at a shared mental model of feedback. During workshop development, we combined qualitative analysis of faculty narratives with the findings of a focused literature review to define dimensions of feedback. RESULTS Twenty-three faculty (physical medicine and rehabilitation and neurology) participated in seven small-group workshops. Analysis of group discussion notes yielded 343 codes that were collapsed into 25 coding categories. After incorporating the results of a focused literature review, we identified 48 items grouped into 10 dimensions of feedback. Online session evaluation indicated that faculty members liked the workshop's format and felt they had become better at providing feedback to residents as a result of the workshop. DISCUSSION Small faculty groups were able to develop a shared mental model of the dimensions of feedback that was also grounded in medical education literature. The theme of specificity of feedback was prominent and echoed recent medical education research findings. Defining performance expectations for feedback providers in the form of a practical and psychometrically sound rubric can enhance reliable scoring of feedback performance assessments and should be the next step in our work.
Affiliation(s)
- Alex Moroz
- Associate Professor, Department of Rehabilitation Medicine, New York University School of Medicine
- Anna King
- Chief Resident, Department of Rehabilitation Medicine, New York University School of Medicine
- Baruch Kim
- Chief Resident, Department of Rehabilitation Medicine, New York University School of Medicine
- Heidi Fusco
- Clinical Assistant Professor, Department of Rehabilitation Medicine, New York University School of Medicine
- Kristin Carmody
- Associate Professor, Department of Emergency Medicine, New York University School of Medicine
16
LaDonna KA, Watling C. In search of meaningful feedback conversations. Med Educ 2018; 52:250-251. [PMID: 29441636 DOI: 10.1111/medu.13518]