1. Hall AM, Gray A, Ragsdale JW. Making narrative feedback meaningful. Clin Teach 2024:e13766. PMID: 38651603. DOI: 10.1111/tct.13766.
Abstract
BACKGROUND Narrative written feedback given to students by faculty often fails to identify areas for improvement and to recommend actions that would lead to this improvement. When these elements are missing, it is challenging for students to improve and for medical schools to use narrative feedback in promotion decisions, to guide coaching plans and to pass on meaningful information to residency programs. Large-group faculty development has improved narrative written feedback, but less is known about individualised faculty development to supplement large-group sessions. To fill this gap, we built a curriculum with general and individualised faculty development to improve narrative written feedback from Internal Medicine faculty to clerkship students. APPROACH We used Kern's steps to build a curriculum with general and individualised one-on-one faculty development to address the problem of inadequate narrative written feedback. We used a novel narrative feedback rubric to score faculty pre- and post-intervention. RESULTS/FINDINGS/EVALUATION Through general and individualised one-on-one faculty development with peer comparison scores, we improved narrative written feedback from 3.7/6 to 4.6/6, an increase of 23%. IMPLICATIONS We found our faculty development program effective in improving feedback and easy to implement. Our rubric was easy to use, and faculty were receptive to feedback in one-on-one meetings. We plan to extend this work locally to other divisions/departments and into graduate medical education; it should also extend readily to other medical disciplines and health professions.
Affiliation(s)
- Alan M Hall
- Departments of Internal Medicine and Pediatrics, University of Kentucky College of Medicine, Lexington, Kentucky, USA
- Adam Gray
- Department of Internal Medicine, University of Kentucky College of Medicine, Lexington, Kentucky, USA
- John W Ragsdale
- Department of Internal Medicine, University of Kentucky College of Medicine, Lexington, Kentucky, USA
2. Vennemeyer S, Kinnear B, Gao A, Zhu S, Nattam A, Knopp MI, Warm E, Wu DT. User-Centered Evaluation and Design Recommendations for an Internal Medicine Resident Competency Assessment Dashboard. Appl Clin Inform 2023;14:996-1007. PMID: 38122817. PMCID: PMC10733060. DOI: 10.1055/s-0043-1777103.
Abstract
OBJECTIVES Clinical Competency Committee (CCC) members employ varied approaches to the review process, which makes it difficult to design a competency assessment dashboard that fits the needs of all members. This work details a user-centered evaluation of a dashboard currently used by the Internal Medicine Clinical Competency Committee (IM CCC) at the University of Cincinnati College of Medicine and presents the design recommendations it generated. METHODS Eleven members of the IM CCC participated in semistructured interviews with the research team. These interviews were recorded and transcribed for analysis. The three design research methods used in this study were process mapping (workflow diagrams), affinity diagramming, and a ranking experiment. RESULTS Through affinity diagramming, the research team identified and organized the opportunities for improvement in the current system that study participants expressed. These areas include a time-consuming preprocessing step, lack of integration of data from multiple sources, and different workflows for each step in the review process. Finally, the research team categorized nine dashboard components based on rankings provided by the participants. CONCLUSION We successfully conducted a user-centered evaluation of an IM CCC dashboard and generated four recommendations: programs should integrate quantitative and qualitative feedback, create multiple views to display these data based on user roles, work with designers to create a usable, interpretable dashboard, and develop a strong informatics pipeline to manage the system. To our knowledge, this type of user-centered evaluation has rarely been attempted in the medical education domain. This study therefore provides best practices for other residency programs to evaluate current competency assessment tools and to develop new ones.
Affiliation(s)
- Scott Vennemeyer
- Department of Biomedical Informatics, College of Medicine, University of Cincinnati, Ohio, United States
- Benjamin Kinnear
- Department of Pediatrics, College of Medicine, University of Cincinnati, Ohio, United States
- Department of Internal Medicine, College of Medicine, University of Cincinnati, Ohio, United States
- Andy Gao
- Department of Biomedical Informatics, College of Medicine, University of Cincinnati, Ohio, United States
- Medical Sciences Baccalaureate Program, College of Medicine, University of Cincinnati, Ohio, United States
- Siyi Zhu
- Department of Biomedical Informatics, College of Medicine, University of Cincinnati, Ohio, United States
- School of Design, College of Design, Architecture, Art, and Planning (DAAP), University of Cincinnati, Ohio, United States
- Anunita Nattam
- Department of Biomedical Informatics, College of Medicine, University of Cincinnati, Ohio, United States
- Medical Sciences Baccalaureate Program, College of Medicine, University of Cincinnati, Ohio, United States
- Michelle I. Knopp
- Department of Internal Medicine, College of Medicine, University of Cincinnati, Ohio, United States
- Division of Hospital Medicine, Cincinnati Children's Hospital Medical Center, Department of Pediatrics, College of Medicine, University of Cincinnati, Ohio, United States
- Eric Warm
- Department of Internal Medicine, College of Medicine, University of Cincinnati, Ohio, United States
- Danny T.Y. Wu
- Department of Biomedical Informatics, College of Medicine, University of Cincinnati, Ohio, United States
- Department of Pediatrics, College of Medicine, University of Cincinnati, Ohio, United States
- Medical Sciences Baccalaureate Program, College of Medicine, University of Cincinnati, Ohio, United States
- School of Design, College of Design, Architecture, Art, and Planning (DAAP), University of Cincinnati, Ohio, United States
3. Tay AZ, Tang PY, New LM, Zhang X, Leow WQ. Detecting residents at risk of attrition - A Singapore pathology residency's experience. Acad Pathol 2023;10:100075. PMID: 37095782. PMCID: PMC10121803. DOI: 10.1016/j.acpath.2023.100075.
Abstract
The SingHealth Pathology Residency Program (SHPRP) is a 5-year postgraduate training program in Singapore. We face the problem of resident attrition, which has a significant impact on the individual, the program and healthcare providers. Our residents are regularly evaluated using in-house evaluations as well as the assessments required by our partnership with the Accreditation Council for Graduate Medical Education International (ACGME-I). We therefore sought to determine whether these assessments could distinguish residents who would leave the program from residents who would graduate successfully. We retrospectively analysed the existing residency assessments of all residents who had separated from SHPRP and compared them with those of residents currently in senior residency or who had graduated from the program. Statistical analysis was performed on the quantitative assessment methods: the Resident In-Service Examination (RISE), 360-degree feedback, faculty assessment, Milestones and our own annual departmental mock examination. Word frequency analysis of narrative feedback from faculty assessment was used to generate themes. Since 2011, 10 out of 34 residents have separated from the program. RISE, Milestone data and the departmental mock examination showed statistically significant discrimination between residents at risk of attrition for specialty-related reasons and successful residents. Analysis of narrative feedback showed that successful residents performed better in organization, preparation with clinical history, application of knowledge, interpersonal communication and achieving sustained progress. Existing assessment methods used in our pathology residency program are effective in detecting residents at risk of attrition, which also suggests applications in how we select, assess and teach residents.
Affiliation(s)
- Amos Z.E. Tay
- Department of Anatomic Pathology, Singapore General Hospital, Singapore
- Duke-NUS Medical School, Singapore
- Corresponding author. Department of Anatomic Pathology, Singapore General Hospital, Academia, Level 10, Diagnostic Tower, 20 College Road, Singapore, 169856, Singapore
- Po Yin Tang
- Department of Anatomic Pathology, Singapore General Hospital, Singapore
- Duke-NUS Medical School, Singapore
- Lee May New
- Department of Anatomic Pathology, Singapore General Hospital, Singapore
- Xiaozhu Zhang
- Department of Anatomic Pathology, Singapore General Hospital, Singapore
- Wei-Qiang Leow
- Department of Anatomic Pathology, Singapore General Hospital, Singapore
- Duke-NUS Medical School, Singapore
4. Mooney CJ, Pascoe JM, Blatt AE, Lang VJ, Kelly MS, Braun MK, Burch JE, Stone RT. Predictors of faculty narrative evaluation quality in medical school clerkships. Med Educ 2022;56:1223-1231. PMID: 35950329. DOI: 10.1111/medu.14911.
Abstract
INTRODUCTION Narrative approaches to assessment provide meaningful and valid representations of trainee performance. Yet narratives are frequently perceived as vague, nonspecific and low quality. To date, there is little research examining factors associated with narrative evaluation quality, particularly in undergraduate medical education. The purpose of this study was to examine associations of faculty- and student-level characteristics with the quality of faculty members' narrative evaluations of clerkship students. METHODS The authors reviewed faculty narrative evaluations of 50 students' clinical performance in their inpatient medicine and neurology clerkships, resulting in 165 and 87 unique evaluations in the respective clerkships. The authors evaluated narrative quality using the Narrative Evaluation Quality Instrument (NEQI) and used linear mixed effects modelling to predict total NEQI score. Explanatory covariates included time to evaluation completion, number of weeks spent with the student, faculty total weeks on service per year, total faculty years in clinical education, student gender, faculty gender, and an interaction term between student and faculty gender. RESULTS Higher narrative evaluation quality was significantly associated with a shorter time to evaluation completion, with NEQI scores decreasing by approximately 0.3 points every 10 days following students' rotations (p = .004). Additionally, women faculty wrote narrative evaluations of statistically higher quality, with NEQI scores 1.92 points greater than those of men faculty (p = .012). All other covariates were not significant. CONCLUSIONS The quality of faculty members' narrative evaluations of medical students was associated with time to evaluation completion and faculty gender, but not with faculty experience in clinical education, faculty weeks on service, or the amount of time spent with students. These findings advance understanding of ways to improve the quality of narrative evaluations, which is imperative given assessment models that will increase the volume of, and reliance on, narratives.
Affiliation(s)
- Christopher J Mooney
- School of Medicine and Dentistry, University of Rochester, Rochester, New York, USA
- Jennifer M Pascoe
- School of Medicine and Dentistry, University of Rochester, Rochester, New York, USA
- Amy E Blatt
- School of Medicine and Dentistry, University of Rochester, Rochester, New York, USA
- Valerie J Lang
- School of Medicine and Dentistry, University of Rochester, Rochester, New York, USA
- Melanie K Braun
- School of Medicine and Dentistry, University of Rochester, Rochester, New York, USA
- Jaclyn E Burch
- School of Medicine and Dentistry, University of Rochester, Rochester, New York, USA
5. Mooney CJ, Blatt A, Pascoe J, Lang V, Kelly M, Braun M, Burch J, Stone RT. Predictors of Narrative Evaluation Quality in Undergraduate Medical Education Clerkships. Acad Med 2022;97:S168. PMID: 37838897. DOI: 10.1097/acm.0000000000004809.
Affiliation(s)
- C.J. Mooney, A. Blatt, J. Pascoe, V. Lang, M. Braun, J. Burch, R.T. Stone, University of Rochester School of Medicine and Dentistry; M. Kelly, Massachusetts General Hospital
6. Gordon LB, Zelaya-Floyd M, White P, Hallen S, Varaklis K, Tavakolikashi M. Interprofessional bedside rounding improves quality of feedback to resident physicians. Med Teach 2022;44:907-913. PMID: 35373712. DOI: 10.1080/0142159x.2022.2049735.
Abstract
PURPOSE Obtaining high-quality feedback in residency education is challenging, in part due to limited opportunities for faculty observation of authentic clinical work. This study reviewed the impact of interprofessional bedside rounds ('iPACE™') on the length and quality of faculty narrative evaluations of residents as compared to usual inpatient teaching rounds. METHODS Narrative comments from faculty evaluations of Internal Medicine (IM) residents on both the usual teaching service and the iPACE™ service (spanning 2017-2020) were reviewed and coded using a deductive content analysis approach. RESULTS Six hundred ninety-two narrative evaluations by 63 attendings of 103 residents were included. Evaluations of iPACE™ residents were significantly longer than those of residents on usual teams (109 vs. 69 words, p < 0.001). iPACE™ evaluations contained a higher average occurrence per evaluation of direct observations of patient/family interactions (0.72 vs. 0.32, p < 0.001), references to interprofessionalism (0.17 vs. 0.05, p < 0.001), and specific (3.21 vs. 2.26, p < 0.001), actionable (1.01 vs. 0.69, p < 0.001) and corrective feedback (1.2 vs. 0.88, p = 0.001). CONCLUSIONS This study suggests that the iPACE™ model, which prioritizes interprofessional bedside rounds, had a positive impact on the quantity and quality of feedback, as measured via narrative comments on weekly evaluations.
Affiliation(s)
- Lesley B Gordon
- Tufts University School of Medicine, Boston, MA, USA
- Department of Medicine, Maine Medical Center, Portland, ME, USA
- Patricia White
- Department of Medical Education, Maine Medical Center, Portland, ME, USA
- Sarah Hallen
- Tufts University School of Medicine, Boston, MA, USA
- Division of Geriatrics, Maine Medical Center, Portland, ME, USA
- Kalli Varaklis
- Tufts University School of Medicine, Boston, MA, USA
- Department of Medical Education, Maine Medical Center, Portland, ME, USA
- Department of Obstetrics and Gynecology, Maine Medical Center, Portland, ME, USA
- Motahareh Tavakolikashi
- Department of Medical Education, Maine Medical Center, Portland, ME, USA
- Department of System Science and Industrial Engineering, Binghamton University, Binghamton, NY, USA
7. Macpherson I, Roqué MV, Martín-Sánchez JC, Segarra I. Analysis in the ethical decision-making of dental, nurse and physiotherapist students, through case-based learning. Eur J Dent Educ 2022;26:277-287. PMID: 34085360. DOI: 10.1111/eje.12700.
Abstract
INTRODUCTION Training in ethical competencies is of special interest among the objectives of health education. Dimensions of the person such as integrity, autonomy and dignity influence the choice of interventions, but the different specialties of the health sciences conceive these dimensions from different perspectives depending on the clinical setting. These divergences can be detected during the first years of undergraduate studies, and it is important to know the professional bias and its possible causes. MATERIALS AND METHODS A procedure was developed through case-based learning (CBL) to assess various characteristics of decision-making during the early stages of student training. A semi-quantitative method was designed based on narrative responses to a case with ethical implications in the field of gender violence. The method was applied to 294 undergraduate students in nursing (95), physiotherapy (109) and dentistry (90) from the Faculty of Health Sciences of a Spanish university. A frequency analysis of the students' narrative responses to the proposed case was carried out, using the chi-square test to determine any association between the variables studied: gender, specialty and ethical knowledge. RESULTS Four response categories were detected, resulting from combinations of personal conversation, reporting to a legal authority and requesting the assistance of other teams. The most common option among dentistry students was conversation only, while physiotherapy students included the assistance of other teams; in nursing, a balance was observed between the two. Student responses differed significantly among specialties and also according to test scores on ethical knowledge. However, no significant differences were found between the responses of men and women. CONCLUSION Most of the health sciences students highly valued their own capacity for dialogue and reflection in approaching situations with complex ethical dimensions. We consider that case-based learning, in combination with narrative analysis, is a valid means of evaluating the professional ethical competencies of students in the health sciences.
Affiliation(s)
- Ignacio Macpherson
- Bioethics Unit, Department of Humanities, International University of Catalonia, Sant Cugat del Vallés, Spain
- María Victoria Roqué
- Bioethics Unit, Department of Humanities, International University of Catalonia, Sant Cugat del Vallés, Spain
- Juan Carlos Martín-Sánchez
- Biostatistics Unit, Department of Basic Sciences, International University of Catalonia, Sant Cugat del Vallés, Spain
- Ignacio Segarra
- Department of Pharmacy, Faculty of Health Sciences, Catholic University of Murcia, Murcia, Spain
8. Kelleher M, Kinnear B, Sall DR, Weber DE, DeCoursey B, Nelson J, Klein M, Warm EJ, Schumacher DJ. Warnings in early narrative assessment that might predict performance in residency: signal from an internal medicine residency program. Perspect Med Educ 2021;10:334-340. PMID: 34476730. PMCID: PMC8633188. DOI: 10.1007/s40037-021-00681-w.
Abstract
INTRODUCTION Narrative assessment data are valuable in understanding struggles in resident performance. However, it remains unknown which themes in narrative data occurring early in training may indicate a higher likelihood of struggles later in training, which would allow programs to intervene sooner. METHODS Using learning analytics, we identified 26 internal medicine residents across three cohorts who were below expected entrustment during training. We compiled all narrative data from the first 6 months of training for these residents, as well as for 13 typically performing residents for comparison. Narrative data were blinded for all 39 residents during the initial coding phases of an inductive thematic analysis. RESULTS Many similarities were identified between the two cohorts. Codes that differed between typically performing and lower-entrusted residents were grouped into six themes: three explicit/manifest and three implicit/latent. The explicit/manifest themes focused on specific aspects of resident performance, with assessors describing 1) gaps in attention to detail, 2) communication deficits with patients, and 3) difficulty recognizing the "big picture" in patient care. The three implicit/latent themes focused on how the narrative data were written: 1) feedback described as a deficiency rather than an opportunity to improve, 2) normative comparisons identifying a resident as being behind their peers, and 3) warnings of possible risk to patient care. DISCUSSION Clinical competency committees (CCCs) usually rely on accumulated data and trends. Using these themes while reviewing narrative comments may help CCCs recognize struggling residents earlier and better allocate resources to support residents' development.
Affiliation(s)
- Matthew Kelleher
- Department of Pediatrics, University of Cincinnati College of Medicine, Cincinnati, OH, USA
- Benjamin Kinnear
- Department of Pediatrics, University of Cincinnati College of Medicine, Cincinnati, OH, USA
- Dana R Sall
- HonorHealth Internal Medicine Residency Program, Scottsdale, Arizona, and University of Arizona College of Medicine, Phoenix, AZ, USA
- Danielle E Weber
- Department of Pediatrics, University of Cincinnati College of Medicine, Cincinnati, OH, USA
- Bailey DeCoursey
- Department of Pediatrics, University of Cincinnati College of Medicine, Cincinnati, OH, USA
- Jennifer Nelson
- Department of Pediatrics, University of Cincinnati College of Medicine, Cincinnati, OH, USA
- Melissa Klein
- Department of Pediatrics, University of Cincinnati College of Medicine, Cincinnati, OH, USA
- Eric J Warm
- Department of Internal Medicine, University of Cincinnati College of Medicine, Cincinnati, OH, USA
- Daniel J Schumacher
- Department of Pediatrics, University of Cincinnati College of Medicine, Cincinnati, OH, USA
9. Hartman ND, Manthey DE, Strowd LC, Potisek NM, Vallevand A, Tooze J, Goforth J, McDonough K, Askew KL. Effect of Perceived Level of Interaction on Faculty Evaluations of 3rd Year Medical Students. Med Sci Educ 2021;31:1327-1332. PMID: 34457975. PMCID: PMC8368453. DOI: 10.1007/s40670-021-01307-w.
Abstract
INTRODUCTION Several factors are known to affect the way clinical performance evaluations (CPEs) of medical students are completed by supervising physicians. We sought to explore the effect of faculty-perceived "level of interaction" (LOI) on these evaluations. METHODS Our third-year CPE requires evaluators to identify their perceived LOI with each student as low, moderate, or high. We examined CPEs completed during the 2018-2019 academic year for differences in (1) clinical and professionalism ratings, (2) quality of narrative comments, (3) quantity of narrative comments, and (4) percentage of evaluation questions left unrated. RESULTS A total of 3682 CPEs were included in the analysis. ANOVA revealed statistically significant differences between LOI and clinical ratings (p ≤ .001), with mean ratings from faculty with a high LOI significantly higher than those from faculty with a moderate or low LOI (p ≤ .001). Chi-squared analysis demonstrated differences based on faculty LOI in whether questions were left unrated (p ≤ .001), quantity of narrative comments (p ≤ .001), and specificity of narrative comments (p ≤ .001). CONCLUSIONS Faculty who perceived a higher LOI were more likely to assign the student higher ratings, to complete more of the clinical evaluation, and to provide more specific, higher-quality narrative feedback. SUPPLEMENTARY INFORMATION The online version contains supplementary material available at 10.1007/s40670-021-01307-w.
Affiliation(s)
- Nicholas D. Hartman
- Wake Forest School of Medicine, Medical Center Boulevard, Winston-Salem, NC 27157, USA
- David E. Manthey
- Wake Forest School of Medicine, Medical Center Boulevard, Winston-Salem, NC 27157, USA
- Lindsay C. Strowd
- Wake Forest School of Medicine, Medical Center Boulevard, Winston-Salem, NC 27157, USA
- Nicholas M. Potisek
- Wake Forest School of Medicine, Medical Center Boulevard, Winston-Salem, NC 27157, USA
- Andrea Vallevand
- Wake Forest School of Medicine, Medical Center Boulevard, Winston-Salem, NC 27157, USA
- Janet Tooze
- Wake Forest School of Medicine, Medical Center Boulevard, Winston-Salem, NC 27157, USA
- Jon Goforth
- Wake Forest School of Medicine, Medical Center Boulevard, Winston-Salem, NC 27157, USA
- Kimberly McDonough
- Wake Forest School of Medicine, Medical Center Boulevard, Winston-Salem, NC 27157, USA
- Kim L. Askew
- Wake Forest School of Medicine, Medical Center Boulevard, Winston-Salem, NC 27157, USA
10. Hernandez CA, Daroowalla F, LaRochelle JS, Ismail N, Tartaglia KM, Fagan MJ, Kisielewski M, Walsh K. Determining Grades in the Internal Medicine Clerkship: Results of a National Survey of Clerkship Directors. Acad Med 2021;96:249-255. PMID: 33149085. DOI: 10.1097/acm.0000000000003815.
Abstract
PURPOSE Trust in and comparability of assessments are essential in undergraduate medical education clerkships for many reasons, including ensuring competency in the clinical skills and applied knowledge important for the transition to residency and throughout students' careers. The authors examined how assessments are used to determine internal medicine (IM) core clerkship grades across U.S. medical schools. METHODS A multisection web-based survey of core IM clerkship directors at the 134 U.S. medical schools with membership in Clerkship Directors in Internal Medicine was conducted in October and November 2018. The survey included a section on assessment practices characterizing the grading scales currently used, who determines students' final clerkship grades, the nature/type of summative assessments, and how assessments are weighted. Respondents were also asked about their perceptions of the influence of the National Board of Medical Examiners (NBME) Medicine Subject Examination (MSE) on students' priorities during the clerkship. RESULTS The response rate was 82.1% (110/134). There was considerable variability in the summative assessments and their weighting in determining final grades. The NBME MSE (91.8%), clinical performance (90.9%), professionalism (70.9%), and written notes (60.0%) were the most commonly used assessments. Clinical performance assessments and the NBME MSE accounted for the largest percentages of the total grade (on average 52.8% and 23.5%, respectively). Eighty-seven percent of respondents were concerned that students' focus on NBME MSE performance detracted from patient care learning. CONCLUSIONS There was considerable variability in what IM clerkships assessed and in how those assessments were translated into grades. The NBME MSE was a major contributor to the final grade despite concerns about its impact on patient care learning. These findings underscore the difficulty of comparing learners across institutions and serve to advance discussions of how to improve the accuracy and comparability of grading in the clinical environment.
Affiliation(s)
- Caridad A Hernandez
- C.A. Hernandez is professor of medicine, Departments of Internal Medicine and Medical Education, University of Central Florida College of Medicine, Orlando, Florida
- Feroza Daroowalla
- F. Daroowalla is associate professor of medicine, Department of Medical Education, and Internal Medicine Clerkship Director, University of Central Florida College of Medicine, Orlando, Florida
- Jeffrey S LaRochelle
- J.S. LaRochelle is professor of medicine, Department of Medical Education, and assistant dean of medical education, University of Central Florida College of Medicine, Orlando, Florida
- Nadia Ismail
- N. Ismail is associate professor of medicine, Department of Medicine, and associate dean, curriculum, Baylor College of Medicine, Houston, Texas
- Kimberly M Tartaglia
- K.M. Tartaglia is associate professor of clinical medicine and pediatrics, Division of Hospital Medicine, The Ohio State University, Columbus, Ohio
- Mark J Fagan
- M.J. Fagan is professor of medicine emeritus, Department of Medicine, Alpert Medical School of Brown University, Providence, Rhode Island
- Michael Kisielewski
- M. Kisielewski is Surveys and Research Manager, Alliance for Academic Internal Medicine, Alexandria, Virginia
- Katherine Walsh
- K. Walsh is associate professor of clinical internal medicine, Division of Hematology, and Internal Medicine Inpatient Clerkship Director, The Ohio State University, Columbus, Ohio
11. Tavares W, Kuper A, Kulasegaram K, Whitehead C. The compatibility principle: on philosophies in the assessment of clinical competence. Adv Health Sci Educ Theory Pract 2020;25:1003-1018. PMID: 31677146. DOI: 10.1007/s10459-019-09939-9.
Abstract
The array of different philosophical positions underlying contemporary views on competence, assessment strategies and justification has led to advances in assessment science. Challenges may arise when these philosophical positions are not considered in assessment design. These can include (a) logical incompatibility leading to varied or difficult interpretations of assessment results, (b) an "anything goes" approach, and (c) uncertainty regarding when and in what context various philosophical positions are appropriate. We propose a compatibility principle that recognizes that different philosophical positions commit assessors and assessment researchers to particular ideas, assumptions and commitments, and that applies a logic of philosophically informed, assessment-based inquiry. Assessment is optimized when its underlying philosophical position produces congruent, aligned and coherent views on constructs, assessment strategies, justification and their interpretations. As a way forward we argue that (a) there can and should be variability in the philosophical positions used in assessment, and these should be clearly articulated to promote understanding of assumptions and make sense of justifications; (b) we should focus on developing the merits, boundaries and relationships within and between philosophical positions in assessment; (c) we should examine a core set of principles related to the role and relevance of philosophical positions; (d) we should elaborate strategies and criteria to delineate the compatible from the incompatible; and (e) we should broaden knowledge and competencies related to these issues. The broadened use of philosophical positions in assessment in the health professions affects the "state of play" and, if unexamined, can undermine assessment programs; this may be overcome with attention to the alignment between underlying assumptions and commitments.
Affiliation(s)
- Walter Tavares
- The Wilson Centre, Department of Medicine, University of Toronto/University Health Network, 200 Elizabeth Street, 1ES-565, Toronto, ON, M5G 2C4, Canada.
- Post-MD Education (Post-Graduate Medical Education/Continued Professional Development), University of Toronto, Toronto, ON, Canada.
- Ayelet Kuper
- The Wilson Centre, Department of Medicine, University of Toronto/University Health Network, 200 Elizabeth Street, 1ES-565, Toronto, ON, M5G 2C4, Canada
- Division of General Internal Medicine, Sunnybrook Health Sciences Centre, Toronto, Canada
- Department of Medicine, Faculty of Medicine, University of Toronto, Toronto, Canada
- Kulamakan Kulasegaram
- The Wilson Centre, Department of Medicine, University of Toronto/University Health Network, 200 Elizabeth Street, 1ES-565, Toronto, ON, M5G 2C4, Canada
- Department of Family and Community Medicine, Women's College Hospital, Faculty of Medicine, University of Toronto, Toronto, Canada
- MD Program, University of Toronto, Toronto, Canada
- Cynthia Whitehead
- The Wilson Centre, Department of Medicine, University of Toronto/University Health Network, 200 Elizabeth Street, 1ES-565, Toronto, ON, M5G 2C4, Canada
- Department of Family and Community Medicine, Women's College Hospital, Faculty of Medicine, University of Toronto, Toronto, Canada
12
Dory V, Cummings BA, Mondou M, Young M. Nudging clinical supervisors to provide better in-training assessment reports. PERSPECTIVES ON MEDICAL EDUCATION 2020; 9:66-70. [PMID: 31848999 PMCID: PMC7012977 DOI: 10.1007/s40037-019-00554-3] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/01/2023]
Abstract
INTRODUCTION In-training assessment reports (ITARs) summarize assessment during a clinical placement to inform decision-making and provide formal feedback to learners. Faculty development is an effective but resource-intensive means of improving the quality of completed ITARs. We examined whether the quality of completed ITARs could be improved by 'nudges' from the format of ITAR forms. METHODS Our first intervention consisted of placing the section for narrative comments at the beginning of the form, and using prompts for recommendations (Do more, Keep doing, Do less, Stop doing). In a second intervention, we provided a hyperlink to a detailed assessment rubric and shortened the checklist section. We analyzed a sample of 360 de-identified completed ITARs from six disciplines across the three academic years where the different versions of the ITAR were used. Two raters independently scored the ITARs using the Completed Clinical Evaluation Report Rating (CCERR) scale. We tested for differences between versions of the ITAR forms using a one-way ANOVA for the total CCERR score, and MANOVA for the nine CCERR item scores. RESULTS Changes to the form structure (nudges) improved the quality of information generated as measured by the CCERR instrument, from a total score of 18.0/45 (SD 2.6) to 18.9/45 (SD 3.1) and 18.8/45 (SD 2.6), p = 0.04. Specifically, comments were more balanced, more detailed, and more actionable compared with the original ITAR. DISCUSSION Nudge interventions, which are inexpensive and feasible, should be included in multipronged approaches to improve the quality of assessment reports.
Affiliation(s)
- Valérie Dory
- Department of Medicine and Centre for Medical Education; Faculty of Medicine, McGill University, Montreal, QC, Canada.
- Beth-Ann Cummings
- Undergraduate Medical Education, Department of Medicine, and Institute of Health Sciences Education; Faculty of Medicine, McGill University, Montreal, QC, Canada
- Mélanie Mondou
- Department of Medicine and Institute of Health Sciences Education; Faculty of Medicine, McGill University, Montreal, QC, Canada
- Meredith Young
- Department of Medicine and Institute of Health Sciences Education; Faculty of Medicine, McGill University, Montreal, QC, Canada
13
Pearce J. In defence of constructivist, utility-driven psychometrics for the 'post-psychometric era'. MEDICAL EDUCATION 2020; 54:99-102. [PMID: 31867758 DOI: 10.1111/medu.14039] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/11/2023]
Affiliation(s)
- Jacob Pearce
- Australian Council for Educational Research - Assessment and Psychometric Research, Camberwell, Victoria, Australia
14
Kelly MS, Mooney CJ, Rosati JF, Braun MK, Thompson Stone R. Education Research: The Narrative Evaluation Quality Instrument: Development of a tool to assess the assessor. Neurology 2020; 94:91-95. [PMID: 31932402 DOI: 10.1212/wnl.0000000000008794] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022] Open
Abstract
OBJECTIVE Determining the quality of narrative evaluations to assess medical student neurology clerkship performance remains a challenge. This study sought to develop a tool to comprehensively and systematically assess quality of student narrative evaluations. METHODS The Narrative Evaluation Quality Instrument (NEQI) was created to assess several components within clerkship narrative evaluations: performance domains, specificity, and usefulness to learner. In this retrospective study, 5 investigators scored 123 narrative evaluations using the NEQI. Inter-rater reliability was estimated by calculating intraclass correlation coefficients (ICC) across 615 NEQI scores. RESULTS The average overall NEQI score was 6.4 (SD 2.9), with mean component arm scores of 2.6 for performance domains (SD 0.9), 1.8 for specificity (SD 1.1), and 2.0 for usefulness (SD 1.4). Each component arm exhibited moderate reliability: performance domains ICC 0.65 (95% confidence interval [CI] 0.58-0.72), specificity ICC 0.69 (95% CI 0.61-0.77), and usefulness ICC 0.73 (95% CI 0.66-0.80). Overall NEQI score exhibited good reliability (0.81; 95% CI 0.77-0.86). CONCLUSION The NEQI is a novel, reliable tool to comprehensively assess the quality of narrative evaluation of neurology clerks and will enhance the study of interventions seeking to improve clerkship evaluation.
Affiliation(s)
- Michael S Kelly
- From the Department of Neurology (R.T.S., J.R., M.B.), University of Rochester School of Medicine and Dentistry (C.M., M.K.), NY
- Christopher J Mooney
- From the Department of Neurology (R.T.S., J.R., M.B.), University of Rochester School of Medicine and Dentistry (C.M., M.K.), NY
- Justin F Rosati
- From the Department of Neurology (R.T.S., J.R., M.B.), University of Rochester School of Medicine and Dentistry (C.M., M.K.), NY
- Melanie K Braun
- From the Department of Neurology (R.T.S., J.R., M.B.), University of Rochester School of Medicine and Dentistry (C.M., M.K.), NY
- Robert Thompson Stone
- From the Department of Neurology (R.T.S., J.R., M.B.), University of Rochester School of Medicine and Dentistry (C.M., M.K.), NY
15
Woodruff JN. Accounting for complexity in medical education: a model of adaptive behaviour in medicine. MEDICAL EDUCATION 2019; 53:861-873. [PMID: 31106901 DOI: 10.1111/medu.13905] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/08/2018] [Revised: 02/07/2019] [Accepted: 04/04/2019] [Indexed: 06/09/2023]
Abstract
CONTEXT Medicine is practised in complex systems. Physicians engage in clinical and operational problems that are dynamic and lack full transparency. As a consequence, the behaviour of medical systems and diseases is often unpredictable. Medical science has equipped physicians with powerful tools to favourably impact health, but a reductionist approach alone is insufficient to optimally address the complex challenges posed by illness and public health. Concepts from complexity science, such as continuous quality improvement and teamwork, strive to fill the gap between biomedical knowledge and the realities of practice. However, the superficial treatment of these systems-thinking concepts in medical education has distorted their implementation and undermined their impact. 'Systems thinking' has been conflated with 'systematic thinking'; concepts which are adaptive in nature are being taught as standardised, reductionist tools. METHODS Using concepts from complexity science, the history of science and psychology, this problem is outlined and a theoretical model of professional development is proposed. RESULTS This model proposes that complex problem solving and adaptive behaviour, not technical expertise, are distinguishing features of professionalism. DISCUSSION The impact of this model on our understanding of physician autonomy, professionalism, teamwork and continuous quality improvement is discussed. This model has significant implications for the structure and content of medical education. Strategies for enhancing medical training, including interventions in recruitment, the curriculum and evaluation, are reviewed. Such adjustments would prepare trainees to more effectively utilise biomedical knowledge and tools in the complex high-stakes reality of medical practice.
Affiliation(s)
- James N Woodruff
- Department of Medicine, The University of Chicago, Chicago, Illinois, USA
- The Pritzker School of Medicine, Chicago, Illinois, USA
16
Wilby KJ, Govaerts MJB, Dolmans DHJM, Austin Z, van der Vleuten C. Reliability of narrative assessment data on communication skills in a summative OSCE. PATIENT EDUCATION AND COUNSELING 2019; 102:1164-1169. [PMID: 30711383 DOI: 10.1016/j.pec.2019.01.018] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/16/2018] [Revised: 12/20/2018] [Accepted: 01/24/2019] [Indexed: 06/09/2023]
Abstract
OBJECTIVE To quantitatively estimate the reliability of narrative assessment data regarding student communication skills obtained from a summative OSCE and to compare reliability to that of communication scores obtained from direct observation. METHODS Narrative comments and communication scores (scale 1-5) were obtained for 14 graduating pharmacy students across 6 summative OSCE stations with 2 assessors per station who directly observed student performance. Two assessors who had not observed the OSCE reviewed narratives and independently scored communication skills according to the same 5-point scale. Generalizability theory was used to estimate reliability. Correlation was used to evaluate the relationship between scores from each assessment method. RESULTS A total of 168 narratives and communication scores were obtained. The G-coefficients were 0.571 for scores provided by assessors present during the OSCE and 0.612 for scores from assessors who provided scores based on narratives only. Correlation between the two sets of scores was 0.5. CONCLUSION Reliability of communication scores is not dependent on whether assessors directly observe student performance or assess written narratives, yet both conditions appear to measure communication skills somewhat differently. PRACTICE IMPLICATIONS Narratives may be useful for summative decision-making and help overcome the current limitations of using solely quantitative scores.
Affiliation(s)
- Kyle John Wilby
- College of Pharmacy, Qatar University, PO Box 2713, Doha, Qatar.
- Marjan J B Govaerts
- School of Health Professions Education, Faculty of Health, Medicine and Life Sciences, Maastricht University, Universiteitssingel 60, 6229 ER Maastricht, Netherlands
- Diana H J M Dolmans
- School of Health Professions Education, Faculty of Health, Medicine and Life Sciences, Maastricht University, Universiteitssingel 60, 6229 ER Maastricht, Netherlands
- Zubin Austin
- Leslie Dan Faculty of Pharmacy, University of Toronto, 144 College St., Toronto ON, M5S 3M2, Canada
- Cees van der Vleuten
- School of Health Professions Education, Faculty of Health, Medicine and Life Sciences, Maastricht University, Universiteitssingel 60, 6229 ER Maastricht, Netherlands
17
Khodadadi R, Herrera LN, Schmit EO, Williams W, Estrada C, Zinski A. Identifying High-Performing Students in Inpatient Clerkships: A Qualitative Study. MEDICAL SCIENCE EDUCATOR 2019; 29:199-204. [PMID: 34457468 PMCID: PMC8368919 DOI: 10.1007/s40670-018-00667-0] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
OBJECTIVES Examine fundamental behaviors and characteristics that attending physicians in inpatient settings utilize to identify high-performing clerkship students. METHODS We employed written comment data from a cross-sectional survey of Internal Medicine and Pediatrics attending physicians at a single academic medical center in the southern USA. Free-text responses regarding factors that faculty consider when assigning honors grades were analyzed by four trained researchers (interrater agreement 0.87) using conventional content analysis to identify themes. RESULTS Seventy-nine of 141 (56%) attending physicians who were surveyed provided 90 comments. Four major theme areas for recognizing higher performing clerkship students were identified: Taking Ownership of Patient Care (35%), Medical Knowledge and Clinical Reasoning (20%), Team Orientation (15%), and Awareness of Opportunities for Growth and Progress (13%). CONCLUSION Internal Medicine and Pediatric attending physicians identified characteristics that contributed to four themes in the determination of a high-performing medical student. These findings are particularly salient, as they highlight that commitment to patients, application of clinical knowledge and skills, teamwork, and awareness of growth and progress are valued by attending physicians for identifying top performing students in inpatient settings.
Affiliation(s)
- Ryan Khodadadi
- Department of Medicine, Mayo Clinic, 200 First Street SW, Rochester, MN 55905 USA
- University of Alabama at Birmingham School of Medicine, Birmingham, AL USA
- Lauren Nicholas Herrera
- University of Alabama at Birmingham School of Medicine, Birmingham, AL USA
- Department of Medicine, Baylor College of Medicine, One Baylor Plaza, Houston, TX 77030 USA
- Erinn O. Schmit
- Pediatric Hospital Medicine, Department of Pediatrics, University of Alabama at Birmingham (UAB) School of Medicine, 1600 7th Avenue South, Birmingham, AL 35233-1771 USA
- Winter Williams
- General Internal Medicine, Department of Medicine, University of Alabama at Birmingham (UAB) School of Medicine and Birmingham Veterans Affairs Medical Center, 510 20th St S #720B, Birmingham, AL 35233 USA
- Carlos Estrada
- General Internal Medicine, Department of Medicine, University of Alabama at Birmingham (UAB) School of Medicine and Birmingham Veterans Affairs Medical Center, 510 20th St S #720B, Birmingham, AL 35233 USA
- Anne Zinski
- Department of Medical Education, University of Alabama at Birmingham (UAB), 1670 University Blvd, Birmingham, AL 35233 USA
18
Richards CA. Comments on "numerical versus narrative: A comparison between methods to measure medical student performance during clinical clerkships". MEDICAL TEACHER 2018; 40:428-429. [PMID: 29094637 DOI: 10.1080/0142159x.2017.1393055] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/07/2023]