1
McGuire N, Acai A, Sonnadara RR. The McMaster Narrative Comment Rating Tool: Development and Initial Validity Evidence. Teaching and Learning in Medicine. 2023:1-13. PMID: 37964518. DOI: 10.1080/10401334.2023.2276799. Received 09/03/2022; accepted 10/05/2023.
Abstract
CONSTRUCT: The McMaster Narrative Comment Rating Tool aims to capture critical features reflecting the quality of written narrative comments provided in the medical education context: valence/tone of language, degree of correction versus reinforcement, specificity, actionability, and overall usefulness.
BACKGROUND: Despite their role in competency-based medical education, not all narrative comments contribute meaningfully to the development of learners' competence. To develop solutions to mitigate this problem, robust measures of narrative comment quality are needed. While some tools exist, most were created in specialty-specific contexts, have focused on one or two features of feedback, or have focused on faculty perceptions of feedback, excluding learners from the validation process. In this study, we aimed to develop a detailed, broadly applicable narrative comment quality assessment tool that drew upon features of high-quality assessment and feedback and could be used by a variety of raters to inform future research, including applications related to automated analysis of narrative comment quality.
APPROACH: In Phase 1, we used the literature to identify five critical features of feedback. We then developed rating scales for each feature and collected 670 competency-based assessments completed by first-year surgical residents during their first six weeks of training. Residents came from nine different programs at a Canadian institution. In Phase 2, we randomly selected 50 assessments with written feedback from the dataset. Two education researchers used the scale to independently score the written comments and refine the rating tool. In Phase 3, 10 raters (two medical education researchers, two medical students, two residents, two clinical faculty members, and two laypersons from the community) used the tool to independently and blindly rate written comments from another 50 randomly selected assessments from the dataset. We compared scores between and across rater pairs to assess reliability.
FINDINGS: Single- and average-measures intraclass correlation (ICC) scores ranged from moderate to excellent (ICCs = .51-.83 and .91-.98, respectively) across all categories and rater pairs. All tool domains were significantly correlated (all p < .05), apart from valence, which was significantly correlated only with degree of correction versus reinforcement.
CONCLUSION: Our findings suggest that the McMaster Narrative Comment Rating Tool can be used reliably by multiple raters, across a variety of rater types, and in different surgical contexts. As such, it has the potential to support faculty development initiatives on assessment and feedback, and may be used to conduct research on different assessment strategies, including automated analysis of narrative comments.
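The reliability analysis above rests on single- and average-measures intraclass correlations. As a hypothetical illustration (not the authors' code), a two-way random-effects ICC with absolute agreement can be computed directly from a subjects-by-raters score matrix:

```python
import numpy as np

def icc_two_way(ratings):
    """ICC(2,1) and ICC(2,k): two-way random effects, absolute agreement.

    ratings: (n_subjects, k_raters) array of scores.
    Returns (single_measures_icc, average_measures_icc).
    """
    ratings = np.asarray(ratings, dtype=float)
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)   # per-subject means
    col_means = ratings.mean(axis=0)   # per-rater means
    # Partition total sum of squares into subject, rater, and residual parts
    ss_rows = k * ((row_means - grand) ** 2).sum()
    ss_cols = n * ((col_means - grand) ** 2).sum()
    ss_err = ((ratings - grand) ** 2).sum() - ss_rows - ss_cols
    msr = ss_rows / (n - 1)                 # between-subject mean square
    msc = ss_cols / (k - 1)                 # between-rater mean square
    mse = ss_err / ((n - 1) * (k - 1))      # residual mean square
    single = (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
    average = (msr - mse) / (msr + (msc - mse) / n)
    return single, average
```

For perfectly agreeing raters both values are 1.0; disagreement between raters pushes the single-measures value down faster than the average-measures value, which matches the pattern of ranges reported above.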
Affiliation(s)
- Natalie McGuire
- Office of Professional Development and Educational Scholarship, Queen's University, Kingston, Ontario, Canada
- Anita Acai
- Department of Psychiatry and Behavioural Neurosciences and McMaster Education Research, Innovation and Theory (MERIT) Program, McMaster University, and St. Joseph's Education Research Centre (SERC), St. Joseph's Healthcare Hamilton, Hamilton, Canada
- Ranil R Sonnadara
- Office of Education Science, Department of Surgery, McMaster University, Hamilton, Ontario, Canada
2
Mooney CJ, Stone RT, Wang L, Blatt AE, Pascoe JM, Lang VJ. Examining Generalizability of Faculty Members' Narrative Assessments. Academic Medicine. 2023;98:S210. PMID: 37983456. DOI: 10.1097/acm.0000000000005417.
Affiliation(s)
- Christopher J Mooney
- Author affiliations: C.J. Mooney, R.T. Stone, L. Wang, A.E. Blatt, J.M. Pascoe, V.J. Lang, University of Rochester School of Medicine and Dentistry
3
Chakroun M, Dion VR, Ouellet K, Graillon A, Désilets V, Xhignesse M, St-Onge C. Quality of Narratives in Assessment: Piloting a List of Evidence-Based Quality Indicators. Perspectives on Medical Education. 2023;12:XX. PMID: 37252269. PMCID: PMC10215990. DOI: 10.5334/pme.925. Received 02/06/2023; accepted 05/12/2023.
Abstract
BACKGROUND & NEED FOR INNOVATION: Appraising the quality of narratives used in assessment is challenging for educators and administrators. Although some quality indicators for writing narratives exist in the literature, they remain context-specific and are not always sufficiently operational to be easily used. Creating a tool that gathers applicable quality indicators, and ensuring its standardized use, would equip assessors to appraise the quality of narratives.
STEPS TAKEN FOR DEVELOPMENT AND IMPLEMENTATION OF INNOVATION: We used DeVellis' framework to develop a checklist of evidence-informed indicators for quality narratives. Two team members independently piloted the checklist using four series of narratives from three different sources. After each series, team members documented their agreement and reached a consensus. We calculated frequencies of occurrence for each quality indicator, as well as interrater agreement, to assess the standardized application of the checklist.
OUTCOMES OF INNOVATION: We identified seven quality indicators and applied them to narratives. Frequencies of quality indicators ranged from 0% to 100%. Interrater agreement ranged from 88.7% to 100% across the four series.
CRITICAL REFLECTION: Although we achieved a standardized application of a list of quality indicators for narratives used in health sciences education, users may still need training to write good-quality narratives. We also noted that some quality indicators occurred less frequently than others, and we offer some reflections on this finding.
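The interrater agreement reported here is a simple percent agreement across checklist items. A minimal sketch (hypothetical, not the authors' implementation), assuming each rater records a binary present/absent judgment per quality indicator:

```python
def percent_agreement(rater_a, rater_b):
    """Percentage of items on which two raters make the same judgment.

    rater_a, rater_b: equal-length sequences of judgments (e.g. 1 = indicator
    present, 0 = absent), one entry per checklist item.
    """
    if len(rater_a) != len(rater_b):
        raise ValueError("raters must judge the same set of items")
    matches = sum(a == b for a, b in zip(rater_a, rater_b))
    return 100.0 * matches / len(rater_a)
```

Unlike chance-corrected statistics such as Cohen's kappa, raw percent agreement can look high simply because some indicators are rare, which is one reason the 0%-100% indicator frequencies above are worth reporting alongside it.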
Affiliation(s)
- Molk Chakroun
- Faculty of Medicine and Health Sciences, Université de Sherbrooke, Sherbrooke, Québec, Canada
- Vincent R. Dion
- Faculty of Medicine and Health Sciences, Université de Sherbrooke, Sherbrooke, Québec, Canada
- Kathleen Ouellet
- Paul Grand'Maison de la Société des médecins de l'Université de Sherbrooke Research Chair in Medical Education, Sherbrooke, Québec, Canada
- Ann Graillon
- Centre de pédagogie et des sciences de la santé, Faculty of Medicine and Health Sciences, Université de Sherbrooke, Sherbrooke, Québec, Canada
- Valérie Désilets
- Department of Pediatrics, Faculty of Medicine and Health Sciences, Université de Sherbrooke, Sherbrooke, Québec, Canada
- Marianne Xhignesse
- Department of Family and Emergency Medicine, Faculty of Medicine and Health Sciences, Université de Sherbrooke, Sherbrooke, Québec, Canada
- Christina St-Onge
- Department of Medicine, Faculty of Medicine and Health Sciences, Université de Sherbrooke, and Paul Grand'Maison de la Société des médecins de l'Université de Sherbrooke Research Chair in Medical Education, Sherbrooke, Québec, Canada
4
Gutierrez M, Wilson K, Bickford B, Yuhas J, Markert R, Burtson KM. Novel In-Training Evaluation Report in an Internal Medicine Residency Program: Improving the Quality of the Narrative Assessment. Journal of Medical Education and Curricular Development. 2023;10:23821205231206058. PMID: 37822780. PMCID: PMC10563452. DOI: 10.1177/23821205231206058. Received 05/09/2022; accepted 09/20/2023.
Abstract
OBJECTIVE: To determine whether incorporating our novel in-training evaluation report (ITER), which prompts each resident to list at least three self-identified learning goals, improved the quality of narrative assessments as measured by the Narrative Evaluation Quality Instrument (NEQI).
METHODS: A total of 1468 narrative assessments from a single institution from 2017 to 2021 were deidentified, compiled, and sorted into pre-intervention and post-intervention arms. Due to limitations in our residency management suite, incorporating learning goals required switching from an electronic form to a hand-delivered paper form. Comments were graded by two research personnel using the NEQI's 0-12 scale, with 12 representing maximum comment quality. The primary outcome was the mean difference in NEQI score between the electronic pre-intervention period and the paper post-intervention period.
RESULTS: The mean NEQI score was 2.43 ± 3.34 in the pre-intervention period and 3.31 ± 1.71 in the post-intervention period, a mean difference of 0.88 (p < 0.001). In the pre-intervention period, 46% of evaluations were submitted without a narrative assessment (scored as zero), versus 1% in the post-intervention period. Internal consistency reliability, measured by Ebel's intraclass correlation coefficient (ICC), showed high agreement between the two raters (ICC = 0.92).
CONCLUSIONS: Our findings suggest that implementing a timely, hand-delivered paper ITER that incorporates resident learning goals can lead to higher-quality narrative assessments overall.
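The between-period comparison above comes down to a difference in mean NEQI scores across two independent samples. A minimal sketch of that arithmetic (illustrative only; the abstract does not report which test statistic the authors used, so Welch's t is an assumption here):

```python
from statistics import mean, variance

def mean_diff_welch_t(pre, post):
    """Mean difference (post - pre) and Welch's t statistic.

    pre, post: sequences of per-evaluation scores (e.g. NEQI, 0-12).
    Welch's t does not assume equal variances, which matters here since the
    reported SDs differ markedly (3.34 vs. 1.71).
    """
    n_pre, n_post = len(pre), len(post)
    v_pre, v_post = variance(pre), variance(post)  # sample variances
    diff = mean(post) - mean(pre)
    t = diff / (v_pre / n_pre + v_post / n_post) ** 0.5
    return diff, t
```

With the full per-evaluation score lists, the 0.88 mean difference reported above would fall out of `diff`; the p-value additionally requires the Welch-Satterthwaite degrees of freedom and a t distribution.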
Affiliation(s)
- Marc Gutierrez
- Internal Medicine Program, Affiliated with Wright Patterson AFB and Wright State University, Wright-Patterson AFB, OH, USA
- Kelsey Wilson
- Internal Medicine Program, Affiliated with Wright Patterson AFB and Wright State University, Wright-Patterson AFB, OH, USA
- Brant Bickford
- Internal Medicine Program, Affiliated with Wright Patterson AFB and Wright State University, Wright-Patterson AFB, OH, USA
- Joseph Yuhas
- Internal Medicine Program, Affiliated with Wright Patterson AFB and Wright State University, Wright-Patterson AFB, OH, USA
- Ronald Markert
- Department of Internal Medicine and Neurology, Affiliated with Wright State University, Dayton, OH, USA
- Kathryn M Burtson
- Internal Medicine Program, Affiliated with Wright Patterson AFB, Boonshoft School of Medicine and Wright State University, Wright-Patterson AFB, OH 45433, USA
5
Mooney CJ, Pascoe JM, Blatt AE, Lang VJ, Kelly MS, Braun MK, Burch JE, Stone RT. Predictors of faculty narrative evaluation quality in medical school clerkships. Medical Education. 2022;56:1223-1231. PMID: 35950329. DOI: 10.1111/medu.14911. Received 05/02/2022; revised 08/01/2022; accepted 08/08/2022.
Abstract
INTRODUCTION: Narrative approaches to assessment provide meaningful and valid representations of trainee performance. Yet narratives are frequently perceived as vague, nonspecific, and low quality. To date, there is little research examining factors associated with narrative evaluation quality, particularly in undergraduate medical education. The purpose of this study was to examine associations of faculty- and student-level characteristics with the quality of faculty members' narrative evaluations of clerkship students.
METHODS: The authors reviewed faculty narrative evaluations of 50 students' clinical performance in their inpatient medicine and neurology clerkships, yielding 165 and 87 unique evaluations in the respective clerkships. They evaluated narrative quality using the Narrative Evaluation Quality Instrument (NEQI) and used linear mixed-effects modelling to predict total NEQI score. Explanatory covariates included time to evaluation completion, number of weeks spent with the student, faculty total weeks on service per year, total faculty years in clinical education, student gender, faculty gender, and an interaction term between student and faculty gender.
RESULTS: Higher narrative evaluation quality was significantly associated with a shorter time to evaluation completion, with NEQI scores decreasing by approximately 0.3 points for every 10 days following students' rotations (p = .004). Additionally, women faculty wrote significantly higher-quality narrative evaluations, with NEQI scores 1.92 points greater than those of men faculty (p = .012). No other covariates were significant.
CONCLUSIONS: The quality of faculty members' narrative evaluations of medical students was associated with time to evaluation completion and faculty gender, but not with faculty experience in clinical education, faculty weeks on service, or the amount of time spent with students. These findings advance understanding of ways to improve the quality of narrative evaluations, which is imperative given assessment models that will increase the volume of, and reliance on, narratives.
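The reported time-to-completion effect (roughly 0.3 NEQI points lost per 10 days) is a regression slope. The study used linear mixed-effects models to account for repeated evaluations per faculty member and student; as a simplified sketch of just the fixed-effect trend, an ordinary least-squares slope on hypothetical data looks like this:

```python
def ols_slope(days, scores):
    """Least-squares slope of score on predictor (e.g. NEQI score vs.
    days from end of rotation to evaluation completion).

    days, scores: equal-length sequences of paired observations.
    """
    n = len(days)
    mx = sum(days) / n
    my = sum(scores) / n
    # slope = covariance(x, y) / variance(x), up to a common 1/(n-1) factor
    sxy = sum((x - mx) * (y - my) for x, y in zip(days, scores))
    sxx = sum((x - mx) ** 2 for x in days)
    return sxy / sxx
```

A slope of -0.03 points per day corresponds to the reported -0.3 points per 10 days; the mixed-effects model additionally adds random intercepts so that prolific or lenient evaluators do not distort the trend.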
Affiliation(s)
- Christopher J Mooney
- School of Medicine and Dentistry, University of Rochester, Rochester, New York, USA
- Jennifer M Pascoe
- School of Medicine and Dentistry, University of Rochester, Rochester, New York, USA
- Amy E Blatt
- School of Medicine and Dentistry, University of Rochester, Rochester, New York, USA
- Valerie J Lang
- School of Medicine and Dentistry, University of Rochester, Rochester, New York, USA
- Melanie K Braun
- School of Medicine and Dentistry, University of Rochester, Rochester, New York, USA
- Jaclyn E Burch
- School of Medicine and Dentistry, University of Rochester, Rochester, New York, USA
6
Chakroun M, Dion VR, Ouellet K, Graillon A, Désilets V, Xhignesse M, St-Onge C. Narrative Assessments in Higher Education: A Scoping Review to Identify Evidence-Based Quality Indicators. Academic Medicine. 2022;97:1699-1706. PMID: 35612917. DOI: 10.1097/acm.0000000000004755.
Abstract
PURPOSE: Narrative comments are increasingly used in assessment to document trainees' performance and to make important decisions about academic progress. However, little is known about how to document the quality of narrative comments, since traditional psychometric analysis cannot be applied. The authors aimed to generate a list of quality indicators for narrative comments, to identify recommendations for writing high-quality narrative comments, and to document factors that influence the quality of narrative comments used in assessments in higher education.
METHOD: The authors conducted a scoping review according to Arksey & O'Malley's framework. The search strategy yielded 690 articles from 6 databases. Team members screened abstracts for inclusion and exclusion, then extracted numerical and qualitative data based on predetermined categories. Numerical data were used for descriptive analysis. The authors completed the thematic analysis of qualitative data with iterative discussions until they achieved consensus on the interpretation of the results.
RESULTS: After the full-text review of 213 selected articles, 47 were included. Through the thematic analysis, the authors identified 7 quality indicators, 12 recommendations for writing quality narratives, and 3 factors that influence the quality of narrative comments used in assessment. The 7 quality indicators are (1) describes performance with a focus on particular elements (attitudes, knowledge, skills); (2) provides a balanced message between positive elements and elements needing improvement; (3) provides recommendations to learners on how to improve their performance; (4) compares the observed performance with an expected standard of performance; (5) provides justification for the mark/score given; (6) uses language that is clear and easily understood; and (7) uses a nonjudgmental style.
CONCLUSIONS: Assessors can use these quality indicators and recommendations to write high-quality narrative comments, thus reinforcing the appropriate documentation of trainees' performance, facilitating solid decision making about trainees' progression, and enhancing the impact of narrative feedback for both learners and programs.
Affiliation(s)
- Molk Chakroun
- M. Chakroun is a PhD student, Faculty of Medicine and Health Sciences, Université de Sherbrooke, Sherbrooke, Québec, Canada; ORCID: https://orcid.org/0000-0002-0518-1782
- Vincent R Dion
- V.R. Dion was research assistant, Paul Grand'Maison de la Société des médecins de l'Université de Sherbrooke Research Chair in Medical Education, at the time of this work, and is now a first-year medical student, Université de Sherbrooke, Sherbrooke, Québec, Canada
- Kathleen Ouellet
- K. Ouellet is research coordinator, Centre de pédagogie et des sciences de la santé, Faculty of Medicine and Health Sciences, Université de Sherbrooke, Sherbrooke, Québec, Canada; ORCID: https://orcid.org/0000-0001-9829-151X
- Ann Graillon
- A. Graillon is associate professor, Department of Pediatrics, Faculty of Medicine and Health Sciences, Université de Sherbrooke, Sherbrooke, Québec, Canada; ORCID: https://orcid.org/0000-0003-3677-7113
- Valérie Désilets
- V. Désilets is associate professor, Department of Pediatrics, Faculty of Medicine and Health Sciences, Université de Sherbrooke, Sherbrooke, Québec, Canada; ORCID: https://orcid.org/0000-0001-7399-119X
- Marianne Xhignesse
- M. Xhignesse is full professor, Department of Family and Emergency Medicine, Faculty of Medicine and Health Sciences, Université de Sherbrooke, Sherbrooke, Québec, Canada; ORCID: https://orcid.org/0000-0002-3257-5912
- Christina St-Onge
- C. St-Onge is full professor, Department of Medicine, Faculty of Medicine and Health Sciences, Université de Sherbrooke, and holds the Paul Grand'Maison de la Société des médecins de l'Université de Sherbrooke Research Chair in Medical Education, Sherbrooke, Québec, Canada; ORCID: https://orcid.org/0000-0001-5313-0456