1. Mooney CJ, Pascoe JM, Blatt AE, Lang VJ, Kelly MS, Braun MK, Burch JE, Stone RT. Predictors of faculty narrative evaluation quality in medical school clerkships. Medical Education 2022;56:1223-1231. [PMID: 35950329] [DOI: 10.1111/medu.14911]
Abstract
INTRODUCTION: Narrative approaches to assessment provide meaningful and valid representations of trainee performance. Yet narratives are frequently perceived as vague, nonspecific, and of low quality. To date, there is little research examining factors associated with narrative evaluation quality, particularly in undergraduate medical education. The purpose of this study was to examine associations of faculty- and student-level characteristics with the quality of faculty members' narrative evaluations of clerkship students.
METHODS: The authors reviewed faculty narrative evaluations of 50 students' clinical performance in their inpatient medicine and neurology clerkships, yielding 165 and 87 unique evaluations, respectively. Narrative quality was rated using the Narrative Evaluation Quality Instrument (NEQI), and linear mixed effects modelling was used to predict total NEQI score. Explanatory covariates were time to evaluation completion, number of weeks spent with the student, faculty total weeks on service per year, total faculty years in clinical education, student gender, faculty gender, and a student-by-faculty gender interaction term.
RESULTS: Significantly higher narrative evaluation quality was associated with a shorter time to evaluation completion, with NEQI scores decreasing by approximately 0.3 points for every 10 days following students' rotations (p = .004). Additionally, women faculty wrote significantly higher quality narrative evaluations, with NEQI scores 1.92 points greater than those of men faculty (p = .012). No other covariates were significant.
CONCLUSIONS: The quality of faculty members' narrative evaluations of medical students was associated with time to evaluation completion and faculty gender, but not with faculty experience in clinical education, faculty weeks on service, or the amount of time spent with students. These findings advance understanding of ways to improve the quality of narrative evaluations, which is imperative given assessment models that will increase the volume of, and reliance on, narratives.
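The linear mixed effects analysis described in the methods can be sketched in a few lines of Python. This is a minimal illustration only, assuming statsmodels and entirely hypothetical column names (neqi_total, days_to_completion, faculty_id, and so on); the authors' actual code, dataset, and random-effects structure are not part of this record.

# Sketch of a linear mixed effects model mirroring the covariates listed
# in the abstract. All column names are hypothetical placeholders.
import pandas as pd
import statsmodels.formula.api as smf

evals = pd.read_csv("narrative_evaluations.csv")  # hypothetical data file

# Fixed effects follow the abstract's covariate list, including the
# student-by-faculty gender interaction. A random intercept per faculty
# member accounts for repeated evaluations by the same rater; this is
# one plausible grouping, as the abstract does not specify the
# random-effects structure.
model = smf.mixedlm(
    "neqi_total ~ days_to_completion + weeks_with_student"
    " + faculty_weeks_on_service + faculty_years_in_education"
    " + student_gender * faculty_gender",
    data=evals,
    groups="faculty_id",
)
result = model.fit()
print(result.summary())  # fixed-effect estimates and p-values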
Affiliation(s)
- Christopher J Mooney
- School of Medicine and Dentistry, University of Rochester, Rochester, New York, USA
- Jennifer M Pascoe
- School of Medicine and Dentistry, University of Rochester, Rochester, New York, USA
- Amy E Blatt
- School of Medicine and Dentistry, University of Rochester, Rochester, New York, USA
- Valerie J Lang
- School of Medicine and Dentistry, University of Rochester, Rochester, New York, USA
- Melanie K Braun
- School of Medicine and Dentistry, University of Rochester, Rochester, New York, USA
- Jaclyn E Burch
- School of Medicine and Dentistry, University of Rochester, Rochester, New York, USA
2. Kinnear B, Schumacher DJ, Driessen EW, Varpio L. How argumentation theory can inform assessment validity: A critical review. Medical Education 2022;56:1064-1075. [PMID: 35851965] [PMCID: PMC9796688] [DOI: 10.1111/medu.14882]
Abstract
INTRODUCTION: Many health professions education (HPE) scholars frame assessment validity as a form of argumentation in which interpretations and uses of assessment scores must be supported by evidence. However, what are purported to be validity arguments are often merely clusters of evidence without a guiding framework to evaluate, prioritise, or debate their merits. Argumentation theory is a field of study dedicated to understanding the production, analysis, and evaluation of arguments (spoken or written). The aim of this study is to describe argumentation theory, articulate the unique insights it can offer to HPE assessment, and present how different argumentation orientations can help reconceptualize the nature of validity in generative ways.
METHODS: The authors followed a five-step critical review process consisting of iterative cycles of focusing, searching, appraising, sampling, and analysing the argumentation theory literature. The authors generated and synthesised a corpus of manuscripts on the argumentation orientations deemed most applicable to HPE.
RESULTS: We selected two argumentation orientations that we considered particularly constructive for informing HPE assessment validity: new rhetoric and informal logic. In new rhetoric, the goal of argumentation is to persuade, with a focus on an audience's values and standards. Informal logic centres on identifying, structuring, and evaluating arguments in real-world settings, with a variety of normative standards used to evaluate argument validity.
DISCUSSION: Both new rhetoric and informal logic provide philosophical, theoretical, or practical groundings that can advance HPE validity argumentation. New rhetoric's foregrounding of audience aligns with HPE's social imperative to be accountable to specific stakeholders such as the public and learners. Informal logic provides tools for identifying and structuring validity arguments for analysis and evaluation.
Affiliation(s)
- Benjamin Kinnear
- Department of Pediatrics, University of Cincinnati College of Medicine, Cincinnati, Ohio, USA
- School of Health Professions Education (SHE), Maastricht University, Maastricht, The Netherlands
- Daniel J. Schumacher
- Department of Pediatrics, University of Cincinnati College of Medicine, Cincinnati, Ohio, USA
- Erik W. Driessen
- School of Health Professions Education, Faculty of Health, Medicine and Life Sciences, Maastricht University, Maastricht, The Netherlands
- Lara Varpio
- Uniformed Services University of the Health Sciences, Bethesda, Maryland, USA
3. Lacasse M, Renaud JS, Côté L, Lafleur A, Codsi MP, Dove M, Pélissier-Simard L, Pitre L, Rheault C. [Feedback Guide for direct observation of family medicine residents in Canada: a francophone tool]. Canadian Medical Education Journal 2022;13:29-54. [PMID: 35321416] [PMCID: PMC8909829] [DOI: 10.36834/cmej.72587]
Abstract
BACKGROUND: There is no CanMEDS-FM-based milestone tool to guide feedback during direct observation (DO). We developed a guide to support the documentation of feedback during DO in Canadian family medicine (FM) programs.
METHODS: The Guide was designed in three phases in collaboration with five Canadian FM programs that each have at least one French-speaking teaching site: 1) literature review and needs assessment; 2) development of the DO Feedback Guide; and 3) testing of the Guide in a video simulation context with qualitative content analysis.
RESULTS: Phase 1 demonstrated the need for a narrative guide aimed at 1) specifying mutual expectations according to the resident's level of training and the clinical context, 2) providing the supervisor with tools and structure for their observations, and 3) facilitating the documentation of feedback. Phase 2 produced the Guide, in paper and electronic formats, meeting the identified needs. In phase 3, 15 supervisors used the Guide across three levels of residency. Following this testing, the Guide was adjusted to prompt recall of the phases of the clinical encounter that were often overlooked during feedback (before the consultation, diagnosis and follow-up) and to suggest preferred types of phrasing (stimulating questions, clarifying questions, reflections).
CONCLUSION: Grounded in evidence and a collaborative approach, this Guide will equip French-speaking Canadian supervisors and residents performing DO in family medicine.
Affiliation(s)
- Luc Côté
- Université Laval, Québec, Canada
4. Ginsburg S, Watling CJ, Schumacher DJ, Gingerich A, Hatala R. Numbers Encapsulate, Words Elaborate: Toward the Best Use of Comments for Assessment and Feedback on Entrustment Ratings. Academic Medicine 2021;96:S81-S86. [PMID: 34183607] [DOI: 10.1097/acm.0000000000004089]
Abstract
The adoption of entrustment ratings in medical education is based on a seemingly simple premise: to align workplace-based supervision with resident assessment. Yet it has been difficult to operationalize this concept. Entrustment rating forms combine numeric scales with comments and are embedded in a programmatic assessment framework, which encourages the collection of a large quantity of data. The implicit assumption that more is better has led to an untamable volume of data that competency committees must grapple with. In this article, the authors explore the roles of numbers and words on entrustment rating forms, focusing on the intended and optimal use(s) of each, particularly the words. They also unpack the problematic issue of dual-purposing words for both assessment and feedback. Words have enormous potential to elaborate, to contextualize, and to instruct; to realize this potential, educators must be crystal clear about their use. The authors set forth a number of possible ways to reconcile these tensions by more explicitly aligning words to purpose. For example, educators could focus written comments solely on assessment; create assessment encounters distinct from feedback encounters; or use different words collected from the same encounter to serve distinct feedback and assessment purposes. Finally, the authors address the tyranny of documentation created by programmatic assessment and urge caution in yielding to the temptation to reduce words to numbers to make them manageable. Instead, they encourage educators to preserve some educational encounters purely for feedback and to consider that not all words need to become data.
Affiliation(s)
- Shiphra Ginsburg
- S. Ginsburg is professor of medicine, Department of Medicine, Sinai Health System and Faculty of Medicine, University of Toronto, scientist, Wilson Centre for Research in Education, University of Toronto, Toronto, Ontario, Canada, and Canada Research Chair in Health Professions Education; ORCID: http://orcid.org/0000-0002-4595-6650
- Christopher J Watling
- C.J. Watling is professor and director, Centre for Education Research and Innovation, Schulich School of Medicine & Dentistry, Western University, London, Ontario, Canada; ORCID: https://orcid.org/0000-0001-9686-795X
- Daniel J Schumacher
- D.J. Schumacher is associate professor of pediatrics, Cincinnati Children's Hospital Medical Center and University of Cincinnati College of Medicine, Cincinnati, Ohio; ORCID: https://orcid.org/0000-0001-5507-8452
- Andrea Gingerich
- A. Gingerich is assistant professor, Northern Medical Program, University of Northern British Columbia, Prince George, British Columbia, Canada; ORCID: https://orcid.org/0000-0001-5765-3975
- Rose Hatala
- R. Hatala is professor, Department of Medicine, and director, Clinical Educator Fellowship, Center for Health Education Scholarship, University of British Columbia, Vancouver, British Columbia, Canada; ORCID: https://orcid.org/0000-0003-0521-2590
5. Bearman M, Ajjawi R, Bennett S, Boud D. The hidden labours of designing the Objective Structured Clinical Examination: a Practice Theory study. Advances in Health Sciences Education 2021;26:637-651. [PMID: 33196956] [DOI: 10.1007/s10459-020-10015-w]
Abstract
Objective Structured Clinical Examinations (OSCEs) have become ubiquitous as a form of assessment in medical education but involve substantial resource demands and considerable local variation. A detailed understanding of the processes by which OSCEs are designed and administered could improve feasibility and sustainability. This exploration of OSCE design is informed by Practice Theory, which suggests assessment design processes are dynamic, social and situated activities. The overall purpose is to provide insights that inform on-the-ground OSCE administration. Fifteen interviews were conducted with OSCE academics and administrators from three medical schools in Australia, the United Kingdom and Canada. Drawing from post-qualitative inquiry, Schatzki's Practice Theory was used both as a sensibility and as an analytic framework. OSCE design was characterised by planning, administration, negotiation and bureaucratic activities, and involved significant, resource-intensive effort in negotiation and coordination. There was considerable local variation, yet activities were remarkably consonant across national boundaries. There was a tension between the general understandings, such as reliability and validity, that underpin the OSCE and the improvisational practices associated with its design and administration. Our findings highlighted the role of blueprints as a key coordinating artefact, but also showed that too many rules and procedures prompted cycles of bureaucracy and complexity. Emphasising coordination rather than standardisation might ease workloads, support adaptation to local environments and prevent an overly reductive approach to this assessment format.
Affiliation(s)
- Margaret Bearman
- Centre for Research in Assessment and Digital Learning (CRADLE), Deakin University, Geelong, Australia
- Rola Ajjawi
- Centre for Research in Assessment and Digital Learning (CRADLE), Deakin University, Geelong, Australia
- Sue Bennett
- School of Education, University of Wollongong, Wollongong, Australia
- David Boud
- Centre for Research in Assessment and Digital Learning (CRADLE), Deakin University, Geelong, Australia
- Faculty of Arts and Social Sciences, University of Technology Sydney, Ultimo, Australia
- Centre for Research on Work and Learning, Middlesex University, London, UK
6. Wilby KJ, Dolmans DHJM, Austin Z, Govaerts MJB. Assessors' interpretations of narrative data on communication skills in a summative OSCE. Medical Education 2019;53:1003-1012. [PMID: 31304615] [DOI: 10.1111/medu.13924]
Abstract
OBJECTIVES: Increasingly, narrative assessment data are used to substantiate and enhance the robustness of assessor judgements. However, the interpretation of written assessment comments is inherently complex and relies on human (expert) judgement. The purpose of this study was to explore how expert assessors process and bring meaning to narrative assessment comments written by others in the setting of standardised performance assessment.
METHODS: Narrative assessment comments on student communication skills, and communication scores, across six objective structured clinical examination stations were obtained for 24 final-year pharmacy students. Aggregated narrative data across all stations were sampled for nine students (three good, three average and three poor performers, based on communication scores). A total of 10 expert assessors reviewed the aggregated set of narrative comments for each student. Cognitive (information) processing was captured through think-aloud procedures and verbal protocol analysis.
RESULTS: Expert assessors primarily used two strategies to interpret the narratives: comparing and contrasting, and forming mental images of student performance. Assessors appeared to adopt three different perspectives when interpreting narrative comments: (i) that of the student (placing themselves in the student's shoes); (ii) that of the examiner (adopting the role of examiner and reinterpreting comments according to their own standards or beliefs); and (iii) that of the professional (acting as the profession's gatekeeper by considering the assessment to be a representation of real-life practice).
CONCLUSIONS: The present findings add to current understanding of assessors' interpretations of narrative performance data by identifying the strategies and perspectives expert assessors use to frame and bring meaning to written comments. Assessors' perspectives shape their interpretations of assessment comments and are likely to be influenced by their beliefs, their interpretations of the assessment setting, and their personal performance theories. These results call for the use of multiple assessors to account for variations in assessor perspective when interpreting narrative assessment data.
Affiliation(s)
- Kyle John Wilby
- School of Pharmacy, University of Otago, Dunedin, New Zealand
- Diana H J M Dolmans
- School of Health Professions Education (SHE), Department of Educational Development and Research, Faculty of Health, Medicine and Life Sciences, Maastricht University, Maastricht, the Netherlands
- Zubin Austin
- Leslie Dan Faculty of Pharmacy, University of Toronto, Toronto, Ontario, Canada
- Marjan J B Govaerts
- School of Health Professions Education (SHE), Department of Educational Development and Research, Faculty of Health, Medicine and Life Sciences, Maastricht University, Maastricht, the Netherlands