1. Laurin S, Castonguay V, Dory V, Cusson L, Côté L. "They were very very nice but just not very good": The interplay between resident-supervisor relationships and assessment in the emergency setting. AEM Education and Training 2024;8:e10976. PMID: 38532737; PMCID: PMC10962126; DOI: 10.1002/aet2.10976.
Abstract
Purpose Clinical supervisors hesitate to report learner weaknesses, a widely documented phenomenon referred to as "failure to fail." They also struggle to discuss weaknesses with learners themselves. Their reluctance to report and discuss learner weaknesses threatens the validity of assessment-of-learning decisions and the effectiveness of assessment for learning. Personal and interpersonal factors have been found to act as barriers to reporting learners' difficulties, but the precise role of the resident-supervisor relationship remains underexplored, specifically in the emergency setting. This study aims to better understand if and how factors related to the resident-supervisor relationship are involved in assessment of and for learning in the emergency setting. Methods We conducted a qualitative study, using semistructured interviews of 15 clinical supervisors in emergency medicine departments affiliated with our institution. Transcripts were independently coded by three members of the team using an iterative mixed deductive-inductive thematic analysis approach. The team then synthesized the coding and discussed analysis following guidelines for thematic analysis. Results Participating emergency medicine supervisors valued resident-supervisor relationships built on collaboration and trust and believed that such relationships support learning. They described how these relationships influenced assessment of and for learning and how in turn assessment influenced the relationship. Almost all profiles of resident-supervisor relationships in our study could hinder the disclosure of resident weaknesses, through a variety of mechanisms. To protect residents and themselves from the discomfort of disclosing weaknesses and to avoid deteriorating the resident-supervisor relationship, many downplayed or even masked residents' difficulties.
Supervisors who described themselves as able to provide negative assessment of and for learning often adopted a more distant or professional stance. Conclusions This study contributes to a growing literature on failure to fail by confirming the critical impact that the resident-supervisor relationship has on the willingness and ability of emergency medicine supervisors to play their part as assessors.
Affiliation(s)
- Suzanne Laurin
- Department of Family Medicine and Emergency Medicine, Université de Montréal, Montréal, Québec, Canada
- Centre for Applied Health Sciences Education, Université de Montréal, Montréal, Québec, Canada
- Véronique Castonguay
- Department of Family Medicine and Emergency Medicine, Université de Montréal, Montréal, Québec, Canada
- Centre for Applied Health Sciences Education, Université de Montréal, Montréal, Québec, Canada
- Valérie Dory
- Department of General Practice, Université de Liège, Liège, Belgium
- Lise Cusson
- Department of Family Medicine and Emergency Medicine, Université de Montréal, Montréal, Québec, Canada
- Luc Côté
- Department of Family Medicine and Emergency Medicine, Université Laval, Québec, Québec, Canada
2. Caretta-Weyer HA, Smirnova A, Barone MA, Frank JR, Hernandez-Boussard T, Levinson D, Lombarts KMJMH, Lomis KD, Martini A, Schumacher DJ, Turner DA, Schuh A. The Next Era of Assessment: Building a Trustworthy Assessment System. Perspectives on Medical Education 2024;13:12-23. PMID: 38274558; PMCID: PMC10809864; DOI: 10.5334/pme.1110.
Abstract
Assessment in medical education has evolved through a sequence of eras, each centering on distinct views and values. These eras include measurement (e.g., knowledge exams, objective structured clinical examinations), then judgments (e.g., workplace-based assessments, entrustable professional activities), and most recently systems or programmatic assessment, where over time multiple types and sources of data are collected and combined by competency committees to ensure individual learners are ready to progress to the next stage in their training. Significantly less attention has been paid to the social context of assessment, which has led to an overall erosion of trust in assessment by a variety of stakeholders, including learners and frontline assessors. To meaningfully move forward, the authors assert that the reestablishment of trust should be foundational to the next era of assessment. In our actions and interventions, it is imperative that medical education leaders address and build trust in assessment at a systems level. To that end, the authors first review tenets on the social contextualization of assessment and its linkage to trust and discuss consequences should the current state of low trust continue. The authors then posit that trusting and trustworthy relationships can exist at individual as well as organizational and systems levels. Finally, the authors propose a framework to build trust at multiple levels in a future assessment system; one that invites and supports professional and human growth and has the potential to position assessment as a fundamental component of renegotiating the social contract between medical education and the health of the public.
Affiliation(s)
- Holly A. Caretta-Weyer
- Department of Emergency Medicine, Stanford University School of Medicine, Palo Alto, California, USA
- Alina Smirnova
- Department of Family Medicine, University of Calgary, Calgary, Alberta, Canada
- Kern Institute for the Transformation of Medical Education, Medical College of Wisconsin, Milwaukee, Wisconsin, USA
- Michael A. Barone
- NBME, Philadelphia, Pennsylvania, USA
- Johns Hopkins University School of Medicine, Baltimore, Maryland, USA
- Jason R. Frank
- Department of Emergency Medicine, University of Ottawa, Ottawa, Ontario, Canada
- Dana Levinson
- Josiah Macy Jr Foundation, Philadelphia, Pennsylvania, USA
- Kiki M. J. M. H. Lombarts
- Department of Medical Psychology, Amsterdam University Medical Centers, University of Amsterdam, the Netherlands
- Amsterdam Public Health research institute, Amsterdam, the Netherlands
- Kimberly D. Lomis
- Undergraduate Medical Education Innovations, American Medical Association, Chicago, Illinois, USA
- Abigail Martini
- Cincinnati Children’s Hospital Medical Center, Cincinnati, Ohio, USA
- Daniel J. Schumacher
- Division of Emergency Medicine, Cincinnati Children’s Hospital Medical Center/University of Cincinnati College of Medicine, Cincinnati, Ohio, USA
- David A. Turner
- American Board of Pediatrics, Chapel Hill, North Carolina, USA
- Abigail Schuh
- Division of Emergency Medicine, Medical College of Wisconsin, Milwaukee, Wisconsin, USA
3. Van Ostaeyen S, Embo M, Rotsaert T, De Clercq O, Schellens T, Valcke M. A Qualitative Textual Analysis of Feedback Comments in ePortfolios: Quality and Alignment with the CanMEDS Roles. Perspectives on Medical Education 2023;12:584-593. PMID: 38144672; PMCID: PMC10742175; DOI: 10.5334/pme.1050.
Abstract
Introduction Competency-based education requires high-quality feedback to guide students' acquisition of competencies. Sound assessment and feedback systems, such as ePortfolios, are needed to facilitate seeking and giving feedback during clinical placements. However, it is unclear whether the written feedback comments in ePortfolios are of high quality and aligned with the current competency focus. Therefore, this study investigates the quality of written feedback comments in ePortfolios of healthcare students, as well as how these feedback comments align with the CanMEDS roles. Methods A qualitative textual analysis was conducted. In total, 2,349 written feedback comments retrieved from the ePortfolios of 149 healthcare students (specialist medicine, general practice, occupational therapy, speech therapy and midwifery) were analysed retrospectively using deductive content analysis. Two structured categorisation matrices, one based on four literature-derived feedback quality criteria (performance, judgment, elaboration and improvement) and another one on the seven CanMEDS roles (Medical Expert, Communicator, Collaborator, Leader, Health Advocate, Scholar and Professional), guided the analysis. Results Only a minority of the feedback comments (n = 352; 14.9%) were of high quality, meeting all four quality criteria. Most feedback comments were of moderate quality and met only two to three quality criteria. Regarding the CanMEDS roles, the Medical Expert role was most frequently represented in the feedback comments, in contrast to the Leader and Health Advocate roles. Discussion The results highlighted that providing high-quality feedback is challenging. To respond to these challenges, it is recommended to set up individual and continuous feedback training.
Affiliation(s)
- Sofie Van Ostaeyen
- Department of Educational Sciences, Ghent University, Belgium
- Mieke Embo
- Department of Nursing and Midwifery, University of Antwerp, Belgium
- Department of Educational Sciences, Ghent University, and Expertise Network Health and Care, Artevelde University of Applied Sciences, Belgium
- Tijs Rotsaert
- Department of Educational Sciences, Ghent University, Belgium
- Orphée De Clercq
- Language and Translation Technology Team, Ghent University, Belgium
- Tammy Schellens
- Department of Educational Sciences, Ghent University, Belgium
- Martin Valcke
- Department of Educational Sciences, Ghent University, Belgium
4. Van Ostaeyen S, De Langhe L, De Clercq O, Embo M, Schellens T, Valcke M. Automating the Identification of Feedback Quality Criteria and the CanMEDS Roles in Written Feedback Comments Using Natural Language Processing. Perspectives on Medical Education 2023;12:540-549. PMID: 38144670; PMCID: PMC10742245; DOI: 10.5334/pme.1056.
Abstract
Introduction Manually analysing the quality of large amounts of written feedback comments is time-consuming and demands extensive resources and human effort. Therefore, this study aimed to explore whether a state-of-the-art large language model (LLM) could be fine-tuned to identify the presence of four literature-derived feedback quality criteria (performance, judgment, elaboration and improvement) and the seven CanMEDS roles (Medical Expert, Communicator, Collaborator, Leader, Health Advocate, Scholar and Professional) in written feedback comments. Methods A set of 2,349 labelled feedback comments from five healthcare educational programs in Flanders (Belgium) (specialist medicine, general practice, midwifery, speech therapy and occupational therapy) was split into 12,452 sentences to create two datasets for the machine learning analysis. The Dutch BERT models BERTje and RobBERT were used to train four multiclass-multilabel classification models: two to identify the four feedback quality criteria and two to identify the seven CanMEDS roles. Results The classification models trained with BERTje and RobBERT to predict the presence of the four feedback quality criteria attained macro average F1-scores of 0.73 and 0.76, respectively. The models predicting the presence of the CanMEDS roles attained F1-scores of 0.71 with BERTje and 0.72 with RobBERT. Discussion The results showed that a state-of-the-art LLM is able to identify the presence of the four feedback quality criteria and the CanMEDS roles in written feedback comments. This implies that the quality analysis of written feedback comments can be automated using an LLM, leading to savings of time and resources.
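The macro-average F1-score reported here scores each label (a quality criterion or a CanMEDS role) independently and then averages the per-label F1 values without weighting, so rare labels count as much as common ones. A minimal sketch of that computation over hypothetical binary indicator vectors (four labels standing in for the four quality criteria; the data and names are illustrative, not the study's):

```python
def macro_f1(y_true, y_pred, n_labels):
    """Unweighted mean of per-label F1 over multilabel indicator vectors."""
    per_label = []
    for j in range(n_labels):
        tp = sum(t[j] and p[j] for t, p in zip(y_true, y_pred))
        fp = sum((not t[j]) and p[j] for t, p in zip(y_true, y_pred))
        fn = sum(t[j] and (not p[j]) for t, p in zip(y_true, y_pred))
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
        per_label.append(f1)
    return sum(per_label) / n_labels

# Hypothetical sentences, each tagged with four binary quality criteria
gold = [[1, 0, 1, 0], [0, 1, 0, 0], [1, 1, 0, 1]]
pred = [[1, 0, 0, 0], [0, 1, 0, 0], [1, 0, 0, 1]]
score = macro_f1(gold, pred, 4)
```

In practice a library routine such as scikit-learn's `f1_score(..., average="macro")` computes the same quantity for multilabel indicator arrays; the hand-rolled version above only makes the averaging explicit.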
Affiliation(s)
- Loic De Langhe
- Language and Translation Technology Team, Ghent University, Belgium
- Orphée De Clercq
- Language and Translation Technology Team, Ghent University, Belgium
- Mieke Embo
- Department of Educational Sciences, Ghent University, and Expertise Network Health and Care, Artevelde University of Applied Sciences, Belgium
- Tammy Schellens
- Department of Educational Sciences, Ghent University, Belgium
- Martin Valcke
- Department of Educational Sciences, Ghent University, Belgium
5. Choo EK, Woods R, Walker ME, O’Brien JM, Chan TM. The Quality of Assessment for Learning score for evaluating written feedback in anesthesiology postgraduate medical education: a generalizability and decision study. Canadian Medical Education Journal 2023;14:78-85. PMID: 38226296; PMCID: PMC10787859; DOI: 10.36834/cmej.75876.
Abstract
Background Competency-based residency programs depend on high-quality feedback from the assessment of entrustable professional activities (EPAs). The Quality of Assessment for Learning (QuAL) score is a tool developed to rate the quality of narrative comments in workplace-based assessments; it has validity evidence for scoring the quality of narrative feedback provided to emergency medicine residents, but it is unknown whether the QuAL score is reliable in the assessment of narrative feedback in other postgraduate programs. Methods Fifty sets of EPA narratives from a single academic year at our competency-based medical education postgraduate anesthesia program were selected by stratified sampling within defined parameters [e.g. resident gender and stage of training, assessor gender, Competency By Design training level, and word count (≥17 or <17 words)]. Two competency committee members and two medical students rated the quality of narrative feedback using a utility score and the QuAL score. We used Kendall's tau-b coefficient to compare the perceived utility of the written feedback to the quality assessed with the QuAL score. We used generalizability and decision studies to estimate the reliability and generalizability coefficients. Results Both the faculty's utility scores and QuAL scores (r = 0.646, p < 0.001) and the trainees' utility scores and QuAL scores (r = 0.667, p < 0.001) were moderately correlated. Results from the generalizability studies showed that utility scores were reliable with two raters for both faculty (Epsilon = 0.87, Phi = 0.86) and trainees (Epsilon = 0.88, Phi = 0.88). Conclusions The QuAL score is correlated with faculty- and trainee-rated utility of anesthesia EPA feedback. Both faculty and trainees can reliably apply the QuAL score to anesthesia EPA narrative feedback. This tool has the potential to be used for faculty development and program evaluation in competency-based medical education.
Other programs could consider replicating our study in their specialty.
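Kendall's tau-b, used above to compare utility and QuAL ratings, adjusts the concordant-minus-discordant pair count for ties, which matter when raters use coarse scales. A small self-contained sketch (the rating values are hypothetical, not the study's data):

```python
import math

def kendall_tau_b(x, y):
    """Kendall's tau-b: (C - D) / sqrt((n0 - ties_x) * (n0 - ties_y))."""
    n = len(x)
    concordant = discordant = ties_x = ties_y = 0
    for i in range(n):
        for j in range(i + 1, n):
            dx, dy = x[i] - x[j], y[i] - y[j]
            if dx == 0:
                ties_x += 1          # pair tied on the first variable
            if dy == 0:
                ties_y += 1          # pair tied on the second variable
            if dx != 0 and dy != 0:
                if dx * dy > 0:
                    concordant += 1  # pair ordered the same way by both raters
                else:
                    discordant += 1
    n0 = n * (n - 1) // 2
    return (concordant - discordant) / math.sqrt((n0 - ties_x) * (n0 - ties_y))

# Hypothetical scores for four narrative comments (note the ties)
utility = [1, 2, 2, 3]
qual = [1, 2, 3, 3]
tau = kendall_tau_b(utility, qual)  # 0.8
```

`scipy.stats.kendalltau` computes this same tau-b variant by default; the loop above just makes the tie corrections visible.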
Affiliation(s)
- Eugene K Choo
- Department of Anesthesiology, College of Medicine, University of Saskatchewan, Saskatchewan, Canada
- Rob Woods
- Department of Emergency Medicine, College of Medicine, University of Saskatchewan, Saskatchewan, Canada
- Mary Ellen Walker
- Department of Anesthesiology, College of Medicine, University of Saskatchewan, Saskatchewan, Canada
- Jennifer M O’Brien
- Department of Anesthesiology, College of Medicine, University of Saskatchewan, Saskatchewan, Canada
- Teresa M Chan
- Department of Medicine (Division of Emergency Medicine; Division of Education & Innovation), Michael G. DeGroote School of Medicine, Faculty of Health Sciences, McMaster University, and Office of Continuing Professional Development & McMaster Education Research, Innovation, and Theory (MERIT) Program, Faculty of Health Sciences, McMaster University, Ontario, Canada
6. McGuire N, Acai A, Sonnadara RR. The McMaster Narrative Comment Rating Tool: Development and Initial Validity Evidence. Teaching and Learning in Medicine 2023:1-13. PMID: 37964518; DOI: 10.1080/10401334.2023.2276799.
Abstract
CONSTRUCT The McMaster Narrative Comment Rating Tool aims to capture critical features reflecting the quality of written narrative comments provided in the medical education context: valence/tone of language, degree of correction versus reinforcement, specificity, actionability, and overall usefulness. BACKGROUND Despite their role in competency-based medical education, not all narrative comments contribute meaningfully to the development of learners' competence. To develop solutions to mitigate this problem, robust measures of narrative comment quality are needed. While some tools exist, most were created in specialty-specific contexts, have focused on one or two features of feedback, or have focused on faculty perceptions of feedback, excluding learners from the validation process. In this study, we aimed to develop a detailed, broadly applicable narrative comment quality assessment tool that drew upon features of high-quality assessment and feedback and could be used by a variety of raters to inform future research, including applications related to automated analysis of narrative comment quality. APPROACH In Phase 1, we used the literature to identify five critical features of feedback. We then developed rating scales for each of the features and collected 670 competency-based assessments completed by first-year surgical residents in the first six weeks of training. Residents were from nine different programs at a Canadian institution. In Phase 2, we randomly selected 50 assessments with written feedback from the dataset. Two education researchers used the scale to independently score the written comments and refine the rating tool. In Phase 3, 10 raters, including two medical education researchers, two medical students, two residents, two clinical faculty members, and two laypersons from the community, used the tool to independently and blindly rate written comments from another 50 randomly selected assessments from the dataset. We compared scores between and across rater pairs to assess reliability. FINDINGS Single-measures and average-measures intraclass correlation coefficients (ICCs) ranged from moderate to excellent (.51-.83 and .91-.98, respectively) across all categories and rater pairs. All tool domains were significantly correlated (all p < .05), apart from valence, which was only significantly correlated with degree of correction versus reinforcement. CONCLUSION Our findings suggest that the McMaster Narrative Comment Rating Tool can reliably be used by multiple raters, across a variety of rater types, and in different surgical contexts. As such, it has the potential to support faculty development initiatives on assessment and feedback, and may be used as a tool to conduct research on different assessment strategies, including automated analysis of narrative comments.
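Intraclass correlations like those reported here come from a two-way ANOVA decomposition of the ratings into subject, rater, and residual variance. A minimal sketch of one common single-measures form, the two-way random-effects, absolute-agreement ICC(2,1) of Shrout and Fleiss (the abstract does not state which ICC form was used, and the ratings below are hypothetical):

```python
def icc_2_1(ratings):
    """Single-measures ICC, two-way random effects, absolute agreement
    (Shrout & Fleiss ICC(2,1)). `ratings`: one row per subject, one
    column per rater."""
    n, k = len(ratings), len(ratings[0])
    grand = sum(sum(row) for row in ratings) / (n * k)
    row_means = [sum(row) / k for row in ratings]
    col_means = [sum(row[j] for row in ratings) / n for j in range(k)]
    # Two-way ANOVA mean squares: subjects (rows), raters (columns), residual
    ms_rows = k * sum((m - grand) ** 2 for m in row_means) / (n - 1)
    ms_cols = n * sum((m - grand) ** 2 for m in col_means) / (k - 1)
    ss_total = sum((x - grand) ** 2 for row in ratings for x in row)
    ss_err = ss_total - ms_rows * (n - 1) - ms_cols * (k - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

# Hypothetical: three comments each scored by two raters; rater 2 scores
# one point higher throughout, so agreement is high but not perfect
ratings = [[1, 2], [3, 4], [5, 6]]
icc = icc_2_1(ratings)
```

Because ICC(2,1) demands absolute agreement, the constant one-point offset between the two hypothetical raters pulls the coefficient below 1 even though their rankings match exactly.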
Affiliation(s)
- Natalie McGuire
- Office of Professional Development and Educational Scholarship, Queen's University, Kingston, Ontario, Canada
- Anita Acai
- Department of Psychiatry and Behavioural Neurosciences and McMaster Education Research, Innovation and Theory (MERIT) Program, McMaster University, and St. Joseph's Education Research Centre (SERC), St. Joseph's Healthcare Hamilton, Hamilton, Canada
- Ranil R Sonnadara
- Office of Education Science, Department of Surgery, McMaster University, Hamilton, Ontario, Canada
7. Anderson LM, Rowland K, Edberg D, Wright KM, Park YS, Tekian A. An Analysis of Written and Numeric Scores in End-of-Rotation Forms from Three Residency Programs. Perspectives on Medical Education 2023;12:497-506. PMID: 37929204; PMCID: PMC10624145; DOI: 10.5334/pme.41.
Abstract
Introduction End-of-Rotation Forms (EORFs) assess resident progress in graduate medical education and are a major component of Clinical Competency Committee (CCC) discussion. Single-institution studies suggest EORFs can detect deficiencies, but both grades and comments skew positive. In this study, we sought to determine whether the EORFs from three programs, including multiple specialties and institutions, produced useful information for residents, program directors, and CCCs. Methods Evaluations from three programs were included (Program 1, Institution A, Internal Medicine: n = 38; Program 2, Institution A, Anesthesia: n = 9; Program 3, Institution B, Anesthesia: n = 11). Two independent researchers coded each written comment for relevance (specificity and actionability) and orientation (praise or critical) using a standardized rubric. Numeric scores were analyzed using descriptive statistics. Results A total of 4,869 evaluations were collected from the programs. Of the 77,434 discrete numeric scores, 691 (0.89%) were considered "below expected level." Of the 3,767 written comments, 2,683 (71.2%) were scored as irrelevant, while 3,217 (85.4%) were scored as positive and 550 (14.6%) as critical. When combined, 63.2% (n = 2,379) of comments were scored positive and irrelevant while 6.5% (n = 246) were scored critical and relevant. Discussion Fewer than 1% of numeric scores indicated below-expected performance, and more than 70% of comments were scored as irrelevant. Critical, relevant comments were the least frequently observed, a finding consistent across all three programs. The low rate of constructive feedback and the high rate of irrelevant comments are inadequate for a CCC to make informed decisions. The consistency of these findings across programs, specialties, and institutions suggests both local and systemic changes should be considered.
Affiliation(s)
- Lauren M. Anderson
- Department of Family and Preventive Medicine, Rush University, Chicago, Illinois, USA
- Kathleen Rowland
- Department of Family and Preventive Medicine, Rush University, Chicago, Illinois, USA
- Deborah Edberg
- Department of Family and Preventive Medicine, Rush University, Chicago, Illinois, USA
- Katherine M. Wright
- Department of Family & Community Medicine, Northwestern University Feinberg School of Medicine, Chicago, Illinois, USA
- Yoon Soo Park
- Department of Medical Education, University of Illinois Chicago, Chicago, Illinois, USA
- Ara Tekian
- Department of Medical Education, University of Illinois Chicago, Chicago, Illinois, USA
8. Ryan MS, Lomis KD, Deiorio NM, Cutrer WB, Pusic MV, Caretta-Weyer HA. Competency-Based Medical Education in a Norm-Referenced World: A Root Cause Analysis of Challenges to the Competency-Based Paradigm in Medical School. Academic Medicine 2023;98:1251-1260. PMID: 36972129; DOI: 10.1097/acm.0000000000005220.
Abstract
Competency-based medical education (CBME) requires a criterion-referenced approach to assessment. However, despite best efforts to advance CBME, there remains an implicit, and at times, explicit, demand for norm-referencing, particularly at the junction of undergraduate medical education (UME) and graduate medical education (GME). In this manuscript, the authors perform a root cause analysis to determine the underlying reasons for continued norm-referencing in the context of the movement toward CBME. The root cause analysis consisted of 2 processes: (1) identification of potential causes and effects organized into a fishbone diagram and (2) identification of the 5 whys. The fishbone diagram identified 2 primary drivers: the false notion that measures such as grades are truly objective and the importance of different incentives for different key constituents. From these drivers, the importance of norm-referencing for residency selection was identified as a critical component. Exploration of the 5 whys further detailed the reasons for continuation of norm-referenced grading to facilitate selection, including the need for efficient screening in residency selection, dependence upon rank-order lists, perception that there is a best outcome to the match, lack of trust between residency programs and medical schools, and inadequate resources to support progression of trainees. Based on these findings, the authors argue that the implied purpose of assessment in UME is primarily stratification for residency selection. Because stratification requires comparison, a norm-referenced approach is needed. To advance CBME, the authors recommend reconsideration of the approach to assessment in UME to maintain the purpose of selection while also advancing the purpose of rendering a competency decision. Changing the approach will require a collaboration between national organizations, accrediting bodies, GME programs, UME programs, students, and patients/societies. 
Details are provided regarding the specific approaches required of each key constituent group.
Affiliation(s)
- Michael S Ryan
- M.S. Ryan is professor and associate dean for assessment, evaluation, research and innovation, Department of Pediatrics, University of Virginia, Charlottesville, Virginia, and a PhD student, School of Health Professions Education, Maastricht University, Maastricht, the Netherlands; ORCID: https://orcid.org/0000-0003-3266-9289
- Kimberly D Lomis
- K.D. Lomis is vice president, undergraduate medical education innovations, American Medical Association, Chicago, Illinois; ORCID: https://orcid.org/0000-0002-3504-6776
- Nicole M Deiorio
- N.M. Deiorio is professor and associate dean for student affairs, Department of Emergency Medicine, Virginia Commonwealth University, Richmond, Virginia; ORCID: https://orcid.org/0000-0002-8123-1112
- William B Cutrer
- W.B. Cutrer is associate professor of pediatrics and associate dean for undergraduate medical education, Vanderbilt University School of Medicine, Nashville, Tennessee; ORCID: https://orcid.org/0000-0003-1538-9779
- Martin V Pusic
- M.V. Pusic is associate professor of emergency medicine and pediatrics, Department of Pediatrics, Harvard Medical School, Boston, Massachusetts; ORCID: https://orcid.org/0000-0001-5236-6598
- Holly A Caretta-Weyer
- H.A. Caretta-Weyer is assistant professor and associate residency director, Department of Emergency Medicine, Stanford University School of Medicine, Palo Alto, California; ORCID: https://orcid.org/0000-0002-9783-5797
9. Tahim A, Gill D, Bezemer J. Workplace-based assessments-Articulating the playbook. Medical Education 2023;57:939-948. PMID: 36924016; DOI: 10.1111/medu.15083.
Abstract
INTRODUCTION A workplace-based assessment (WBA) is a learning recording device that is widely used in medical education globally. Although entrenched in medical curricula, and despite a substantial body of literature exploring them, it is not yet fully understood how WBAs play out in practice. Adopting a constructivist standpoint, we examine these assessments in the workplace, using principles based upon naturalist inquiry, drawing from a theoretical framework based on Goffman's dramaturgical analogy for the presentation of self, and using qualitative research methods to articulate what is happening as learners complete them. METHODS Learners were voluntarily recruited to participate in the study from a single teaching hospital. Data were generated, in situ, through observations with field notes and audiovisual recording of WBAs, along with accompanying interviews with learners. RESULTS Data from six learners were analysed to reveal a set of general principles: the WBA playbook. These four principles were tacit, unwritten, and unofficial, and learners applied them to complete their WBA proformas: (1) maintain the impression of progression, (2) manage the authenticity of the individual proforma, (3) avoid losing face with the assessor and (4) complete the proforma in an effort-efficient way. By adhering to these principles, learners expressed their understanding of their social position in their world at the time the documents were created. DISCUSSION This paper recognises the value of the WBA as a lived experience, and of the WBA document as a social space, where learners engage in a social performance before the readers of the proforma. Such an interpretation better represents what happens as learners undergo and record WBAs in the real world, recognising WBAs as learner-centred, learner-driven, meaning-making phenomena.
In this way, as a record of interpretation and meanings, the subjective nature of the WBA process is a strength to be harnessed, rather than a weakness to be glossed over.
Affiliation(s)
- Arpan Tahim
- Department of Culture, Communication and Media, UCL Institute of Education, London, UK
- Deborah Gill
- Faculty of Medicine, University of Southampton, Southampton, UK
- Jeff Bezemer
- Department of Culture, Communication and Media, UCL Institute of Education, London, UK
10. Lip A, Watling CJ, Ginsburg S. What does "Timely" Mean to Residents? Challenging Feedback Assumptions in Postgraduate Education. Perspectives on Medical Education 2023;12:218-227. PMID: 37334109; PMCID: PMC10275343; DOI: 10.5334/pme.1052.
Abstract
Introduction Current orthodoxy states that feedback should be timely and face-to-face, yet the optimal timing and mode of delivery for feedback is unclear. We explored what "optimal timing" means from residents' points of view as feedback providers and receivers, to ultimately inform strategies to optimize feedback in training. Methods As near-peers who have dual roles in both providing and receiving feedback, 16 subspecialty (PGY4 and 5) internal medicine residents were interviewed about their perceptions of the optimal timing and format of feedback. Using constructivist grounded theory, interviews were conducted and analyzed iteratively. Results Drawing on their experiences as providers and recipients, residents described simultaneously considering and weighing multiple factors when deciding on when and how to provide feedback. These included their own readiness to engage in providing meaningful feedback, the perceived receptiveness of the learner and the apparent urgency of feedback delivery (e.g., if patient safety was at stake). Face-to-face verbal feedback was valued for encouraging dialogue but could be uncomfortable and limited by time constraints. Written feedback could be more honest and concise, and the possibility of asynchronous delivery had potential to overcome issues with timing and discomfort. Discussion Participants' perceptions of the optimal timing of feedback challenge current assumptions about the benefits of "immediate" versus "delayed" feedback. The concept of "optimal timing" for feedback was found to be complex and context-dependent, defying a formulaic approach. There may be a role for asynchronous and/or written feedback, which has the potential to address the unique issues identified in near-peer relationships.
Affiliation(s)
- Alyssa Lip
- Department of Medicine, Temerty Faculty of Medicine, University of Toronto, CA
- Christopher J. Watling
- Centre for Education Research and Innovation, Schulich School of Medicine & Dentistry, Western University, London, Ontario, CA
- Shiphra Ginsburg
- Department of Medicine, Sinai Health System and Faculty of Medicine, University of Toronto, scientist, Wilson Centre for Research in Education, University of Toronto, Toronto, Ontario, CA
- Canada Research Chair in Health Professions Education, CA
11
Chakroun M, Dion VR, Ouellet K, Graillon A, Désilets V, Xhignesse M, St-Onge C. Quality of Narratives in Assessment: Piloting a List of Evidence-Based Quality Indicators. PERSPECTIVES ON MEDICAL EDUCATION 2023; 12:XX. [PMID: 37252269 PMCID: PMC10215990 DOI: 10.5334/pme.925] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 02/06/2023] [Accepted: 05/12/2023] [Indexed: 05/31/2023]
Abstract
Background & Need for Innovation Appraising the quality of narratives used in assessment is challenging for educators and administrators. Although some quality indicators for writing narratives exist in the literature, they remain context specific and are not always sufficiently operational to be used easily. Creating a tool that gathers applicable quality indicators and ensuring its standardized use would equip assessors to appraise the quality of narratives. Steps Taken for Development and Implementation of Innovation We used DeVellis' framework to develop a checklist of evidence-informed indicators for quality narratives. Two team members independently piloted the checklist using four series of narratives from three different sources. After each series, team members documented their agreement and achieved a consensus. We calculated frequencies of occurrence for each quality indicator, as well as interrater agreement, to assess the standardized application of the checklist. Outcomes of Innovation We identified seven quality indicators and applied them to narratives. Frequencies of quality indicators ranged from 0% to 100%. Interrater agreement ranged from 88.7% to 100% for the four series. Critical Reflection Although we achieved a standardized application of a list of quality indicators for narratives used in health sciences education, users may still need training to be able to write good-quality narratives. We also noted that some quality indicators occurred less frequently than others, and we offer some reflections on these findings.
Affiliation(s)
- Molk Chakroun
- Faculty of Medicine and Health Sciences, Université de Sherbrooke, Sherbrooke, Québec, CA
- Vincent R. Dion
- Faculty of Medicine and Health Sciences, Université de Sherbrooke, Sherbrooke, Québec, CA
- Kathleen Ouellet
- Paul Grand'Maison de la Société des médecins de l'Université de Sherbrooke research chair in medical education, Sherbrooke, Québec, CA
- Ann Graillon
- Centre de pédagogie et des sciences de la santé, Faculty of Medicine and Health Sciences, Université de Sherbrooke, Sherbrooke, Québec, CA
- Valérie Désilets
- Department of Pediatrics, Faculty of Medicine and Health Sciences, Université de Sherbrooke, Sherbrooke, Québec, CA
- Marianne Xhignesse
- Department of Family and Emergency Medicine, Faculty of Medicine and Health Sciences, Université de Sherbrooke, Sherbrooke, Québec, CA
- Christina St-Onge
- Department of Medicine, Faculty of Medicine and Health Sciences, Université de Sherbrooke; Paul Grand'Maison de la Société des médecins de l'Université de Sherbrooke research chair in medical education, Sherbrooke, Québec, CA
12
Maimone C, Dolan BM, Green MM, Sanguino SM, Garcia PM, O’Brien CL. Utilizing Natural Language Processing of Narrative Feedback to Develop a Predictive Model of Pre-Clerkship Performance: Lessons Learned. PERSPECTIVES ON MEDICAL EDUCATION 2023; 12:141-148. [PMID: 37151853 PMCID: PMC10162355 DOI: 10.5334/pme.40] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/20/2022] [Accepted: 04/19/2023] [Indexed: 05/09/2023]
Abstract
Background Natural language processing is a promising technique that can be used to create efficiencies in the review of narrative feedback to learners. The Feinberg School of Medicine has implemented formal review of pre-clerkship narrative feedback since 2014 through its portfolio assessment system, but this process requires considerable time and effort. This article describes how natural language processing was used to build a predictive model of pre-clerkship student performance that can be utilized to assist competency committee reviews. Approach The authors took an iterative and inductive approach to the analysis, which allowed them to identify characteristics of narrative feedback that are both predictive of performance and useful to faculty reviewers. Words and phrases were manually grouped into topics that represented concepts illustrating student performance. Topics were reviewed by experienced reviewers, tested for consistency across time, and checked to ensure they did not demonstrate bias. Outcomes Sixteen topic groups of words and phrases were found to be predictive of performance. The best-fitting model used a combination of topic groups, word counts, and categorical ratings. The model had an AUC value of 0.92 on the training data and 0.88 on the test data. Reflection A thoughtful, careful approach to using natural language processing was essential. Given the idiosyncrasies of narrative feedback in medical education, standard natural language processing packages were not adequate for predicting student outcomes. Rather, employing qualitative techniques including repeated member checking and iterative revision resulted in a useful and salient predictive model.
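The feature scheme this abstract describes (manually curated topic groups plus word counts feeding a predictive model) can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the topic word lists, training comments, labels, and choice of logistic regression are all invented for the example.

```python
# Illustrative sketch: keyword-topic counts + word count as features for a
# predictive model of student performance from narrative comments.
# All topic phrase lists and training data below are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical topic groups: words/phrases manually grouped into concepts.
TOPICS = {
    "praise": ["excellent", "outstanding", "strong fund of knowledge"],
    "concern": ["needs improvement", "gaps in knowledge", "below expectations"],
}

def featurize(comment: str) -> list:
    """Per-topic phrase counts plus overall word count for one comment."""
    text = comment.lower()
    topic_counts = [sum(text.count(p) for p in phrases)
                    for phrases in TOPICS.values()]
    return topic_counts + [len(text.split())]

# Tiny invented training set: label 1 = later flagged for low performance.
comments = [
    "Excellent presentation, strong fund of knowledge.",
    "Outstanding work-up, excellent differential.",
    "Some gaps in knowledge, needs improvement on plans.",
    "Performance below expectations; gaps in knowledge noted.",
    "Excellent rapport with patients.",
    "Needs improvement in organization; below expectations.",
]
labels = [0, 0, 1, 1, 0, 1]

X = np.array([featurize(c) for c in comments])
model = LogisticRegression().fit(X, labels)

# Concern-laden comments should receive a higher predicted probability.
p_concern = model.predict_proba([featurize("Gaps in knowledge, needs improvement.")])[0, 1]
p_praise = model.predict_proba([featurize("Excellent, outstanding student.")])[0, 1]
```

A real system would, as the article notes, require iterative review of the topic groups with faculty and checks for bias before such features could be trusted.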
Affiliation(s)
- Christina Maimone
- Associate director of research data services, Northwestern IT Research Computing Services, Northwestern University, Evanston, Illinois, USA
- Brigid M. Dolan
- Associate professor of medicine and medical education and director of assessment, Northwestern University Feinberg School of Medicine, Chicago, Illinois, USA
- Marianne M. Green
- Raymond H. Curry, MD Professor of Medical Education, professor of medicine, and vice dean for education, Northwestern University Feinberg School of Medicine, Chicago, Illinois, USA
- Sandra M. Sanguino
- Associate professor of pediatrics and senior associate dean of medical education, Northwestern University Feinberg School of Medicine, Chicago, Illinois, USA
- Patricia M. Garcia
- Professor of obstetrics and gynecology and medical education, Northwestern University Feinberg School of Medicine, Chicago, Illinois, USA
- Celia Laird O’Brien
- Assistant professor of medical education and assistant dean of program evaluation and accreditation, Northwestern University Feinberg School of Medicine, Chicago, Illinois, USA
13
Mooney CJ, Pascoe JM, Blatt AE, Lang VJ, Kelly MS, Braun MK, Burch JE, Stone RT. Predictors of faculty narrative evaluation quality in medical school clerkships. MEDICAL EDUCATION 2022; 56:1223-1231. [PMID: 35950329 DOI: 10.1111/medu.14911] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/02/2022] [Revised: 08/01/2022] [Accepted: 08/08/2022] [Indexed: 06/15/2023]
Abstract
INTRODUCTION Narrative approaches to assessment provide meaningful and valid representations of trainee performance. Yet, narratives are frequently perceived as vague, nonspecific and low quality. To date, there is little research examining factors associated with narrative evaluation quality, particularly in undergraduate medical education. The purpose of this study was to examine associations of faculty- and student-level characteristics with the quality of faculty member's narrative evaluations of clerkship students. METHODS The authors reviewed faculty narrative evaluations of 50 students' clinical performance in their inpatient medicine and neurology clerkships, resulting in 165 and 87 unique evaluations in the respective clerkships. The authors evaluated narrative quality using the Narrative Evaluation Quality Instrument (NEQI). The authors used linear mixed effects modelling to predict total NEQI score. Explanatory covariates included the following: time to evaluation completion, number of weeks spent with student, faculty total weeks on service per year, total faculty years in clinical education, student gender, faculty gender, and an interaction term between student and faculty gender. RESULTS Significantly higher narrative evaluation quality was associated with a shorter time to evaluation completion, with NEQI scores decreasing by approximately 0.3 points every 10 days following students' rotations (p = .004). Additionally, women faculty had statistically higher quality narrative evaluations with NEQI scores 1.92 points greater than men faculty (p = .012). All other covariates were not significant. CONCLUSIONS The quality of faculty members' narrative evaluations of medical students was associated with time to evaluation completion and faculty gender but not faculty experience in clinical education, faculty weeks on service, or the amount of time spent with students. 
These findings advance understanding of how to improve the quality of narrative evaluations, which is imperative given assessment models that will increase both the volume of, and reliance on, narratives.
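The modelling approach this abstract describes (predicting narrative quality from time to completion, with evaluations nested within faculty) can be sketched with a linear mixed-effects model. The data below are simulated, not the study's; the simulated effect size merely mirrors the reported direction (quality falling roughly 0.3 points per 10 days).

```python
# Sketch of a linear mixed-effects model predicting narrative-quality (NEQI)
# scores from time to evaluation completion, with a random intercept per
# faculty member. All data are simulated; coefficients are illustrative only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_faculty, n_evals = 25, 8

rows = []
for f in range(n_faculty):
    faculty_effect = rng.normal(0, 1.0)   # random intercept per faculty
    for _ in range(n_evals):
        days = rng.uniform(0, 60)         # days until evaluation completed
        # Simulated trend: quality drops ~0.3 NEQI points per 10 days.
        neqi = 15 - 0.03 * days + faculty_effect + rng.normal(0, 0.5)
        rows.append({"faculty": f, "days": days, "neqi": neqi})

df = pd.DataFrame(rows)
result = smf.mixedlm("neqi ~ days", df, groups=df["faculty"]).fit()
slope = result.params["days"]             # expected to be negative
```

Grouping by faculty accounts for the fact that each rater contributes multiple, correlated evaluations, which is why an ordinary regression would understate the uncertainty here.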
Affiliation(s)
- Christopher J Mooney
- School of Medicine and Dentistry, University of Rochester, Rochester, New York, USA
- Jennifer M Pascoe
- School of Medicine and Dentistry, University of Rochester, Rochester, New York, USA
- Amy E Blatt
- School of Medicine and Dentistry, University of Rochester, Rochester, New York, USA
- Valerie J Lang
- School of Medicine and Dentistry, University of Rochester, Rochester, New York, USA
- Melanie K Braun
- School of Medicine and Dentistry, University of Rochester, Rochester, New York, USA
- Jaclyn E Burch
- School of Medicine and Dentistry, University of Rochester, Rochester, New York, USA
14
Branfield Day L, Rassos J, Billick M, Ginsburg S. 'Next steps are…': An exploration of coaching and feedback language in EPA assessment comments. MEDICAL TEACHER 2022; 44:1368-1375. [PMID: 35944554 DOI: 10.1080/0142159x.2022.2098098] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/15/2023]
Abstract
PURPOSE Entrustable Professional Activities (EPA) assessments are intended to facilitate meaningful, low-stakes coaching and feedback, partly through the provision of written comments. We sought to explore EPA assessment comments provided to internal medicine (IM) residents for evidence of feedback and coaching language as well as politeness. METHODS We collected all written comments from EPA assessments of communication from a first-year IM resident cohort at the University of Toronto. Sensitized by politeness theory, we analyzed data using principles of constructivist grounded theory. RESULTS Nearly all EPA assessments (94%) contained written feedback based on focused clinical encounters. The majority of comments demonstrated coaching language, including phrases like 'don't forget to,' and 'next steps are,' followed by specific suggestions for improvement. A variety of words, including 'autonomy' and 'independence' denoted entrustment decisions. Linguistic politeness strategies such as hedging were pervasive, seemingly to minimize harm to the supervisor-trainee relationship. CONCLUSION Evidence of written coaching feedback suggests that EPA assessment comments are being used as intended as a means of formative feedback to promote learning. Yet, the frequent use of polite language suggests that EPAs may be higher-stakes than expected, highlighting a need for changes to the assessment culture and improved feedback literacy.
Affiliation(s)
- Leora Branfield Day
- Department of Medicine, Temerty Faculty of Medicine, University of Toronto, Toronto, Canada
- James Rassos
- Department of Medicine, Temerty Faculty of Medicine, University of Toronto, Toronto, Canada
- Maxime Billick
- Department of Medicine, Temerty Faculty of Medicine, University of Toronto, Toronto, Canada
- Shiphra Ginsburg
- Department of Medicine, Temerty Faculty of Medicine, University of Toronto, Toronto, Canada
- Wilson Centre for Research in Education, Toronto, Canada
15
Chakroun M, Dion VR, Ouellet K, Graillon A, Désilets V, Xhignesse M, St-Onge C. Narrative Assessments in Higher Education: A Scoping Review to Identify Evidence-Based Quality Indicators. ACADEMIC MEDICINE : JOURNAL OF THE ASSOCIATION OF AMERICAN MEDICAL COLLEGES 2022; 97:1699-1706. [PMID: 35612917 DOI: 10.1097/acm.0000000000004755] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/15/2023]
Abstract
PURPOSE Narrative comments are increasingly used in assessment to document trainees' performance and to make important decisions about academic progress. However, little is known about how to document the quality of narrative comments, since traditional psychometric analysis cannot be applied. The authors aimed to generate a list of quality indicators for narrative comments, to identify recommendations for writing high-quality narrative comments, and to document factors that influence the quality of narrative comments used in assessments in higher education. METHOD The authors conducted a scoping review according to Arksey & O'Malley's framework. The search strategy yielded 690 articles from 6 databases. Team members screened abstracts for inclusion and exclusion, then extracted numerical and qualitative data based on predetermined categories. Numerical data were used for descriptive analysis. The authors completed the thematic analysis of qualitative data with iterative discussions until they achieved consensus for the interpretation of the results. RESULTS After the full-text review of 213 selected articles, 47 were included. Through the thematic analysis, the authors identified 7 quality indicators, 12 recommendations for writing quality narratives, and 3 factors that influence the quality of narrative comments used in assessment. The 7 quality indicators are (1) describes performance with a focus on particular elements (attitudes, knowledge, skills); (2) provides a balanced message between positive elements and elements needing improvement; (3) provides recommendations to learners on how to improve their performance; (4) compares the observed performance with an expected standard of performance; (5) provides justification for the mark/score given; (6) uses language that is clear and easily understood; and (7) uses a nonjudgmental style. 
CONCLUSIONS Assessors can use these quality indicators and recommendations to write high-quality narrative comments, thus reinforcing the appropriate documentation of trainees' performance, facilitating solid decision making about trainees' progression, and enhancing the impact of narrative feedback for both learners and programs.
Affiliation(s)
- Molk Chakroun
- M. Chakroun is a PhD student, Faculty of Medicine and Health Sciences, Université de Sherbrooke, Sherbrooke, Québec, Canada; ORCID: https://orcid.org/0000-0002-0518-1782
- Vincent R Dion
- V.R. Dion was research assistant, Paul Grand'Maison de la Société des médecins de l'Université de Sherbrooke Research Chair in Medical Education, at the time of this work, and is now a first-year medical student, Université de Sherbrooke, Sherbrooke, Québec, Canada
- Kathleen Ouellet
- K. Ouellet is research coordinator, Centre de pédagogie et des sciences de la santé, Faculty of Medicine and Health Sciences, Université de Sherbrooke, Sherbrooke, Québec, Canada; ORCID: https://orcid.org/0000-0001-9829-151X
- Ann Graillon
- A. Graillon is associate professor, Department of Pediatrics, Faculty of Medicine and Health Sciences, Université de Sherbrooke, Sherbrooke, Québec, Canada; ORCID: https://orcid.org/0000-0003-3677-7113
- Valérie Désilets
- V. Désilets is associate professor, Department of Pediatrics, Faculty of Medicine and Health Sciences, Université de Sherbrooke, Sherbrooke, Québec, Canada; ORCID: https://orcid.org/0000-0001-7399-119X
- Marianne Xhignesse
- M. Xhignesse is full professor, Department of Family and Emergency Medicine, Faculty of Medicine and Health Sciences, Université de Sherbrooke, Sherbrooke, Québec, Canada; ORCID: https://orcid.org/0000-0002-3257-5912
- Christina St-Onge
- C. St-Onge is full professor, Department of Medicine, Faculty of Medicine and Health Sciences, Université de Sherbrooke, and holds the Paul Grand'Maison de la Société des médecins de l'Université de Sherbrooke Research Chair in Medical Education, Sherbrooke, Québec, Canada; ORCID: https://orcid.org/0000-0001-5313-0456
16
Menchetti I, Eagles D, Ghanem D, Leppard J, Fournier K, Cheung WJ. Gender differences in emergency medicine resident assessment: A scoping review. AEM EDUCATION AND TRAINING 2022; 6:e10808. [PMID: 36189450 PMCID: PMC9513437 DOI: 10.1002/aet2.10808] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/26/2022] [Revised: 09/05/2022] [Accepted: 09/06/2022] [Indexed: 05/26/2023]
Abstract
Background Growing literature within postgraduate medical education demonstrates that female resident physicians experience gender bias throughout their training and future careers. This scoping review aims to describe the current body of literature on gender differences in emergency medicine (EM) resident assessment. Methods We conducted a scoping review that adhered to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews guidelines. We included research involving resident physicians or fellows in EM (population and context) that focused on the impact of gender on assessments (concept). We searched seven databases from the databases' inception to April 4, 2022. Two reviewers independently screened citations, completed full-text review, and abstracted data. A third reviewer resolved any discrepancies. Results A total of 667 unique citations were identified; 10 studies were included, all conducted within the United States. Four studies reported differences in EM resident assessments attributable to gender within workplace-based assessments (qualitative comments and quantitative scores) by both attending physicians and nonphysicians. Six studies investigating clinical competency committee scores, procedural scores, and simulation-based assessments did not report any significant differences attributable to gender. Conclusions This scoping review found that gender bias exists within EM resident assessment, most notably at the level of narrative comments typically received via workplace-based assessments. Because female EM residents receive higher rates of negative or critical comments and discordant documented feedback, these findings raise concern about the added barriers female EM residents may face while progressing through residency and the impact on their clinical and professional development.
Affiliation(s)
- Debra Eagles
- Department of Emergency Medicine, University of Ottawa, Ottawa, Ontario, Canada
- School of Epidemiology and Public Health, University of Ottawa, Ottawa, Ontario, Canada
- Dana Ghanem
- Faculty of Medicine, University of Ottawa, Ottawa, Ontario, Canada
- Jennifer Leppard
- Department of Emergency Medicine, University of Ottawa, Ottawa, Ontario, Canada
- Karine Fournier
- Health Sciences Library, University of Ottawa, Ottawa, Ontario, Canada
- Warren J. Cheung
- Department of Emergency Medicine, University of Ottawa, Ottawa, Ontario, Canada
- Royal College of Physicians and Surgeons of Canada, Ottawa, Ontario, Canada
17
Clarke MJ, Frimannsdottir K. Assessment of neurosurgical resident milestone evaluation reporting and feedback processes. Neurosurg Focus 2022; 53:E5. [DOI: 10.3171/2022.1.focus21734] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/30/2021] [Accepted: 01/25/2022] [Indexed: 11/06/2022]
Abstract
OBJECTIVE
Structured performance evaluations are important for the professional development and personal growth of resident learners. This process is formalized by the Accreditation Council for Graduate Medical Education milestones assessment system. The primary aim of this study was to understand the current feedback delivery mechanism by exploring the culture of feedback, the mechanics of delivery, and the evaluation of the feedback itself.
METHODS
Face-to-face interviews were conducted with 10 neurosurgery residents exploring their perceptions of summative feedback. Coded data were analyzed qualitatively for overriding themes using the matrix framework method. A priori themes of definition of feedback, feedback delivery, and impact of feedback were combined with de novo themes discovered during analysis.
RESULTS
Trainees prioritized formative over summative feedback. Summative and milestone feedback were criticized as vague, misaligned with practice, and often perceived as erroneous. Barriers to implementation of summative feedback included the perceived veracity of feedback, high interrater variability, and inconsistent adoption of a developmental progression model. Gender bias was noted in the degree of feedback provided and the language used.
CONCLUSIONS
Trainees' perceptions of feedback revealed multiple areas for improvement. This paper can serve as a baseline for studying improvements to the milestone feedback process and optimizing learning.
Affiliation(s)
- Michelle J. Clarke
- Department of Neurologic Surgery, Mayo Clinic, Rochester, Minnesota
- Katrin Frimannsdottir
- Department of Education, Ministry of Education, Culture and Science, Reykjavik, Iceland
18
Concordance of Narrative Comments with Supervision Ratings Provided During Entrustable Professional Activity Assessments. J Gen Intern Med 2022; 37:2200-2207. [PMID: 35710663 PMCID: PMC9296736 DOI: 10.1007/s11606-022-07509-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 09/08/2021] [Accepted: 03/24/2022] [Indexed: 10/18/2022]
Abstract
BACKGROUND Use of EPA-based entrustment-supervision ratings to determine a learner's readiness to assume patient care responsibilities is expanding. OBJECTIVE In this study, we investigate the correlation between narrative comments and supervision ratings assigned during ad hoc assessments of medical students' performance of EPA tasks. DESIGN Data from assessments completed for students enrolled in the clerkship phase over 2 academic years were used to extract a stratified random sample of 100 narrative comments for review by an expert panel. PARTICIPANTS A review panel comprising faculty with specific expertise related to their roles within the EPA program provided a "gold standard" supervision rating using the comments provided by the original assessor. MAIN MEASURES Interrater reliability (IRR) among members of the review panel and correlation coefficients (CC) between expert ratings and supervision ratings from the original assessors. KEY RESULTS IRR among members of the expert panel ranged from .536 for comments associated with focused history taking to .833 for complete physical exam. CCs (Kendall's coefficient of concordance, W) between panel members' supervision ratings and the ratings provided by the original assessors for history taking, physical examination, and oral presentation comments were .668, .697, and .735, respectively. The supervision ratings of the expert panel had the highest degree of correlation with ratings provided during assessments done by master assessors, faculty trained to assess students across clinical contexts. Correlation between supervision ratings provided with the narrative comments at the time of observation and supervision ratings assigned by the expert panel differed by clinical discipline, perhaps reflecting the value placed on, and comfort level with, assessment of the task in a given specialty.
CONCLUSIONS To realize the full educational and catalytic effect of EPA assessments, assessors must apply established performance expectations and provide high-quality narrative comments aligned with the criteria.
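Kendall's coefficient of concordance (W), the agreement statistic reported above, can be computed directly from a raters-by-subjects matrix of ratings. A minimal sketch (tie correction omitted; the ratings below are invented, not the study's data):

```python
# Minimal computation of Kendall's coefficient of concordance (W),
# measuring agreement among multiple raters ranking the same subjects.
# Tie correction is omitted; the example ratings are invented.
import numpy as np
from scipy.stats import rankdata

def kendalls_w(ratings: np.ndarray) -> float:
    """ratings: shape (m raters, n subjects). Returns W in [0, 1]."""
    m, n = ratings.shape
    ranks = np.apply_along_axis(rankdata, 1, ratings)  # rank within each rater
    rank_sums = ranks.sum(axis=0)                      # per-subject rank totals
    s = ((rank_sums - rank_sums.mean()) ** 2).sum()    # spread of the totals
    return 12.0 * s / (m ** 2 * (n ** 3 - n))

# Invented supervision ratings (1-4 scale) from 3 raters for 5 students.
ratings = np.array([
    [1, 2, 3, 4, 4],
    [1, 3, 2, 4, 4],
    [2, 2, 3, 3, 4],
])
w = kendalls_w(ratings)                                   # high agreement
w_perfect = kendalls_w(np.array([[1, 2, 3], [1, 2, 3]]))  # identical rankings
```

W ranges from 0 (no agreement) to 1 (identical rankings across raters), which is why values like .668-.735 indicate substantial but imperfect concordance between panel and assessor ratings.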
19
Consequence in Competency-Based Education: Individualize, but Do Not Compromise. J Gen Intern Med 2022; 37:2146-2148. [PMID: 35581450 PMCID: PMC9296725 DOI: 10.1007/s11606-022-07668-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 10/18/2022]
20
de Jong LH, Bok HGJ, Schellekens LH, Kremer WDJ, Jonker FH, van der Vleuten CPM. Shaping the right conditions in programmatic assessment: how quality of narrative information affects the quality of high-stakes decision-making. BMC MEDICAL EDUCATION 2022; 22:409. [PMID: 35643442 PMCID: PMC9148525 DOI: 10.1186/s12909-022-03257-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 10/08/2021] [Accepted: 03/10/2022] [Indexed: 06/15/2023]
Abstract
BACKGROUND Programmatic assessment is increasingly being implemented within competency-based health professions education. In this approach, a multitude of low-stakes assessment activities are aggregated into a holistic high-stakes decision on the student's performance. High-stakes decisions need to be of high quality. Part of this quality is whether an examiner perceives saturation of information when making a holistic decision. The purpose of this study was to explore the influence of narrative information on the perception of saturation of information during the interpretative process of high-stakes decision-making. METHODS In this mixed-method intervention study, the quality of the recorded narrative information (i.e., feedback and reflection) was manipulated within multiple portfolios to investigate its influence on 1) the perception of saturation of information and 2) the examiner's interpretative approach in making a high-stakes decision. Data were collected through surveys, screen recordings of the portfolio assessments, and semi-structured interviews. Descriptive statistics and template analysis were applied to analyze the data. RESULTS The examiners perceived saturation of information less frequently in the portfolios with low-quality narrative feedback. They also mentioned consistency of information as a factor that influenced their perception of saturation. Although examiners generally had idiosyncratic approaches to assessing a portfolio, variations arose in response to certain triggers, such as noticeable deviations in the student's performance and in the quality of narrative feedback. CONCLUSION The perception of saturation of information seemed to be influenced by the quality of the narrative feedback and, to a lesser extent, by the quality of reflection. These results emphasize the importance of high-quality narrative feedback in making robust decisions about portfolios that are expected to be more difficult to assess.
Furthermore, within these "difficult" portfolios, examiners adapted their interpretative process, reacting to the intervention and other triggers with an iterative and responsive approach.
Affiliation(s)
- Lubberta H de Jong
- Department Population Health Sciences, Faculty of Veterinary Medicine, Utrecht University, Utrecht, The Netherlands
- Harold G J Bok
- Department Population Health Sciences, Faculty of Veterinary Medicine, Utrecht University, Utrecht, The Netherlands
- Lonneke H Schellekens
- Department Population Health Sciences, Faculty of Veterinary Medicine, Utrecht University, Utrecht, The Netherlands
- Faculty of Social and Behavioural Sciences, Educational Consultancy and Professional Development, Utrecht University, Utrecht, The Netherlands
- Wim D J Kremer
- Department Population Health Sciences, Faculty of Veterinary Medicine, Utrecht University, Utrecht, The Netherlands
- F Herman Jonker
- Department Population Health Sciences, Section Farm Animal Health, Faculty of Veterinary Medicine, Utrecht University, Utrecht, The Netherlands
- Cees P M van der Vleuten
- Department of Educational Development and Research, Faculty of Health, Medicine and Life Sciences, Maastricht University, Maastricht, The Netherlands
21
Ginsburg S, Stroud L, Lynch M, Melvin L, Kulasegaram K. Beyond the ratings: gender effects in written comments from clinical teaching assessments. ADVANCES IN HEALTH SCIENCES EDUCATION : THEORY AND PRACTICE 2022; 27:355-374. [PMID: 35088152 DOI: 10.1007/s10459-021-10088-1] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/25/2021] [Accepted: 12/12/2021] [Indexed: 06/14/2023]
Abstract
Assessment of clinical teachers by learners is problematic. Construct-irrelevant factors influence ratings, and women teachers often receive lower ratings than men. However, most studies focus only on numeric scores. Therefore, the authors analyzed written comments on 4032 teacher assessments, representing 282 women and 448 men teachers in one Department of Medicine, to explore gender differences. NVivo was used to search for 61 evidence- and theoretically-based terms purported to reflect teaching excellence, which were analyzed using 2 × 2 chi-squared tests. The Linguistic Inquiry and Word Count (LIWC) tool was used to categorize comment data, which were analyzed using linear regressions. The only significant difference in the NVivo analysis was that men were more likely than women to have the word "available" appear in a comment (OR 1.4, p < .05). A subset of LIWC variables showed significant gender differences, but all effects were modest. Men teachers had more positive emotion words written about them, while negative emotion words appeared equally. Significant differences were more often associated with the gender of the residents who wrote the comments than with the gender of the teachers. For example, women residents used more social and gender-related words (β 1.87, p < 0.001) and fewer words related to power or achievement (β -3.78, p < 0.001) than men residents. Profound gender differences were not found in teacher assessment comments in this large, diverse academic department of medicine, which differs from other studies. The authors explore possible reasons, including differences in departmental culture and issues related to the methods used.
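The per-term 2 × 2 chi-squared test described above (term present vs. absent, by teacher gender) can be sketched as follows; the counts are invented for illustration and are not the study's data.

```python
# Sketch of one 2x2 chi-squared test of term presence by teacher gender,
# with an odds ratio, as applied per search term in the analysis above.
# The contingency counts below are invented.
import numpy as np
from scipy.stats import chi2_contingency

#                 term present, term absent
table = np.array([[120, 1880],   # comments about men teachers (invented)
                  [ 60, 1440]])  # comments about women teachers (invented)

chi2, p, dof, expected = chi2_contingency(table)

# Odds ratio comparing the odds of the term appearing for men vs. women.
odds_ratio = (table[0, 0] / table[0, 1]) / (table[1, 0] / table[1, 1])
```

With 61 terms tested, a real analysis would also need to consider multiple-comparison correction, one reason a single OR of 1.4 at p < .05 is a modest finding.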
Affiliation(s)
- Shiphra Ginsburg
- Department of Medicine, Sinai Health System, Temerty Faculty of Medicine, University of Toronto, Toronto, Ontario, Canada
- Wilson Centre for Research in Education, University Health Network and University of Toronto, Toronto, Ontario, Canada
- Canada Research Chair in Health Professions Education, Ottawa, Canada
- Mount Sinai Hospital, 433-600, University Ave., Toronto, Ontario, M5G 1X5, Canada
- Lynfa Stroud
- Wilson Centre for Research in Education, University Health Network and University of Toronto, Toronto, Ontario, Canada
- Department of Medicine, Sunnybrook HSC and Temerty Faculty of Medicine, Toronto, Ontario, Canada
- Meghan Lynch
- Dalla Lana School of Public Health, University of Toronto, Toronto, Ontario, Canada
- Lindsay Melvin
- Department of Medicine, Temerty Faculty of Medicine, University of Toronto, Toronto, Ontario, Canada
- Kulamakan Kulasegaram
- Wilson Centre for Research in Education, University Health Network and University of Toronto, Toronto, Ontario, Canada
- Department of Family and Community Medicine, Temerty Faculty of Medicine, Toronto, Ontario, Canada
- Temerty Chair in Learner Assessment and Program Evaluation, University of Toronto, Toronto, Ontario, Canada
22
Do Resident Archetypes Influence the Functioning of Programs of Assessment? EDUCATION SCIENCES 2022. [DOI: 10.3390/educsci12050293] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/01/2023]
Abstract
While most case studies consider how programs of assessment may influence residents’ achievement, we engaged in a qualitative, multiple case study to model how resident engagement and performance can reciprocally influence the program of assessment. We conducted virtual focus groups with program leaders from four residency training programs from different disciplines (internal medicine, emergency medicine, neurology, and rheumatology) and institutions. We facilitated discussion with live screen-sharing to (1) improve upon a previously derived model of programmatic assessment and (2) explore how different resident archetypes (sample profiles) may influence their program of assessment. Participants agreed that differences in resident engagement and performance can influence their programs of assessment in some (mal)adaptive ways. For residents who are disengaged and weakly performing (of whom there are a few), significantly more time is spent making sense of problematic evidence, arriving at a decision, and generating recommendations. For residents who are engaged and performing strongly (the vast majority), by contrast, significantly less effort is thought to be spent on discussion and formalized recommendations. These findings motivate us to fulfill the potential of programmatic assessment by more intentionally and strategically challenging those who are engaged and strongly performing, and by anticipating ways that weakly performing residents may strain existing processes.
23
Desire paths for workplace assessment in postgraduate anaesthesia training: analysing informal processes to inform assessment redesign. Br J Anaesth 2022; 128:997-1005. [DOI: 10.1016/j.bja.2022.03.013] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/31/2021] [Revised: 02/24/2022] [Accepted: 03/06/2022] [Indexed: 11/17/2022] Open
24
Spooner M, Duane C, Uygur J, Smyth E, Marron B, Murphy PJ, Pawlikowska T. Self-regulatory learning theory as a lens on how undergraduate and postgraduate learners respond to feedback: A BEME scoping review: BEME Guide No. 66. MEDICAL TEACHER 2022; 44:3-18. [PMID: 34666584 DOI: 10.1080/0142159x.2021.1970732] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
INTRODUCTION Little is known of the processes by which feedback affects learners to influence achievement. This review maps what is known of how learners interact with feedback, to better understand how feedback affects learning strategies, and to explore enhancing and inhibiting factors. METHODS Pilot searching indicated a wide range of interpretations of feedback and study designs, prompting the use of scoping methodology. Inclusion criteria comprised: (i) learners (undergraduate, postgraduate, continuing education) who regularly receive feedback, and (ii) studies that associated feedback with subsequent learner reaction. Screening was performed independently in duplicate. Data extraction and synthesis occurred via an iterative consensus approach. Self-regulatory learning theory (SRL) was used as the conceptual framework. RESULTS Of 4253 abstracts reviewed, 232 were included in the final synthesis. Understandings of feedback are diverse; a minority adopt recognised definitions. Distinct learner responses to feedback can be categorized as cognitive, behavioural, affective, and contextual, with complex, overlapping interactions. Importantly, emotional responses are commonplace; factors mediating them are pivotal in learner recipience. CONCLUSION Feedback benefits learners most when focussed on learner needs, via engagement in bi-directional dialogue. Learner emotions must be supported, with the construction of positive learner-teacher relationships. A developmental agenda is key to learners' acceptance of feedback and enhancing future learning.
Affiliation(s)
- Muirne Spooner
- Health Professions Education Centre, RCSI University of Medicine and Health Sciences, Dublin, Ireland
- Catherine Duane
- Health Professions Education Centre, RCSI University of Medicine and Health Sciences, Dublin, Ireland
- Jane Uygur
- Health Professions Education Centre, RCSI University of Medicine and Health Sciences, Dublin, Ireland
- Erica Smyth
- Health Professions Education Centre, RCSI University of Medicine and Health Sciences, Dublin, Ireland
- Brian Marron
- Health Professions Education Centre, RCSI University of Medicine and Health Sciences, Dublin, Ireland
- Paul J Murphy
- Health Professions Education Centre, RCSI University of Medicine and Health Sciences, Dublin, Ireland
- Teresa Pawlikowska
- Health Professions Education Centre, RCSI University of Medicine and Health Sciences, Dublin, Ireland
25
Kelleher M, Kinnear B, Sall DR, Weber DE, DeCoursey B, Nelson J, Klein M, Warm EJ, Schumacher DJ. Warnings in early narrative assessment that might predict performance in residency: signal from an internal medicine residency program. PERSPECTIVES ON MEDICAL EDUCATION 2021; 10:334-340. [PMID: 34476730 PMCID: PMC8633188 DOI: 10.1007/s40037-021-00681-w] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/05/2020] [Revised: 07/08/2021] [Accepted: 07/11/2021] [Indexed: 05/10/2023]
Abstract
INTRODUCTION Narrative assessment data are valuable in understanding struggles in resident performance. However, it remains unknown which themes in narrative data occurring early in training may indicate a higher likelihood of struggles later in training, allowing programs to intervene sooner. METHODS Using learning analytics, we identified 26 internal medicine residents in three cohorts that were below expected entrustment during training. We compiled all narrative data in the first 6 months of training for these residents as well as 13 typically performing residents for comparison. Narrative data were blinded for all 39 residents during the initial coding phases of an inductive thematic analysis. RESULTS Many similarities were identified between the two cohorts. Codes that differed between typical and lower entrusted residents were grouped into six themes: three explicit/manifest and three implicit/latent. The explicit/manifest themes focused on specific aspects of resident performance, with assessors describing 1) gaps in attention to detail, 2) communication deficits with patients, and 3) difficulty recognizing the "big picture" in patient care. The three implicit/latent themes, focused on how narrative data were written, were: 1) feedback described as a deficiency rather than an opportunity to improve, 2) normative comparisons identifying a resident as being behind their peers, and 3) warning of possible risk to patient care. DISCUSSION Clinical competency committees (CCCs) usually rely on accumulated data and trends. Using the themes in this paper while reviewing narrative comments may help CCCs with earlier recognition and better allocation of resources to support residents' development.
Affiliation(s)
- Matthew Kelleher
- Department of Pediatrics, University of Cincinnati College of Medicine, Cincinnati, OH, USA
- Benjamin Kinnear
- Department of Pediatrics, University of Cincinnati College of Medicine, Cincinnati, OH, USA
- Dana R Sall
- HonorHealth Internal Medicine Residency Program, Scottsdale, Arizona and University of Arizona College of Medicine, Phoenix, AZ, USA
- Danielle E Weber
- Department of Pediatrics, University of Cincinnati College of Medicine, Cincinnati, OH, USA
- Bailey DeCoursey
- Department of Pediatrics, University of Cincinnati College of Medicine, Cincinnati, OH, USA
- Jennifer Nelson
- Department of Pediatrics, University of Cincinnati College of Medicine, Cincinnati, OH, USA
- Melissa Klein
- Department of Pediatrics, University of Cincinnati College of Medicine, Cincinnati, OH, USA
- Eric J Warm
- Department of Internal Medicine, University of Cincinnati College of Medicine, Cincinnati, OH, USA
- Daniel J Schumacher
- Department of Pediatrics, University of Cincinnati College of Medicine, Cincinnati, OH, USA
26
Roshan A, Wagner N, Acai A, Emmerton-Coughlin H, Sonnadara RR, Scott TM, Karimuddin AA. Comparing the Quality of Narrative Comments by Rotation Setting. JOURNAL OF SURGICAL EDUCATION 2021; 78:2070-2077. [PMID: 34301523 DOI: 10.1016/j.jsurg.2021.06.012] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/16/2021] [Accepted: 06/20/2021] [Indexed: 06/13/2023]
Abstract
OBJECTIVE To investigate the effect of rotation setting on trainee-directed narrative comments within a Canadian General Surgery residency program. The primary outcome was the quality of narrative comments, evaluated with the McMaster Narrative Comment Rating Scale (MNCRS) across five domains: valence of language, degree of correction versus reinforcement, specificity, actionability, and overall usefulness. As distributed medical education in the postgraduate training context becomes more prevalent, delineating differences in feedback between sites will be imperative, as these differences may affect how narrative comments are interpreted by clinical competency committee (CCC) members. DESIGN, SETTING, AND PARTICIPANTS A retrospective analysis of 2,469 assessments obtained between July 1, 2014 and May 5, 2019 from the General Surgery Residency Program at the University of British Columbia (UBC) was conducted. Narrative comments were rated using the MNCRS, a validated instrument for evaluating the quality of narrative comments. A repeated-measures analysis of variance (ANOVA) was conducted to explore the impact of rotation setting (academic, urban tertiary, distributed urban, and distributed rural) on the quality of narrative feedback. RESULTS Overall, the quality of the narrative comments varied substantially between and within rotation settings. Academic sites tended to provide more actionable comments (p = 0.01) and more corrective versus reinforcing comments compared with other sites (p's < 0.01). Comments from the urban tertiary rotation setting were consistently lower in quality across all scale categories compared with other settings (p's < 0.01). CONCLUSION Rotation setting has a significant effect on the quality of faculty feedback for trainees. Faculty development on the provision of feedback is necessary, regardless of rotation setting, and should combine rotation-specific needs with overarching program goals to ensure trainees and clinical competency committees receive high-quality narrative feedback.
Affiliation(s)
- Aishwarya Roshan
- University of British Columbia, Vancouver, British Columbia, Canada
- Natalie Wagner
- Office of Professional Development & Educational Scholarship, Queen's University, Kingston, Ontario, Canada
- Anita Acai
- Department of Psychology, Neuroscience & Behavior, McMaster University, Hamilton, Ontario, Canada; Department of Psychiatry and Behavioural Neurosciences, McMaster University, Hamilton, Ontario, Canada; Office of Education Science, Department of Surgery, McMaster University, Hamilton, Ontario, Canada
- Heather Emmerton-Coughlin
- Department of Surgery, University of British Columbia, Vancouver, British Columbia, Canada; Department of Surgery, Royal Jubilee Hospital, Victoria, British Columbia, Canada
- Ranil R Sonnadara
- Office of Education Science, Department of Surgery, McMaster University, Hamilton, Ontario, Canada; Department of Surgery, University of Toronto, Toronto, Ontario, Canada
- Tracy M Scott
- Department of Surgery, University of British Columbia, Vancouver, British Columbia, Canada; Department of Surgery, St. Paul's Hospital, Vancouver, British Columbia, Canada
- Ahmer A Karimuddin
- Department of Surgery, University of British Columbia, Vancouver, British Columbia, Canada; Department of Surgery, St. Paul's Hospital, Vancouver, British Columbia, Canada
27
Ginsburg S, Watling CJ, Schumacher DJ, Gingerich A, Hatala R. Numbers Encapsulate, Words Elaborate: Toward the Best Use of Comments for Assessment and Feedback on Entrustment Ratings. ACADEMIC MEDICINE : JOURNAL OF THE ASSOCIATION OF AMERICAN MEDICAL COLLEGES 2021; 96:S81-S86. [PMID: 34183607 DOI: 10.1097/acm.0000000000004089] [Citation(s) in RCA: 23] [Impact Index Per Article: 7.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
The adoption of entrustment ratings in medical education is based on a seemingly simple premise: to align workplace-based supervision with resident assessment. Yet it has been difficult to operationalize this concept. Entrustment rating forms combine numeric scales with comments and are embedded in a programmatic assessment framework, which encourages the collection of a large quantity of data. The implicit assumption that more is better has led to an untamable volume of data that competency committees must grapple with. In this article, the authors explore the roles of numbers and words on entrustment rating forms, examining the intended and optimal use(s) of each, with particular attention to the words. They also unpack the problematic issue of dual-purposing words for both assessment and feedback. Words have enormous potential to elaborate, to contextualize, and to instruct; to realize this potential, educators must be crystal clear about their use. The authors set forth a number of possible ways to reconcile these tensions by more explicitly aligning words to purpose. For example, educators could focus written comments solely on assessment; create assessment encounters distinct from feedback encounters; or use different words collected from the same encounter to serve distinct feedback and assessment purposes. Finally, the authors address the tyranny of documentation created by programmatic assessment and urge caution in yielding to the temptation to reduce words to numbers to make them manageable. Instead, they encourage educators to preserve some educational encounters purely for feedback, and to consider that not all words need to become data.
Affiliation(s)
- Shiphra Ginsburg
- S. Ginsburg is professor of medicine, Department of Medicine, Sinai Health System and Faculty of Medicine, University of Toronto, scientist, Wilson Centre for Research in Education, University of Toronto, Toronto, Ontario, Canada, and Canada Research Chair in Health Professions Education; ORCID: http://orcid.org/0000-0002-4595-6650
- Christopher J Watling
- C.J. Watling is professor and director, Centre for Education Research and Innovation, Schulich School of Medicine & Dentistry, Western University, London, Ontario, Canada; ORCID: https://orcid.org/0000-0001-9686-795X
- Daniel J Schumacher
- D.J. Schumacher is associate professor of pediatrics, Cincinnati Children's Hospital Medical Center and University of Cincinnati College of Medicine, Cincinnati, Ohio; ORCID: https://orcid.org/0000-0001-5507-8452
- Andrea Gingerich
- A. Gingerich is assistant professor, Northern Medical Program, University of Northern British Columbia, Prince George, British Columbia, Canada; ORCID: https://orcid.org/0000-0001-5765-3975
- Rose Hatala
- R. Hatala is professor, Department of Medicine, and director, Clinical Educator Fellowship, Center for Health Education Scholarship, University of British Columbia, Vancouver, British Columbia, Canada; ORCID: https://orcid.org/0000-0003-0521-2590
28
Chan T, Oswald A, Hauer KE, Caretta-Weyer HA, Nousiainen MT, Cheung WJ. Diagnosing conflict: Conflicting data, interpersonal conflict, and conflicts of interest in clinical competency committees. MEDICAL TEACHER 2021; 43:765-773. [PMID: 34182879 DOI: 10.1080/0142159x.2021.1925101] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
Clinical competency committees (CCCs) are increasingly used within health professions education, as their decisions are thought to be more defensible and fairer than those generated by previous training promotion processes. However, as with most group-based processes, it is inevitable that conflict will arise. In this paper, the authors explore three ways conflict may arise within a CCC: (1) conflicting data submissions that are presented to the committee, (2) conflicts between members of the committee, and (3) conflicts of interest between a specific committee member and a trainee. The authors describe each of these conflict situations, dissect out the underlying problems, and explore possible solutions based on the current literature.
Affiliation(s)
- Teresa Chan
- Faculty Development, Faculty of Health Sciences, McMaster University, Hamilton, Canada
- Division of Emergency Medicine, Department of Medicine, McMaster University, Hamilton, Canada
- McMaster program for Education Research, Innovation, and Theory (MERIT), Hamilton, Canada
- Anna Oswald
- Competency Based Medical Education, Office of Postgraduate Medical Education, University of Alberta, Edmonton, Canada
- CanMEDS Clinician Educator, Royal College of Physicians and Surgeons of Canada, Edmonton, Canada
- Department of Medicine, Faculty of Medicine and Dentistry, University of Alberta, Edmonton, Canada
- Karen E Hauer
- Competency Assessment and Professional Standards, San Francisco, CA, USA
- Department of Medicine, University of California, San Francisco School of Medicine, San Francisco, CA, USA
- Holly A Caretta-Weyer
- Department of Emergency Medicine, Stanford University School of Medicine, Palo Alto, CA, USA
- Warren J Cheung
- Department of Emergency Medicine, University of Ottawa, Ottawa, Canada
- Senior Clinician Investigator, Ottawa Hospital Research Institute, Ottawa, Canada
- CanMEDS Clinician Educator, Royal College of Physicians and Surgeons of Canada, Ottawa, Canada
29
Olvet DM, Willey JM, Bird JB, Rabin JM, Pearlman RE, Brenner J. Third year medical students impersonalize and hedge when providing negative upward feedback to clinical faculty. MEDICAL TEACHER 2021; 43:700-708. [PMID: 33657329 DOI: 10.1080/0142159x.2021.1892619] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/21/2023]
Abstract
Medical students provide clinical teaching faculty with feedback on their skills as educators through anonymous surveys at the end of their clerkship rotation. Because faculty are in a position of power, students are hesitant to provide candid feedback. Our objective was to determine if medical students were willing to provide negative upward feedback to clinical faculty and to describe how they conveyed their feedback. A qualitative analysis of third year medical students' open-ended comments from evaluations of six clerkships was performed using politeness theory as a conceptual framework. Students were asked to describe how the clerkship enhanced their learning and how it could be improved. Midway through the academic year, an instruction to provide full names of faculty/residents was added. Overall, there were significantly more comments on what worked well than suggestions for improvement regarding faculty/residents. Instructing students to name names increased the rate of naming from 35% to 75% for what worked well and from 13% to 39% for suggestions for improvement. Hedging language was included in 61% of suggestions for improvement, but only 2% of what worked well. Students described the variability of their experience, used passive language, and qualified negative experiences with positive ones. Medical students may use linguistic strategies, such as impersonalizing and hedging, to mitigate the impact of negative upward feedback. Working towards a culture that supports upward feedback would allow students to feel more comfortable providing candid comments about their experience.
Affiliation(s)
- Doreen M Olvet
- Department of Science Education, Donald and Barbara Zucker School of Medicine at Hofstra/Northwell, Hempstead, New York, USA
- Joanne M Willey
- Department of Science Education, Donald and Barbara Zucker School of Medicine at Hofstra/Northwell, Hempstead, New York, USA
- Jeffrey B Bird
- Department of Science Education, Donald and Barbara Zucker School of Medicine at Hofstra/Northwell, Hempstead, New York, USA
- Jill M Rabin
- Department of Obstetrics & Gynecology, Donald and Barbara Zucker School of Medicine at Hofstra/Northwell, Hempstead, New York, USA
- R Ellen Pearlman
- Department of Science Education, Donald and Barbara Zucker School of Medicine at Hofstra/Northwell, Hempstead, New York, USA
- Department of Medicine, Donald and Barbara Zucker School of Medicine at Hofstra/Northwell, Hempstead, New York, USA
- Judith Brenner
- Department of Science Education, Donald and Barbara Zucker School of Medicine at Hofstra/Northwell, Hempstead, New York, USA
- Department of Medicine, Donald and Barbara Zucker School of Medicine at Hofstra/Northwell, Hempstead, New York, USA
30
Young JQ, Holmboe ES, Frank JR. Competency-Based Assessment in Psychiatric Education: A Systems Approach. Psychiatr Clin North Am 2021; 44:217-235. [PMID: 34049645 DOI: 10.1016/j.psc.2020.12.005] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 10/21/2022]
Abstract
Medical education programs are failing to meet the health needs of patients and communities. Misalignments exist on multiple levels, including content (what trainees learn), pedagogy (how trainees learn), and culture (why trainees learn). To address these challenges effectively, competency-based assessment (CBA) for psychiatric medical education must simultaneously produce life-long learners who can self-regulate their own growth and trustworthy processes that determine and accelerate readiness for independent practice. The key to effectively doing so is situating assessment within a carefully designed system with several, critical, interacting components: workplace-based assessment, ongoing faculty development, learning analytics, longitudinal coaching, and fit-for-purpose clinical competency committees.
Affiliation(s)
- John Q Young
- Department of Psychiatry, Donald and Barbara Zucker School of Medicine at Hofstra/Northwell and the Zucker Hillside Hospital at Northwell Health, Glen Oaks, NY, USA
- Eric S Holmboe
- Accreditation Council for Graduate Medical Education, 401 North Michigan Avenue, Chicago, IL 60611, USA
- Jason R Frank
- Royal College of Physicians and Surgeons of Canada, 774 Echo Drive, Ottawa, Ontario K1S 5N8, Canada; Education, Department of Emergency Medicine, University of Ottawa, Ottawa, Ontario, Canada
31
Zhang N, Blissett S, Anderson D, O'Sullivan P, Qasim A. Race and Gender Bias in Internal Medicine Program Director Letters of Recommendation. J Grad Med Educ 2021; 13:335-344. [PMID: 34178258 PMCID: PMC8207902 DOI: 10.4300/jgme-d-20-00929.1] [Citation(s) in RCA: 24] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 08/14/2020] [Revised: 10/25/2020] [Accepted: 02/17/2021] [Indexed: 11/06/2022] Open
Abstract
BACKGROUND While program director (PD) letters of recommendation (LOR) are subject to bias, especially against those underrepresented in medicine, these letters are one of the most important factors in fellowship selection. Bias manifests in LOR in a number of ways, including biased use of agentic and communal terms, doubt-raising language, and description of career trajectory. To reduce bias, specialty organizations have recommended standardized PD LOR. OBJECTIVE This study examined PD LOR for applicants to a cardiology fellowship program to determine the mechanism of how bias is expressed and whether the 2017 Alliance for Academic Internal Medicine (AAIM) guidelines reduce bias. METHODS Fifty-six LOR from applicants invited to interview at a cardiology fellowship during the 2019 and 2020 application cycles were selected using convenience sampling. LOR for underrepresented (Black, Latinx, women) and non-underrepresented applicants were analyzed using directed qualitative content analysis. Two coders used an iteratively refined codebook to code the transcripts. Data were analyzed using outputs from these codes, analytical memos were maintained, and themes were summarized. RESULTS With the AAIM guidelines, there appeared to be reduced use of communal language for underrepresented applicants, which may represent less bias. However, in LOR both adherent and not adherent to the guidelines, underrepresented applicants were still more likely to be described using communal language, doubt-raising language, and career trajectory bias. CONCLUSIONS PDs used language in a biased way to describe underrepresented applicants in LOR. The AAIM guidelines reduced but did not eliminate this bias. We provide recommendations to PDs and the AAIM on how to continue to work to reduce this bias.
Affiliation(s)
- Neil Zhang
- Neil Zhang, MD, MS, is Clinical Instructor, Department of Medicine, University of California, San Francisco
- Sarah Blissett
- Sarah Blissett, MD, MHPE, is Assistant Professor, Division of Cardiology, Department of Medicine, and Centre for Education Research and Innovation, Schulich School of Medicine and Dentistry, Western University, London, Ontario, Canada
- David Anderson
- David Anderson, MD, is a Cardiology Fellow, Division of Cardiology, Department of Medicine, University of California, San Francisco
- Patricia O'Sullivan
- Patricia O'Sullivan, EdD, is Professor, School of Medicine, University of California, San Francisco
- Atif Qasim
- Atif Qasim, MD, MSCE, is Cardiology Fellowship Program Director and Associate Professor, Division of Cardiology, Department of Medicine, University of California, San Francisco
32
Young JQ, Frank JR, Holmboe ES. Advancing Workplace-Based Assessment in Psychiatric Education: Key Design and Implementation Issues. Psychiatr Clin North Am 2021; 44:317-332. [PMID: 34049652 DOI: 10.1016/j.psc.2021.03.005] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 01/18/2023]
Abstract
With the adoption of competency-based medical education, assessment has shifted from the traditional classroom domains of knows and knows how to the workplace domain of doing. This workplace-based assessment has 2 purposes: assessment of learning (summative feedback) and assessment for learning (formative feedback). What the trainee does becomes the basis for identifying growth edges and determining readiness for advancement and, ultimately, independent practice. High-quality workplace-based assessment programs require thoughtful choices about the framework of assessment, the tools themselves, the platforms used, and the contexts in which the assessments take place, with an emphasis on direct observation.
Affiliation(s)
- John Q Young
- Department of Psychiatry, Donald and Barbara Zucker School of Medicine at Hofstra/Northwell and, Zucker Hillside Hospital at Northwell Health, 75-59 263rd Street, Kaufman Building, Glen Oaks, NY 11004, USA
- Jason R Frank
- Department of Emergency Medicine, University of Ottawa, Royal College of Physicians and Surgeons of Canada, 774 Echo Drive, Ottawa, Ontario K1S 5N8, Canada
- Eric S Holmboe
- Accreditation Council for Graduate Medical Education, ACGME, 401 North Michigan Avenue, Chicago, IL 60611, USA
33
Humphrey-Murto S, Walker K, Aggarwal S, Dhillon NPK, Rauscher S, Wood TJ. The impact of local health professions education grants: is it worth the investment? CANADIAN MEDICAL EDUCATION JOURNAL 2021; 12:44-53. [PMID: 34249190 PMCID: PMC8263034 DOI: 10.36834/cmej.71357] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
BACKGROUND Local grants programs are important since funding for medical education research is limited. Understanding which factors predict successful outcomes is highly relevant to administrators. The purpose of this project was to identify factors that contribute to the publication of local medical education grants in a Canadian context. METHODS Surveys were distributed to previous Department of Innovation in Medical Education (DIME) and Department of Medicine (DOM) grant recipients (n = 115) to gather information pertaining to PI demographics and research outcomes. A backward logistic regression was used to determine the effects of several variables on publication success. RESULTS The overall publication rate was 64/115 (56%). Due to missing data, 91 grants were included in the logistic regression. Variables associated with a higher rate of publication were: cross-departmental (compared to single-department) grants, OR = 2.82 (p = 0.04); having the work presented, OR = 3.30 (p = 0.01); and multiple grant acquisition, OR = 3.85 (p = 0.005). CONCLUSION Although preliminary, our data suggest that increasing research publications from local grants may be facilitated by pooling funds across departments, making research presentations mandatory, and allowing successful researchers to re-apply.
Affiliation(s)
- Kyle Walker
- Department of Medicine, University of Ottawa, Ontario, Canada
- Scott Rauscher
- Department of Innovation in Medical Education Research Support Unit, University of Ottawa, Ontario, Canada
- Timothy J Wood
- Department of Innovation in Medical Education (DIME), University of Ottawa, Ontario, Canada
34
Valentine N, Durning S, Shanahan EM, Schuwirth L. Fairness in human judgement in assessment: a hermeneutic literature review and conceptual framework. ADVANCES IN HEALTH SCIENCES EDUCATION : THEORY AND PRACTICE 2021; 26:713-738. [PMID: 33123837 DOI: 10.1007/s10459-020-10002-1] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/13/2020] [Accepted: 10/19/2020] [Indexed: 06/11/2023]
Abstract
Human judgement is widely used in workplace-based assessment despite criticism that it does not meet standards of objectivity. There is an ongoing push within the literature to better embrace subjective human judgement in assessment not as a 'problem' to be corrected psychometrically but as legitimate perceptions of performance. Taking a step back and changing perspectives to focus on the fundamental underlying value of fairness in assessment may help re-set the traditional objective approach and provide a more relevant way to determine the appropriateness of subjective human judgements. Changing focus to look at what is 'fair' human judgement in assessment, rather than what is 'objective' human judgement in assessment allows for the embracing of many different perspectives, and the legitimising of human judgement in assessment. However, this requires addressing the question: what makes human judgements fair in health professions assessment? This is not a straightforward question with a single unambiguously 'correct' answer. In this hermeneutic literature review we aimed to produce a scholarly knowledge synthesis and understanding of the factors, definitions and key questions associated with fairness in human judgement in assessment and a resulting conceptual framework, with a view to informing ongoing further research. The complex construct of fair human judgement could be conceptualised through values (credibility, fitness for purpose, transparency and defensibility) which are upheld at an individual level by characteristics of fair human judgement (narrative, boundaries, expertise, agility and evidence) and at a systems level by procedures (procedural fairness, documentation, multiple opportunities, multiple assessors, validity evidence) which help translate fairness in human judgement from concepts into practical components.
Affiliation(s)
- Nyoli Valentine
- Prideaux Health Professions Education, Flinders University, Bedford Park 5042, SA, Australia
- Steven Durning
- Center for Health Professions Education, Uniformed Services University of the Health Sciences, Bethesda, MD, USA
- Ernst Michael Shanahan
- Prideaux Health Professions Education, Flinders University, Bedford Park 5042, SA, Australia
- Lambert Schuwirth
- Prideaux Health Professions Education, Flinders University, Bedford Park 5042, SA, Australia
35
Roller D, Eberhard L. Quality over quantity - development of communicative and social competence in dentistry at the Medical Faculty of Heidelberg. GMS JOURNAL FOR MEDICAL EDUCATION 2021; 38:Doc60. [PMID: 33824896 PMCID: PMC7994886 DOI: 10.3205/zma001456] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Figures] [Subscribe] [Scholar Register] [Received: 04/06/2020] [Revised: 09/23/2020] [Accepted: 10/01/2020] [Indexed: 06/12/2023]
Abstract
Given the context of implementing new licensing regulations for dentistry, this project report not only describes the current educational situation regarding communicative and social competency in dental education at the Medical Faculty of Heidelberg, but also introduces supportive and expanded measures that include medical educators and clinical staff. Given the less-than-satisfactory skills acquisition observed in students and experienced practitioners, it is necessary to develop communicative and social competence not just in university courses with few hours of instruction, but also to practice and continually improve these skills in an educational clinical setting that serves as a system for teaching and learning knowledge and skills.
Affiliation(s)
- Doris Roller
- Medizinische Fakultät Heidelberg, Studiendekanat Zahnmedizin, HeiCuDent Lehrentwicklung, Heidelberg, Germany
- Lydia Eberhard
- Medizinische Fakultät Heidelberg, Studiendekanat Zahnmedizin, HeiCuDent Lehrentwicklung, Heidelberg, Germany
36
Gingerich A, Sebok-Syer SS, Larstone R, Watling CJ, Lingard L. Seeing but not believing: Insights into the intractability of failure to fail. MEDICAL EDUCATION 2020; 54:1148-1158. [PMID: 32562288 DOI: 10.1111/medu.14271] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/28/2020] [Revised: 05/04/2020] [Accepted: 06/10/2020] [Indexed: 06/11/2023]
Abstract
CONTEXT Inadequate documentation of observed trainee incompetence persists despite research-informed solutions targeting this failure to fail phenomenon. Documentation could be impeded if assessment language is misaligned with how supervisors conceptualise incompetence. Because frameworks tend to itemise competence while remaining vague about incompetence, assessment design may be improved by better understanding and describing how supervisors experience being confronted with a potentially incompetent trainee. METHODS Following constructivist grounded theory methodology, analysis using a constant comparison approach was iterative and informed data collection. We interviewed 22 physicians about their experiences supervising trainees who demonstrate incompetence; we quickly found that they bristled at the term 'incompetence,' so we began to use 'underperformance' in its place. RESULTS Physicians began with a belief and an expectation: all trainees should be capable of learning and progressing by applying what they learn to subsequent clinical experiences. Underperformance was therefore unexpected and evoked disbelief in supervisors, who sought alternate explanations for the surprising evidence. Supervisors conceptualised underperformance either as an inability to engage with learning due to illness, a life event or a learning disorder, such that progression stalled, or as an unwillingness to engage with learning due to lack of interest, insight or humility. CONCLUSION Physicians conceptualise underperformance as problematic progression due to insufficient engagement with learning that is unresponsive to intensified supervision. Although failure to fail tends to be framed as a reluctance to document underperformance, the prior phase of disbelief prevents confident documentation of performance and delays identification of underperformance. The findings offer further insight and possible new solutions to address under-documentation of underperformance.
Affiliation(s)
- Andrea Gingerich
- Northern Medical Program, University of Northern British Columbia, Prince George, British Columbia, Canada
- Stefanie S Sebok-Syer
- Emergency Medicine, Stanford Medicine, Stanford University, Stanford, California, USA
- Roseann Larstone
- Northern Medical Program, University of Northern British Columbia, Prince George, British Columbia, Canada
- Christopher J Watling
- Department of Clinical Neurological Sciences, Centre for Education Research and Innovation, Schulich School of Medicine and Dentistry, London, Ontario, Canada
- Lorelei Lingard
- Department of Medicine, Centre for Education Research and Innovation, Schulich School of Medicine and Dentistry, Western University, London, Ontario, Canada
37
Schuwirth LWT, van der Vleuten CPM. A history of assessment in medical education. ADVANCES IN HEALTH SCIENCES EDUCATION : THEORY AND PRACTICE 2020; 25:1045-1056. [PMID: 33113056 DOI: 10.1007/s10459-020-10003-0] [Citation(s) in RCA: 35] [Impact Index Per Article: 8.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/28/2020] [Accepted: 10/19/2020] [Indexed: 06/11/2023]
Abstract
The way quality of assessment has been perceived and assured has changed considerably over the past five decades. Originally, assessment was mainly seen as a measurement problem with the aim of telling people apart: the competent from the not competent. Logically, reproducibility or reliability and construct validity were seen as necessary and sufficient for assessment quality, and the role of human judgement was minimised. Later, assessment moved back into the authentic workplace with various workplace-based assessment (WBA) methods. Although originally approached from the same measurement framework, WBA and other assessments gradually became assessment processes that included or embraced human judgement, grounded in good support and assessment expertise. Currently, assessment is treated as a whole-system problem in which competence is evaluated from an integrated rather than a reductionist perspective. Current research therefore focuses on how to support and improve human judgement, how to triangulate assessment information meaningfully, and how to construct fairness, credibility and defensibility from a systems perspective. But, given the rapid changes in society, education and healthcare, yet another evolution in our thinking about good assessment is likely to lurk around the corner.
Affiliation(s)
- Lambert W T Schuwirth
- FHMRI: Prideaux Research in Health Professions Education, College of Medicine and Public Health, Flinders University, Sturt Road, Bedford Park, South Australia, 5042, GPO Box 2100, Adelaide, SA, 5001, Australia
- Department of Educational Development and Research, Maastricht University, Maastricht, The Netherlands
- Cees P M van der Vleuten
- FHMRI: Prideaux Research in Health Professions Education, College of Medicine and Public Health, Flinders University, Sturt Road, Bedford Park, South Australia, 5042, GPO Box 2100, Adelaide, SA, 5001, Australia
- Department of Educational Development and Research, Maastricht University, Maastricht, The Netherlands
38
Ginsburg S, Gingerich A, Kogan JR, Watling CJ, Eva KW. Idiosyncrasy in Assessment Comments: Do Faculty Have Distinct Writing Styles When Completing In-Training Evaluation Reports? ACADEMIC MEDICINE : JOURNAL OF THE ASSOCIATION OF AMERICAN MEDICAL COLLEGES 2020; 95:S81-S88. [PMID: 32769454 DOI: 10.1097/acm.0000000000003643] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/25/2023]
Abstract
PURPOSE Written comments are gaining traction as robust sources of assessment data. Compared with the structure of numeric scales, what faculty choose to write is ad hoc, leading to idiosyncratic differences in what is recorded. This study explores which aspects of writing style are determined by the faculty offering comment and which are determined by the trainee being commented upon. METHOD The authors compiled in-training evaluation report comment data, generated from 2012 to 2015 by 4 large North American internal medicine training programs. The Linguistic Inquiry and Word Count (LIWC) tool was used to categorize and quantify the language contained. Generalizability theory was used to determine whether faculty could be reliably discriminated from one another based on writing style. Correlations and ANOVAs were used to determine which styles were related to faculty or trainee demographics. RESULTS Datasets contained 23-142 faculty who provided 549-2,666 assessments on 161-989 trainees. Faculty could easily be discriminated from one another using a variety of LIWC metrics, including word count, words per sentence, and the use of "clout" words. These patterns appeared person specific and did not reflect demographic factors such as gender or rank. These metrics were similarly not consistently associated with trainee factors such as postgraduate year or gender. CONCLUSIONS Faculty seem to have detectable writing styles that are relatively stable across the trainees they assess, which may represent an under-recognized source of construct irrelevance. If written comments are to meaningfully contribute to decision making, we need to understand and account for idiosyncratic writing styles.
Affiliation(s)
- Shiphra Ginsburg
- S. Ginsburg is professor of medicine, Department of Medicine, Faculty of Medicine, University of Toronto, scientist, Wilson Centre for Research in Education, University Health Network, University of Toronto, Toronto, Ontario, Canada, and Canada Research Chair in Health Professions Education; ORCID: http://orcid.org/0000-0002-4595-6650
- Andrea Gingerich
- A. Gingerich is assistant professor, Northern Medical Program, University of Northern British Columbia, Prince George, British Columbia, Canada; ORCID: https://orcid.org/0000-0001-5765-3975
- Jennifer R Kogan
- J.R. Kogan is professor and associate dean for student success and professional development, Perelman School of Medicine at the University of Pennsylvania, Philadelphia, Pennsylvania; ORCID: https://orcid.org/0000-0001-8426-9506
- Christopher J Watling
- C.J. Watling is professor and director, Centre for Education Research and Innovation, Schulich School of Medicine & Dentistry, Western University, London, Ontario, Canada; ORCID: https://orcid.org/0000-0001-9686-795X
- Kevin W Eva
- K.W. Eva is professor and director of education research and scholarship, Department of Medicine, and associate director and senior scientist, Centre for Health Education Scholarship, University of British Columbia, Vancouver, British Columbia, Canada; ORCID: http://orcid.org/0000-0002-8672-2500
39
Tam J, Wadhwa A, Martimianakis MA, Fernando O, Regehr G. The role of previously undocumented data in the assessment of medical trainees in clinical competency committees. PERSPECTIVES ON MEDICAL EDUCATION 2020; 9:286-293. [PMID: 33025382 PMCID: PMC7550499 DOI: 10.1007/s40037-020-00624-x] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/23/2020] [Revised: 09/26/2020] [Accepted: 09/28/2020] [Indexed: 06/11/2023]
Abstract
INTRODUCTION The clinical competency committee (CCC) comprises a group of clinical faculty tasked with assessing a medical trainee's progress from multiple data sources. The use of previously undocumented data, or PUD, during CCC deliberations remains controversial. This study explored the use of previously undocumented data in conjunction with documented data in creating a meaningful assessment in a CCC. METHODS An instrumental case study of a CCC that uses previously undocumented data was conducted. A single CCC meeting was observed, followed by semi-structured individual interviews with all CCC members (n = 7). Meeting and interview transcripts were analyzed iteratively. RESULTS Documented data were perceived as limited by inaccuracy or superficiality, but sometimes served as a starting point for invoking previously undocumented data. Previously undocumented data were introduced as summary impressions, contextualizing factors, personal anecdotes and, rarely, hearsay. The purpose was to raise a potential issue for discussion, enhance and elaborate an impression, or counter an impression. Various mechanisms allowed for the responsible use of previously undocumented data: embedding these data within a structured format; sharing relevant information without commenting beyond one's scope of experience; clarifying allowable disclosure of personal contextual factors with the trainee pre-meeting; excluding previously undocumented data not widely agreed upon in decision-making; and expecting these data to have been provided as direct feedback to trainees pre-meeting. DISCUSSION Previously undocumented data appear to play a vital part in the group conversation of a CCC, creating meaningful, developmentally focused trainee assessments that cannot be achieved by documented data alone. Consideration should be given to ensuring the thoughtful incorporation of previously undocumented data as an essential part of the CCC assessment process.
Affiliation(s)
- Jennifer Tam
- Centre for Health Education Scholarship, Faculty of Medicine, University of British Columbia, Vancouver, British Columbia, Canada
- Division of Infectious Diseases, Department of Pediatrics, University of British Columbia, Vancouver, British Columbia, Canada
- Anupma Wadhwa
- Division of Infectious Diseases, Department of Pediatrics, University of Toronto, Toronto, Ontario, Canada
- Maria Athina Martimianakis
- Wilson Centre for Research in Education, University of Toronto, Toronto, ON, Canada
- Department of Paediatrics, Faculty of Medicine, University of Toronto, Toronto, ON, Canada
- Oshan Fernando
- Department of Paediatrics, The Hospital for Sick Children, Toronto, ON, Canada
- Glenn Regehr
- Centre for Health Education Scholarship, Faculty of Medicine, University of British Columbia, Vancouver, British Columbia, Canada
- Department of Surgery, University of British Columbia, Vancouver, British Columbia, Canada
40
Young JQ, Sugarman R, Schwartz J, O'Sullivan PS. Overcoming the Challenges of Direct Observation and Feedback Programs: A Qualitative Exploration of Resident and Faculty Experiences. TEACHING AND LEARNING IN MEDICINE 2020; 32:541-551. [PMID: 32529844 DOI: 10.1080/10401334.2020.1767107] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/14/2023]
Abstract
Problem: Prior studies have reported significant negative attitudes amongst both faculty and residents toward direct observation and feedback. Numerous contributing factors have been identified, including insufficient time for direct observation and feedback, poorly understood purpose, inadequate training, disbelief in the formative intent, inauthentic resident-patient clinical interactions, undermining of resident autonomy, lack of trust between the faculty-resident dyad, and low-quality feedback information that lacks credibility. Strategies are urgently needed to overcome these challenges and more effectively engage faculty and residents in direct observation and feedback. Otherwise, the primary goals of supporting both formative and summative assessment will not be realized and the viability of competency-based medical education will be threatened. Intervention: Toward this end, recent studies have recommended numerous strategies to overcome these barriers: protected time for direct observation and feedback; ongoing faculty and resident training on goals and bidirectional, co-constructed feedback; repeated direct observations and feedback within a longitudinal resident-supervisor relationship; utilization of assessment tools with evidence for validity; and monitoring for engagement. Given the complexity of the problem, it is likely that bundling multiple strategies together will be necessary to overcome the challenges. The Direct Observation Structured Feedback Program (DOSFP) incorporated many of the recommended features, including protected time for direct observation and feedback within longitudinal faculty-resident relationships. Using a qualitative thematic approach, the authors conducted semi-structured interviews, during February and March 2019, with 10 supervisors and 10 residents. Participants were asked to reflect on their experiences. Interview guide questions explored key themes from the literature on direct observation and feedback. Transcripts were anonymized. Two authors independently and iteratively coded the transcripts. Coding was theory-driven and differences were discussed until consensus was reached. The authors then explored the relationships between the codes and used a semantic approach to construct themes. Context: The DOSFP was implemented in a psychiatry continuity clinic for second- and third-year residents. Impact: Faculty and residents were aligned around the goals. They both perceived the DOSFP as focused on growth rather than judgment even though residents understood that the feedback had both formative and summative purposes. The DOSFP facilitated educational alliances characterized by trust and respect. With repeated practice within a longitudinal relationship, trainees dropped the performance orientation and described their interactions with patients as authentic. Residents generally perceived the feedback as credible, described feedback quality as high, and valued the two-way conversation. However, when receiving feedback with which they did not agree, residents demurred or, at most, would ask a clarifying question, but then internally discounted the feedback. Lessons Learned: Direct observation and structured feedback programs that bundle recent recommendations may overcome many of the challenges identified by previous research. Yet, residents discounted disagreeable feedback, illustrating a significant limitation and the need for other strategies that help residents reconcile conflict between external data and one's self-appraisal.
Affiliation(s)
- John Q Young
- Department of Psychiatry, Donald and Barbara Zucker School of Medicine at Hofstra/Northwell, Hempstead, New York, USA
- Rebekah Sugarman
- Department of Psychiatry, The Zucker Hillside Hospital at Northwell Health, Glen Oaks, New York, USA
- Jessica Schwartz
- Department of Psychiatry, The Zucker Hillside Hospital at Northwell Health, Glen Oaks, New York, USA
- Patricia S O'Sullivan
- Office of Medical Education, University of California San Francisco, San Francisco, California, USA
41
Tavares W, Kuper A, Kulasegaram K, Whitehead C. The compatibility principle: on philosophies in the assessment of clinical competence. ADVANCES IN HEALTH SCIENCES EDUCATION : THEORY AND PRACTICE 2020; 25:1003-1018. [PMID: 31677146 DOI: 10.1007/s10459-019-09939-9] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/22/2019] [Accepted: 10/25/2019] [Indexed: 06/10/2023]
Abstract
The array of different philosophical positions underlying contemporary views on competence, assessment strategies and justification has led to advances in assessment science. Challenges may arise when these philosophical positions are not considered in assessment design. These can include (a) a logical incompatibility leading to varied or difficult interpretations of assessment results, (b) an "anything goes" approach, and (c) uncertainty regarding when and in what context various philosophical positions are appropriate. We propose a compatibility principle that recognizes that different philosophical positions commit assessors/assessment researchers to particular ideas, assumptions and commitments, and applies a logic of philosophically informed, assessment-based inquiry. Assessment is optimized when its underlying philosophical position produces congruent, aligned and coherent views on constructs, assessment strategies, justification and their interpretations. As a way forward we argue that (a) there can and should be variability in the philosophical positions used in assessment, and these should be clearly articulated to promote understanding of assumptions and make sense of justifications; (b) we should focus on developing the merits, boundaries and relationships within and/or between philosophical positions in assessment; (c) we should examine a core set of principles related to the role and relevance of philosophical positions; (d) we should elaborate strategies and criteria to delineate compatible from incompatible positions; and (e) we should articulate a need to broaden knowledge/competencies related to these issues. The broadened use of philosophical positions in assessment in the health professions affects the "state of play" and can undermine assessment programs. This may be overcome with attention to the alignment between underlying assumptions/commitments.
Affiliation(s)
- Walter Tavares
- The Wilson Centre, Department of Medicine, University of Toronto/University Health Network, 200 Elizabeth Street, 1ES-565, Toronto, ON, M5G 2C4, Canada
- Post-MD Education (Post-Graduate Medical Education/Continued Professional Development), University of Toronto, Toronto, ON, Canada
- Ayelet Kuper
- The Wilson Centre, Department of Medicine, University of Toronto/University Health Network, 200 Elizabeth Street, 1ES-565, Toronto, ON, M5G 2C4, Canada
- Division of General Internal Medicine, Sunnybrook Health Sciences Centre, Toronto, Canada
- Department of Medicine, Faculty of Medicine, University of Toronto, Toronto, Canada
- Kulamakan Kulasegaram
- The Wilson Centre, Department of Medicine, University of Toronto/University Health Network, 200 Elizabeth Street, 1ES-565, Toronto, ON, M5G 2C4, Canada
- Department of Family and Community Medicine, Women's College Hospital, Faculty of Medicine, University of Toronto, Toronto, Canada
- MD Program, University of Toronto, Toronto, Canada
- Cynthia Whitehead
- The Wilson Centre, Department of Medicine, University of Toronto/University Health Network, 200 Elizabeth Street, 1ES-565, Toronto, ON, M5G 2C4, Canada
- Department of Family and Community Medicine, Women's College Hospital, Faculty of Medicine, University of Toronto, Toronto, Canada
42
Schuwirth LWT, Durning SJ, King SM. Assessment of clinical reasoning: three evolutions of thought. Diagnosis (Berl) 2020; 7:191-196. [PMID: 32182208 DOI: 10.1515/dx-2019-0096] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/01/2019] [Accepted: 02/12/2020] [Indexed: 02/17/2024]
Abstract
Although assessing clinical reasoning is almost universally considered central to medical education, it is not a straightforward issue. In the past decades, our insights into clinical reasoning as a phenomenon, and consequently the best ways to assess it, have undergone significant changes. In this article, we describe how the interplay between fundamental research, practical applications, and evaluative research has pushed the evolution of our thinking and our practices in assessing clinical reasoning.
Affiliation(s)
- Lambert W T Schuwirth
- Prideaux Centre for Research in Health Professions Education, Flinders University, Adelaide, South Australia, Australia
- Svetlana M King
- Prideaux Centre for Research in Health Professions Education, Flinders University, Adelaide, South Australia, Australia
43
Thoma B, Hall AK, Clark K, Meshkat N, Cheung WJ, Desaulniers P, Ffrench C, Meiwald A, Meyers C, Patocka C, Beatty L, Chan TM. Evaluation of a National Competency-Based Assessment System in Emergency Medicine: A CanDREAM Study. J Grad Med Educ 2020; 12:425-434. [PMID: 32879682 PMCID: PMC7450748 DOI: 10.4300/jgme-d-19-00803.1] [Citation(s) in RCA: 26] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 11/12/2019] [Revised: 02/11/2020] [Accepted: 05/20/2020] [Indexed: 01/08/2023] Open
Abstract
BACKGROUND In 2018, Canadian postgraduate emergency medicine (EM) programs began implementing a competency-based medical education (CBME) assessment program. Studies evaluating these programs have focused on broad outcomes using data from national bodies and lack data to support program-specific improvement. OBJECTIVE We evaluated the implementation of a CBME assessment program within and across programs to identify successes and opportunities for improvement at the local and national levels. METHODS Program-level data from the 2018 resident cohort were amalgamated and analyzed. The number of entrustable professional activity (EPA) assessments (overall and for each EPA) and the timing of resident promotion through program stages were compared between programs and to the guidelines provided by the national EM specialty committee. Total EPA observations from each program were correlated with the number of EM and pediatric EM rotations. RESULTS Data from 15 of 17 (88%) programs containing 9842 EPA observations from 68 of 77 (88%) EM residents in the 2018 cohort were analyzed. Average numbers of EPAs observed per resident in each program varied from 92.5 to 229.6, correlating with the number of blocks spent on EM and pediatric EM (r = 0.83, P < .001). Relative to the specialty committee's guidelines, residents were promoted later than expected (eg, one-third of residents had a 2-month delay to promotion from the first to second stage) and with fewer EPA observations than suggested. CONCLUSIONS There was demonstrable variation in EPA-based assessment numbers and promotion timelines between programs and with national guidelines.
44
Ginsburg S, Kogan JR, Gingerich A, Lynch M, Watling CJ. Taken Out of Context: Hazards in the Interpretation of Written Assessment Comments. ACADEMIC MEDICINE : JOURNAL OF THE ASSOCIATION OF AMERICAN MEDICAL COLLEGES 2020; 95:1082-1088. [PMID: 31651432 DOI: 10.1097/acm.0000000000003047] [Citation(s) in RCA: 19] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/25/2023]
Abstract
PURPOSE Written comments are increasingly valued for assessment; however, a culture of politeness and the conflation of assessment with feedback lead to ambiguity. Interpretation requires reading between the lines, which is untenable with large volumes of qualitative data. For computer analytics to help with interpreting comments, the factors influencing interpretation must be understood. METHOD Using constructivist grounded theory, the authors interviewed 17 experienced internal medicine faculty at 4 institutions between March and July, 2017, asking them to interpret and comment on 2 sets of words: those that might be viewed as "red flags" (e.g., good, improving) and those that might be viewed as signaling feedback (e.g., should, try). Analysis focused on how participants ascribed meaning to words. RESULTS Participants struggled to attach meaning to words presented acontextually. Four aspects of context were deemed necessary for interpretation: (1) the writer; (2) the intended and potential audiences; (3) the intended purpose(s) for the comments, including assessment, feedback, and the creation of a permanent record; and (4) the culture, including norms around assessment language. These contextual factors are not always apparent; readers must balance the inevitable need to interpret others' language with the potential hazards of second-guessing intent. CONCLUSIONS Comments are written for a variety of intended purposes and audiences, sometimes simultaneously; this reality creates dilemmas for faculty attempting to interpret these comments, with or without computer assistance. Attention to context is essential to reduce interpretive uncertainty and ensure that written comments can achieve their potential to enhance both assessment and feedback.
Collapse
Affiliation(s)
- Shiphra Ginsburg
- S. Ginsburg is professor of medicine, Department of Medicine, Faculty of Medicine, University of Toronto, scientist, Wilson Centre for Research in Education, University Health Network, University of Toronto, Toronto, Ontario, Canada, and Canada Research Chair in Health Professions Education; ORCID: http://orcid.org/0000-0002-4595-6650. J.R. Kogan is professor of medicine, Department of Medicine, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania. A. Gingerich is assistant professor, Northern Medical Program, University of Northern British Columbia, Prince George, British Columbia, Canada; ORCID: http://orcid.org/0000-0001-5765-3975. M. Lynch is postdoctoral fellow, Dalla Lana School of Public Health, University of Toronto, Toronto, Ontario, Canada. C.J. Watling is professor, Department of Clinical Neurological Sciences, scientist, Centre for Education Research and Innovation, and associate dean of postgraduate medical education, Schulich School of Medicine and Dentistry, Western University, London, Ontario, Canada; ORCID: http://orcid.org/0000-0001-9686-795X
45
Ramani S, Könings KD, Ginsburg S, van der Vleuten CPM. Relationships as the Backbone of Feedback: Exploring Preceptor and Resident Perceptions of Their Behaviors During Feedback Conversations. ACADEMIC MEDICINE : JOURNAL OF THE ASSOCIATION OF AMERICAN MEDICAL COLLEGES 2020; 95:1073-1081. [PMID: 31464736 DOI: 10.1097/acm.0000000000002971] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
PURPOSE Newer definitions of feedback emphasize learner engagement throughout the conversation, yet teacher and learner perceptions of each other's behaviors during feedback exchanges have been less well studied. This study explored perceptions of residents and faculty regarding effective behaviors and strategies during feedback conversations and factors that affected provision and acceptance of constructive feedback. METHOD Six outpatient internal medicine preceptors and 12 residents at Brigham and Women's Hospital participated (2 dyads per preceptor) between September 2017 and May 2018. Their scheduled feedback conversations were observed by the lead investigator, and one-on-one interviews were conducted with each member of the dyad to explore their perceptions of the conversation. Interviews were transcribed and analyzed for key themes. Because participants repeatedly emphasized teacher-learner relationships as key to meaningful feedback, a framework method of analysis was performed using the 3-step relationship-centered communication model REDE (relationship establishment, development, and engagement). RESULTS After participant narratives were mapped onto the REDE model, key themes were identified and categorized under the major steps of the model. First, establishment: revisit and renew established relationships, preparation allows deeper reflection on goals, set a collaborative agenda. Second, development: provide a safe space to invite self-reflection, make it about a skill or action. Third, engagement: enhance self-efficacy at the close, establish action plans for growth. CONCLUSIONS Feedback conversations between longitudinal teacher-learner dyads could be mapped onto a relationship-centered communication framework. Our study suggests that behaviors that enable trusting and supportive teacher-learner relationships can form the foundation of meaningful feedback.
Affiliation(s)
- Subha Ramani
- S. Ramani is associate professor of medicine, Harvard Medical School, director, Scholars in Medical Education Pathway, Internal Medicine Residency Program, Brigham and Women's Hospital, and leader of research and scholarship, Harvard Macy Institute, Boston, Massachusetts; ORCID: https://orcid.org/0000-0002-8360-4031. K.D. Könings is associate professor, Department of Educational Development and Research and School of Health Professions Education, Faculty of Health, Medicine and Life Sciences, Maastricht University, Maastricht, the Netherlands; ORCID: https://orcid.org/0000-0003-0063-8218. S. Ginsburg is professor of medicine (respirology) and scientist, Wilson Centre for Research in Education, University of Toronto, Toronto, Ontario, Canada. C.P.M. van der Vleuten is director, School of Health Professions Education, and professor of education, Department of Educational Development and Research, Faculty of Health, Medicine and Life Sciences, Maastricht University, Maastricht, the Netherlands; ORCID: https://orcid.org/0000-0001-6802-3119
46
Bakke BM, Sheu L, Hauer KE. Fostering a Feedback Mindset: A Qualitative Exploration of Medical Students' Feedback Experiences With Longitudinal Coaches. ACADEMIC MEDICINE : JOURNAL OF THE ASSOCIATION OF AMERICAN MEDICAL COLLEGES 2020; 95:1057-1065. [PMID: 32576764 DOI: 10.1097/acm.0000000000003012] [Citation(s) in RCA: 18] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/12/2023]
Abstract
PURPOSE Feedback is important for medical students' development. Recent conceptualizations of feedback as a dialogue between feedback provider and recipient point to longitudinal relationships as a facilitator of effective feedback discussions. This study illuminates how medical students experience feedback within a longitudinal relationship with a physician coach. METHOD In this qualitative study, second-year medical students from the University of California, San Francisco, School of Medicine participated in semistructured interviews that explored their experiences discussing feedback within longitudinal, nonevaluative coaching relationships. Interviews occurred between May and October 2018. Interview questions addressed students' experiences receiving feedback from their coach, how and when they used this feedback, and how their relationship with their coach influenced engagement in feedback discussions. Interviews were analyzed using constructivist grounded theory. RESULTS Seventeen students participated. The authors identified 3 major themes. First, students' development of a feedback mindset: Over time, students came to view feedback as an invaluable component of their training. Second, setting the stage for feedback: Establishing feedback routines and a low-stakes environment for developing clinical skills were important facilitators of effective feedback discussions. Third, interpreting and acting upon feedback: Students described identifying, receiving, and implementing tailored and individualized feedback in an iterative fashion. As students gained comfort and trust in their coaches' feedback, they reported increasingly engaging in feedback conversations for learning. CONCLUSIONS Through recurring feedback opportunities and iterative feedback discussions with coaches, students came to view feedback as essential for growth and learning. Longitudinal coaching relationships can positively influence how students conceptualize and engage in feedback discussions.
Affiliation(s)
- Brian M Bakke
- B.M. Bakke is a third-year medical student, University of California, San Francisco, School of Medicine, San Francisco, California. L. Sheu is assistant professor, Department of Medicine, University of California, San Francisco, School of Medicine, San Francisco, California. K.E. Hauer is professor, Department of Medicine, University of California, San Francisco, School of Medicine, San Francisco, California; ORCID: https://orcid.org/0000-0002-8812-4045
47
Chan TM, Sebok-Syer SS, Sampson C, Monteiro S. The Quality of Assessment of Learning (Qual) Score: Validity Evidence for a Scoring System Aimed at Rating Short, Workplace-Based Comments on Trainee Performance. TEACHING AND LEARNING IN MEDICINE 2020; 32:319-329. [PMID: 32013584 DOI: 10.1080/10401334.2019.1708365] [Citation(s) in RCA: 20] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
Construct: This study seeks to determine validity evidence for the Quality of Assessment for Learning score (QuAL score), which was created to evaluate short qualitative comments that are related to specific scores entered into a workplace-based assessment, common within the competency-based medical education (CBME) context. Background: In the age of CBME, qualitative comments play an important role in clarifying the quantitative scores rendered by observers at the bedside. Currently there are few practical tools that evaluate mixed data (e.g., associated score-and-comment data), other than the comprehensive Completed Clinical Evaluation Report Rating tool (CCERR), which was originally derived to rate end-of-rotation reports. Approach: A multi-center, randomized cohort-based rating exercise was conducted to evaluate the rating properties of the QuAL score as compared to the CCERR. One group rated comments using the QuAL score, and the other group rated comments using the CCERR. A generalizability study (G-study) and a decision study (D-study) were conducted to determine the number of meta-raters needed for a reliable rating (phi-coefficient target of >0.80). Both scores were correlated against raters' gestalt perceptions of utility for both faculty and residents reading the scores. Results: Twenty-five meta-raters from 20 sites participated in this rating exercise. The G-study revealed that the CCERR group (n = 13) rated the comments with a very high reliability (Phi = 0.97). Meanwhile, the QuAL group (n = 12) rated the comments with a similarly high reliability (Phi = 0.97). The QuAL score required only two raters to reach an acceptable target reliability of >0.80, while the CCERR required three. The QuAL score correlated with perceptions of utility (meta-rater usefulness, Pearson's r = 0.69, p < 0.001; perceived usefulness for trainee, r = 0.74, p < 0.001). The CCERR performed similarly, correlating with perceived faculty utility (r = 0.67, p < 0.001) and resident utility (r = 0.79, p < 0.001). Conclusions: The QuAL score is a reliable rating score that correlates well with perceptions of utility. The QuAL score may be useful for rating shorter comments generated by workplace-based assessments.
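The D-study logic reported here (how many raters are needed to push reliability past 0.80) can be sketched with the Spearman-Brown prophecy formula. This is a generic illustration with a hypothetical single-rater reliability, not the study's variance components:

```python
def spearman_brown(r_single: float, k: int) -> float:
    """Project the reliability of a mean across k raters from single-rater reliability."""
    return k * r_single / (1 + (k - 1) * r_single)

def min_raters(r_single: float, target: float = 0.80, max_k: int = 20) -> int:
    """Smallest number of raters whose averaged score reaches the target reliability."""
    for k in range(1, max_k + 1):
        if spearman_brown(r_single, k) >= target:
            return k
    raise ValueError(f"target {target} not reachable with {max_k} raters")
```

For example, under an assumed single-rater reliability of 0.67, averaging two raters projects to roughly 0.80, so `min_raters(0.67)` returns 2; a less reliable instrument needs more raters to cross the same threshold, which is the trade-off the QuAL-versus-CCERR comparison quantifies.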
Affiliation(s)
- Teresa M Chan
- Division of Emergency Medicine, McMaster University, Hamilton, Ontario, Canada
- Christopher Sampson
- Department of Emergency Medicine, University of Missouri, Columbia, Missouri, USA
- Sandra Monteiro
- Health Research Methods, Evidence and Impact, McMaster University, Hamilton, Ontario, Canada
48
Buckley C, Natesan S, Breslin A, Gottlieb M. Finessing Feedback: Recommendations for Effective Feedback in the Emergency Department. Ann Emerg Med 2020; 75:445-451. [DOI: 10.1016/j.annemergmed.2019.05.016] [Citation(s) in RCA: 13] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/30/2018] [Indexed: 01/11/2023]
49
Castanelli DJ, Weller JM, Molloy E, Bearman M. Shadow systems in assessment: how supervisors make progress decisions in practice. ADVANCES IN HEALTH SCIENCES EDUCATION : THEORY AND PRACTICE 2020; 25:131-147. [PMID: 31485893 DOI: 10.1007/s10459-019-09913-5] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/23/2019] [Accepted: 08/26/2019] [Indexed: 06/10/2023]
Abstract
Medical educators are tasked with decisions on trainee progression and credentialing for independent clinical practice, which requires robust evidence from workplace-based assessment. It is unclear how the current promotion of workplace-based assessment as a pedagogical approach to promote learning has impacted this use of assessments for decision-making; meeting both these purposes may present unforeseen challenges. In this study we explored how supervisors make decisions on trainee progress in practice. We conducted semi-structured interviews with 19 supervisors of postgraduate anesthesia training across Australia and New Zealand and undertook thematic analysis of the transcripts. Supervisors looked beyond the formal assessment portfolio when making performance decisions. They instead used assessment 'shadow systems' based on their own observation and confidential judgements from trusted colleagues. Supervisors' decision making involved expert judgement of the perceived salient aspects of performance and the standard to be attained while making allowances for the opportunities and constraints of the local learning environment. Supervisors found making progress decisions an emotional burden. When faced with difficult decisions, they found ways to share the responsibility and balance the potential consequences for the trainee with the need to protect their patients. Viewed through the lens of community of practice theory, the development of assessment 'shadow systems' indicates a lack of alignment between local workplace assessment practices and the prescribed programmatic assessment approach to high-stakes progress decisions. Avenues for improvement include cooperative development of formal assessment processes to better meet local needs or incorporating the information in 'shadow systems' into formal assessment processes.
Affiliation(s)
- Damian J Castanelli
- School of Clinical Sciences at Monash Health, Monash University, Clayton, VIC, Australia.
- Department of Anaesthesia and Perioperative Medicine, Monash Health, Clayton, VIC, Australia.
- Jennifer M Weller
- Centre for Medical and Health Sciences Education, School of Medicine, University of Auckland, Auckland, New Zealand
- Department of Anaesthesia, Auckland City Hospital, Auckland, New Zealand
- Elizabeth Molloy
- Department of Medical Education, Melbourne Medical School, University of Melbourne, Melbourne, VIC, Australia
- Margaret Bearman
- Centre for Research and Assessment in Digital Learning (CRADLE), Deakin University, Geelong, VIC, Australia
50
Dory V, Cummings BA, Mondou M, Young M. Nudging clinical supervisors to provide better in-training assessment reports. PERSPECTIVES ON MEDICAL EDUCATION 2020; 9:66-70. [PMID: 31848999 PMCID: PMC7012977 DOI: 10.1007/s40037-019-00554-3] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/01/2023]
Abstract
INTRODUCTION In-training assessment reports (ITARs) summarize assessment during a clinical placement to inform decision-making and provide formal feedback to learners. Faculty development is an effective but resource-intensive means of improving the quality of completed ITARs. We examined whether the quality of completed ITARs could be improved by 'nudges' from the format of ITAR forms. METHODS Our first intervention consisted of placing the section for narrative comments at the beginning of the form, and using prompts for recommendations (Do more, Keep doing, Do less, Stop doing). In a second intervention, we provided a hyperlink to a detailed assessment rubric and shortened the checklist section. We analyzed a sample of 360 de-identified completed ITARs from six disciplines across the three academic years in which the different versions of the ITAR were used. Two raters independently scored the ITARs using the Completed Clinical Evaluation Report Rating (CCERR) scale. We tested for differences between versions of the ITAR forms using a one-way ANOVA for the total CCERR score, and MANOVA for the nine CCERR item scores. RESULTS Changes to the form structure (nudges) improved the quality of information generated as measured by the CCERR instrument, from a total score of 18.0/45 (SD 2.6) to 18.9/45 (SD 3.1) and 18.8/45 (SD 2.6), p = 0.04. Specifically, comments were more balanced, more detailed, and more actionable compared with the original ITAR. DISCUSSION Nudge interventions, which are inexpensive and feasible, should be included in multipronged approaches to improve the quality of assessment reports.
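The one-way ANOVA the authors ran on total CCERR scores across form versions can be sketched in a few lines. The F statistic below is the standard between-group over within-group mean-square ratio; the sample data in the usage comment are invented for illustration, not the study's scores:

```python
def one_way_anova_f(groups: list) -> float:
    """One-way ANOVA F statistic: between-group mean square / within-group mean square.

    `groups` is a list of samples, one per condition (here, one per ITAR version).
    """
    n_total = sum(len(g) for g in groups)
    k = len(groups)
    grand_mean = sum(x for g in groups for x in g) / n_total
    means = [sum(g) / len(g) for g in groups]
    ss_between = sum(len(g) * (m - grand_mean) ** 2 for g, m in zip(groups, means))
    ss_within = sum((x - m) ** 2 for g, m in zip(groups, means) for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n_total - k))

# Invented CCERR-like totals for three form versions (not the study's data):
f_stat = one_way_anova_f([[17, 18, 19], [18, 19, 20], [18, 19, 20]])
```

The F statistic would then be compared against an F distribution with (k − 1, N − k) degrees of freedom to obtain the p value the abstract reports (p = 0.04).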
Affiliation(s)
- Valérie Dory
- Department of Medicine and Centre for Medical Education; Faculty of Medicine, McGill University, Montreal, QC, Canada.
- Beth-Ann Cummings
- Undergraduate Medical Education, Department of Medicine, and Institute of Health Sciences Education; Faculty of Medicine, McGill University, Montreal, QC, Canada
- Mélanie Mondou
- Department of Medicine and Institute of Health Sciences Education; Faculty of Medicine, McGill University, Montreal, QC, Canada
- Meredith Young
- Department of Medicine and Institute of Health Sciences Education; Faculty of Medicine, McGill University, Montreal, QC, Canada