1
Ko J, Roze des Ordons A, Ballard M, Shenkier T, Simon JE, Fyles G, Lefresne S, Hawley P, Chen C, McKenzie M, Sanders J, Bernacki R. Exploring the value of structured narrative feedback within the Serious Illness Conversation-Evaluation Exercise (SIC-Ex): a qualitative analysis. BMJ Open 2024;14:e078385. PMID: 38286701; PMCID: PMC10826582; DOI: 10.1136/bmjopen-2023-078385.
Abstract
OBJECTIVES The Serious Illness Conversation Guide (SICG) has emerged as a framework for conversations with patients with a serious illness diagnosis. This study reports on narratives generated from the open-ended questions of a novel assessment tool, the Serious Illness Conversation-Evaluation Exercise (SIC-Ex), to assess resident-led conversations with patients in oncology outpatient clinics.
DESIGN Qualitative study using template analysis.
SETTING Three academic cancer centres in Canada.
PARTICIPANTS 7 resident physicians (trainees), 7 patients from outpatient cancer clinics, and 10 preceptors (raters) consisting of medical oncologists, palliative care physicians and radiation oncologists.
INTERVENTIONS Each trainee conducted an SIC with a patient, which was videotaped. The raters watched the videos and evaluated each trainee using the novel SIC-Ex and the reference Calgary-Cambridge Guide (CCG), initially and again 3 months later. Two independent coders used template analysis to code the raters' narrative comments and identify themes/subthemes.
OUTCOME MEASURES How narrative comments aligned with elements of the CCG and SICG.
RESULTS Template analysis yielded four themes: adhering to the SICG, engaging patients and family members, conversation management and being mindful of demeanour. Narrative comments identified numerous verbal and non-verbal elements essential to the SICG. Some comments addressing general skills in engaging patients/families and managing the conversation (eg, setting the agenda, introduction, planning, exploring, non-verbal communication) related to both the CCG and SICG, whereas other comments, such as identifying substitute decision maker(s), affirming commitment and introducing Advance Care Planning, were specific to the SICG.
CONCLUSIONS Narrative comments generated by the SIC-Ex provided detailed and nuanced insights into trainees' competence in SIC, beyond the numerical ratings of the SIC-Ex and the general communication skills outlined in the CCG, and may contribute to a more comprehensive assessment of SIC skills.
Affiliation(s)
- Jenny Ko
- Department of Medical Oncology, BC Cancer Agency Abbotsford Centre, Abbotsford, British Columbia, Canada
- Amanda Roze des Ordons
- Department of Critical Care Medicine and Division of Palliative Medicine; Department of Anesthesiology, University of Calgary Cumming School of Medicine, Calgary, Alberta, Canada
- Mark Ballard
- Department of Internal Medicine, Chilliwack General Hospital, Chilliwack, British Columbia, Canada
- Tamara Shenkier
- Department of Medical Oncology, BC Cancer Agency Vancouver Centre, Vancouver, British Columbia, Canada
- Jessica E Simon
- Department of Oncology and Community Health Sciences, University of Calgary Cumming School of Medicine, Calgary, Alberta, Canada
- Gillian Fyles
- Pain and Symptom Management/Palliative Care Program, BC Cancer Agency Sindi Ahluwalia Hawkins Centre for the Southern Interior, Kelowna, British Columbia, Canada
- Shilo Lefresne
- Department of Radiation Oncology, BC Cancer Agency Vancouver Centre, Vancouver, British Columbia, Canada
- Philippa Hawley
- Department of Palliative Care, BC Cancer Agency, Vancouver, British Columbia, Canada
- Charlie Chen
- Department of Oncology and Community Health Sciences, University of Calgary Cumming School of Medicine, Calgary, Alberta, Canada
- Michael McKenzie
- Department of Radiation Oncology, BC Cancer Agency Vancouver Centre, Vancouver, British Columbia, Canada
- Justin Sanders
- Department of Palliative Care, McGill University, Montreal, Quebec, Canada
- Rachelle Bernacki
- Department of Palliative Care, Dana-Farber Cancer Institute, Boston, Massachusetts, USA
2
Choo EK, Woods R, Walker ME, O’Brien JM, Chan TM. The Quality of Assessment for Learning score for evaluating written feedback in anesthesiology postgraduate medical education: a generalizability and decision study. Canadian Medical Education Journal 2023;14:78-85. PMID: 38226296; PMCID: PMC10787859; DOI: 10.36834/cmej.75876.
Abstract
Background Competency-based residency programs depend on high-quality feedback from the assessment of entrustable professional activities (EPAs). The Quality of Assessment for Learning (QuAL) score is a tool developed to rate the quality of narrative comments in workplace-based assessments; it has validity evidence for scoring the quality of narrative feedback provided to emergency medicine residents, but it is unknown whether the QuAL score is reliable in the assessment of narrative feedback in other postgraduate programs.
Methods Fifty sets of EPA narratives from a single academic year at our competency-based medical education postgraduate anesthesia program were selected by stratified sampling within defined parameters [e.g. resident gender and stage of training, assessor gender, Competence by Design training level, and word count (≥17 or <17 words)]. Two competency committee members and two medical students rated the quality of narrative feedback using a utility score and the QuAL score. We used Kendall's tau-b coefficient to compare the perceived utility of the written feedback with the quality assessed with the QuAL score. The authors used generalizability and decision studies to estimate the reliability and generalizability coefficients.
Results Both the faculty's utility and QuAL scores (r = 0.646, p < 0.001) and the trainees' utility and QuAL scores (r = 0.667, p < 0.001) were moderately correlated. Results from the generalizability studies showed that utility scores were reliable with two raters for both faculty (Epsilon = 0.87, Phi = 0.86) and trainees (Epsilon = 0.88, Phi = 0.88).
Conclusions The QuAL score is correlated with faculty- and trainee-rated utility of anesthesia EPA feedback. Both faculty and trainees can reliably apply the QuAL score to anesthesia EPA narrative feedback. This tool has the potential to be used for faculty development and program evaluation in competency-based medical education. Other programs could consider replicating our study in their specialty.
Affiliation(s)
- Eugene K Choo
- Department of Anesthesiology, College of Medicine, University of Saskatchewan, Saskatchewan, Canada
- Rob Woods
- Department of Emergency Medicine, College of Medicine, University of Saskatchewan, Saskatchewan, Canada
- Mary Ellen Walker
- Department of Anesthesiology, College of Medicine, University of Saskatchewan, Saskatchewan, Canada
- Jennifer M O’Brien
- Department of Anesthesiology, College of Medicine, University of Saskatchewan, Saskatchewan, Canada
- Teresa M Chan
- Department of Medicine (Division of Emergency Medicine; Division of Education & Innovation), Michael G. DeGroote School of Medicine, Faculty of Health Sciences, McMaster University, and Office of Continuing Professional Development & McMaster Education Research, Innovation, and Theory (MERIT) Program, Faculty of Health Sciences, McMaster University, Ontario, Canada
3
Mooney CJ, Stone RT, Wang L, Blatt AE, Pascoe JM, Lang VJ. Examining Generalizability of Faculty Members' Narrative Assessments. Academic Medicine 2023;98:S210. PMID: 37983456; DOI: 10.1097/acm.0000000000005417.
Affiliation(s)
- Christopher J Mooney
- C.J. Mooney, R.T. Stone, L. Wang, A.E. Blatt, J.M. Pascoe, and V.J. Lang: University of Rochester School of Medicine and Dentistry
4
Nel D, McNamee L, Wright M, Alseidi AA, Cairncross L, Jonas E, Burch V. Competency Assessment of General Surgery Trainees: A Perspective From the Global South, in a CBME-Naive Context. Journal of Surgical Education 2023;80:1462-1471. PMID: 37453897; DOI: 10.1016/j.jsurg.2023.06.027.
Abstract
OBJECTIVE Before proceeding with local implementation of competency-based medical education-related assessment practices designed and evaluated in the Global North, we sought to challenge the assumption that this would be perceived as both necessary and acceptable in our context, where training and assessment are based on a traditional, knowledge-focused approach. The aim of this study was to determine the perspectives of general surgery trainees and consultants towards the assessment of competence, how this has been achieved previously, and how it should be performed in the future at the University of Cape Town (UCT), South Africa.
DESIGN Semi-structured interviews were conducted with consultants and trainees. Interviews were transcribed and then analyzed using a Reflexive Thematic Analysis approach.
SETTING AND PARTICIPANTS Ten consultants (5 senior and 5 junior) and 10 trainees (5 South African and 5 international) from the Division of General Surgery at UCT in August 2022.
RESULTS Five unique themes were developed: (1) assessment of competence is essential; (2) competence includes multiple domains of practice; (3) a surgeon must be able to operate; (4) previously used methods were inadequate to assess competence; and (5) frequent assessment with feedback is desired. The themes were considered in the context of Situated Learning Theory, particularly Communities of Practice and their role in the training for, and authentic assessment of, competence in general surgery trainees.
CONCLUSIONS Participants described a need to develop and implement a new competency assessment program for general surgery training in this context, aligned with described competency-based medical education principles. Thoughtful integration of the formative and summative use of direct observation in the workplace, with a clear emphasis on procedural ability and the provision of high-quality feedback, may enhance the successful implementation of a strategy for competency-based assessment in general surgery training programs.
Affiliation(s)
- D Nel
- Department of Surgery, Groote Schuur Hospital and University of Cape Town, Cape Town, South Africa
- L McNamee
- Center for Higher Education Development, University of Cape Town, Cape Town, South Africa
- M Wright
- Department of Radiodiagnosis, Tygerberg Hospital, University of Stellenbosch, Cape Town, South Africa
- A A Alseidi
- Department of Surgery, University of California, San Francisco, California, USA
- L Cairncross
- Department of Surgery, Groote Schuur Hospital and University of Cape Town, Cape Town, South Africa
- E Jonas
- Department of Surgery, Groote Schuur Hospital and University of Cape Town, Cape Town, South Africa
- V Burch
- Department of Medicine, Groote Schuur Hospital and University of Cape Town, and the Colleges of Medicine of South Africa, South Africa
5
Chakroun M, Dion VR, Ouellet K, Graillon A, Désilets V, Xhignesse M, St-Onge C. Quality of Narratives in Assessment: Piloting a List of Evidence-Based Quality Indicators. Perspectives on Medical Education 2023;12:XX. PMID: 37252269; PMCID: PMC10215990; DOI: 10.5334/pme.925.
Abstract
Background & Need for Innovation Appraising the quality of narratives used in assessment is challenging for educators and administrators. Although some quality indicators for writing narratives exist in the literature, they remain context-specific and are not always sufficiently operational to be easily used. Creating a tool that gathers applicable quality indicators, and ensuring its standardized use, would equip assessors to appraise the quality of narratives.
Steps Taken for Development and Implementation of Innovation We used DeVellis' framework to develop a checklist of evidence-informed indicators for quality narratives. Two team members independently piloted the checklist using four series of narratives from three different sources. After each series, team members documented their agreement and achieved a consensus. We calculated frequencies of occurrence for each quality indicator, as well as the interrater agreement, to assess the standardized application of the checklist.
Outcomes of Innovation We identified seven quality indicators and applied them to narratives. Frequencies of quality indicators ranged from 0% to 100%. Interrater agreement ranged from 88.7% to 100% for the four series.
Critical Reflection Although we achieved a standardized application of a list of quality indicators for narratives used in health sciences education, users would still need training to be able to write good-quality narratives. We also noted that some quality indicators were less frequent than others, and we offer a few reflections on this.
Affiliation(s)
- Molk Chakroun
- Faculty of Medicine and Health Sciences, Université de Sherbrooke, Sherbrooke, Québec, Canada
- Vincent R. Dion
- Faculty of Medicine and Health Sciences, Université de Sherbrooke, Sherbrooke, Québec, Canada
- Kathleen Ouellet
- Paul Grand’Maison de la Société des médecins de l’Université de Sherbrooke Research Chair in Medical Education, Sherbrooke, Québec, Canada
- Ann Graillon
- Centre de pédagogie et des sciences de la santé, Faculty of Medicine and Health Sciences, Université de Sherbrooke, Sherbrooke, Québec, Canada
- Valérie Désilets
- Department of Pediatrics, Faculty of Medicine and Health Sciences, Université de Sherbrooke, Sherbrooke, Québec, Canada
- Marianne Xhignesse
- Department of Family and Emergency Medicine, Faculty of Medicine and Health Sciences, Université de Sherbrooke, Sherbrooke, Québec, Canada
- Christina St-Onge
- Department of Medicine, Faculty of Medicine and Health Sciences, Université de Sherbrooke, and Paul Grand’Maison de la Société des médecins de l’Université de Sherbrooke Research Chair in Medical Education, Sherbrooke, Québec, Canada
6
Schafer KR, Sood L, King CJ, Alexandraki I, Aronowitz P, Cohen M, Chretien K, Pahwa A, Shen E, Williams D, Hauer KE. The Grade Debate: Evidence, Knowledge Gaps, and Perspectives on Clerkship Assessment Across the UME to GME Continuum. American Journal of Medicine 2023;136:394-398. PMID: 36632923; DOI: 10.1016/j.amjmed.2023.01.001.
Affiliation(s)
- Katherine R Schafer
- Department of Internal Medicine, Wake Forest University School of Medicine, Winston-Salem, NC
- Lonika Sood
- Elson S. Floyd College of Medicine, Washington State University, Spokane
- Christopher J King
- Division of Hospital Medicine, Department of Medicine, University of Colorado School of Medicine, Aurora
- Margot Cohen
- Department of Medicine, Perelman School of Medicine, University of Pennsylvania, Philadelphia
- Amit Pahwa
- Johns Hopkins University School of Medicine, Baltimore, MD
- E Shen
- Department of Internal Medicine, Wake Forest University School of Medicine, Winston-Salem, NC
- Donna Williams
- Department of Internal Medicine, Wake Forest University School of Medicine, Winston-Salem, NC
7
Tay AZ, Tang PY, New LM, Zhang X, Leow WQ. Detecting residents at risk of attrition - a Singapore pathology residency's experience. Academic Pathology 2023;10:100075. PMID: 37095782; PMCID: PMC10121803; DOI: 10.1016/j.acpath.2023.100075.
Abstract
The SingHealth Pathology Residency Program (SHPRP) is a 5-year postgraduate training program in Singapore. We face the problem of resident attrition, which has a significant impact on the individual, the program and healthcare providers. Our residents are regularly evaluated using in-house evaluations as well as the assessments required by our partnership with the Accreditation Council for Graduate Medical Education International (ACGME-I). We therefore sought to determine whether these assessments could distinguish residents who would attrite from residents who would graduate successfully. Retrospective analysis of existing residency assessments was performed on all residents who had separated from SHPRP, compared with residents currently in senior residency or graduated from the program. Statistical analysis was performed on the quantitative assessment methods: the Resident In-Service Examination (RISE), 360-degree feedback, faculty assessment, Milestones and our own annual departmental mock examination. Word frequency analysis of narrative feedback from faculty assessment was used to generate themes. Since 2011, 10 out of 34 residents have separated from the program. RISE, Milestone data and the departmental mock examination showed statistical significance in discriminating residents at risk of attrition for specialty-related reasons from successful residents. Analysis of narrative feedback showed that successful residents performed better in organization, preparation with clinical history, application of knowledge, interpersonal communication and achieving sustained progress. Existing assessment methods used in our pathology residency program are effective in detecting residents at risk of attrition. This also suggests applications in how we select, assess and teach residents.
Affiliation(s)
- Amos Z.E. Tay
- Department of Anatomic Pathology, Singapore General Hospital, Singapore
- Duke-NUS Medical School, Singapore
- Corresponding author: Department of Anatomic Pathology, Singapore General Hospital, Academia, Level 10, Diagnostic Tower, 20 College Road, Singapore 169856, Singapore
- Po Yin Tang
- Department of Anatomic Pathology, Singapore General Hospital, Singapore
- Duke-NUS Medical School, Singapore
- Lee May New
- Department of Anatomic Pathology, Singapore General Hospital, Singapore
- Xiaozhu Zhang
- Department of Anatomic Pathology, Singapore General Hospital, Singapore
- Wei-Qiang Leow
- Department of Anatomic Pathology, Singapore General Hospital, Singapore
- Duke-NUS Medical School, Singapore
8
Gutierrez M, Wilson K, Bickford B, Yuhas J, Markert R, Burtson KM. Novel In-Training Evaluation Report in an Internal Medicine Residency Program: Improving the Quality of the Narrative Assessment. Journal of Medical Education and Curricular Development 2023;10:23821205231206058. PMID: 37822780; PMCID: PMC10563452; DOI: 10.1177/23821205231206058.
Abstract
OBJECTIVE To determine whether incorporating our novel in-training evaluation report (ITER), which prompts each resident to list at least three self-identified learning goals, improved the quality of narrative assessments as measured by the Narrative Evaluation Quality Instrument (NEQI).
METHODS A total of 1468 narrative assessments from a single institution from 2017 to 2021 were deidentified, compiled, and sorted into the pre-intervention form arm and the post-intervention form arm. Due to limitations in our residency management suite, incorporating learning goals required switching from an electronic form to a hand-delivered paper form. Comments were graded by two research personnel using the NEQI's 0-12 scale, with 12 representing the maximum quality for a comment. The outcome of the study was the mean difference in NEQI score between the electronic pre-intervention period and the paper post-intervention period.
RESULTS The mean NEQI score for the pre-intervention period was 2.43 ± 3.34, and the mean NEQI score for the post-intervention period was 3.31 ± 1.71, a mean difference of 0.88 (p < 0.001). In the pre-intervention period, 46% of evaluations were submitted without a narrative assessment (scored as zero), while 1% of post-intervention evaluations had no narrative assessment. Internal consistency reliability, as measured by Ebel's intraclass correlation coefficient (ICC), showed high agreement between the two raters (ICC = 0.92).
CONCLUSIONS Our findings suggest that implementing a timely, hand-delivered paper ITER that incorporates resident learning goals can lead to overall higher-quality narrative assessments.
Affiliation(s)
- Marc Gutierrez
- Internal Medicine Program, Affiliated with Wright-Patterson AFB and Wright State University, Wright-Patterson AFB, OH, USA
- Kelsey Wilson
- Internal Medicine Program, Affiliated with Wright-Patterson AFB and Wright State University, Wright-Patterson AFB, OH, USA
- Brant Bickford
- Internal Medicine Program, Affiliated with Wright-Patterson AFB and Wright State University, Wright-Patterson AFB, OH, USA
- Joseph Yuhas
- Internal Medicine Program, Affiliated with Wright-Patterson AFB and Wright State University, Wright-Patterson AFB, OH, USA
- Ronald Markert
- Department of Internal Medicine and Neurology, Affiliated with Wright State University, Dayton, OH, USA
- Kathryn M Burtson
- Internal Medicine Program, Affiliated with Wright-Patterson AFB, Boonshoft School of Medicine and Wright State University, Wright-Patterson AFB, OH 45433, USA
9
Mooney CJ, Pascoe JM, Blatt AE, Lang VJ, Kelly MS, Braun MK, Burch JE, Stone RT. Predictors of faculty narrative evaluation quality in medical school clerkships. Medical Education 2022;56:1223-1231. PMID: 35950329; DOI: 10.1111/medu.14911.
Abstract
INTRODUCTION Narrative approaches to assessment provide meaningful and valid representations of trainee performance. Yet narratives are frequently perceived as vague, nonspecific and low quality. To date, there is little research examining factors associated with narrative evaluation quality, particularly in undergraduate medical education. The purpose of this study was to examine associations of faculty- and student-level characteristics with the quality of faculty members' narrative evaluations of clerkship students.
METHODS The authors reviewed faculty narrative evaluations of 50 students' clinical performance in their inpatient medicine and neurology clerkships, resulting in 165 and 87 unique evaluations in the respective clerkships. The authors evaluated narrative quality using the Narrative Evaluation Quality Instrument (NEQI) and used linear mixed effects modelling to predict total NEQI score. Explanatory covariates included time to evaluation completion, number of weeks spent with the student, faculty total weeks on service per year, total faculty years in clinical education, student gender, faculty gender, and an interaction term between student and faculty gender.
RESULTS Significantly higher narrative evaluation quality was associated with a shorter time to evaluation completion, with NEQI scores decreasing by approximately 0.3 points every 10 days following students' rotations (p = .004). Additionally, women faculty wrote significantly higher-quality narrative evaluations, with NEQI scores 1.92 points greater than men faculty (p = .012). All other covariates were not significant.
CONCLUSIONS The quality of faculty members' narrative evaluations of medical students was associated with time to evaluation completion and faculty gender, but not with faculty experience in clinical education, faculty weeks on service, or the amount of time spent with students. These findings advance understanding of ways to improve the quality of narrative evaluations, which is imperative given assessment models that will increase the volume of, and reliance on, narratives.
Affiliation(s)
- Christopher J Mooney
- School of Medicine and Dentistry, University of Rochester, Rochester, New York, USA
- Jennifer M Pascoe
- School of Medicine and Dentistry, University of Rochester, Rochester, New York, USA
- Amy E Blatt
- School of Medicine and Dentistry, University of Rochester, Rochester, New York, USA
- Valerie J Lang
- School of Medicine and Dentistry, University of Rochester, Rochester, New York, USA
- Melanie K Braun
- School of Medicine and Dentistry, University of Rochester, Rochester, New York, USA
- Jaclyn E Burch
- School of Medicine and Dentistry, University of Rochester, Rochester, New York, USA
10
Hatala R, Ginsburg S, Gauthier S, Melvin L, Taylor D, Gingerich A. Supervising the senior medical resident: Entrusting the role, supporting the tasks. Medical Education 2022;56:1194-1202. PMID: 35869566; DOI: 10.1111/medu.14883.
Abstract
INTRODUCTION Postgraduate competency-based medical education has been implemented with programmatic assessment that relies on entrustment-based ratings. Yet, in less procedurally oriented specialties such as internal medicine, the relationship between entrustment and supervision remains unclear. We undertook the current study to address how internal medicine supervisors conceptualise entrusting senior medical residents while supervising them on acute care wards.
METHODS Guided by constructivist grounded theory, we interviewed 19 physicians who regularly supervised senior internal medicine residents on inpatient wards at three Canadian universities. We developed a theoretical model through iterative cycles of data collection and analysis using a constant comparative process.
RESULTS On the internal medicine ward, the senior resident role is viewed as a fundamentally managerial and rudimentary version of the supervisor's role. Supervisors come to trust their residents in the senior role through an early 'hands-on' period of assessment, followed by a gradual withdrawal of support to promote independence. When considering entrustment, supervisors focused on entrusting a particular scope of the senior resident role rather than entrusting individual tasks. Irrespective of the scope of the role that was entrusted, supervisors at times stepped in and stepped back to support specific tasks.
CONCLUSION Supervisors' stepping in and stepping back to support individual tasks on the acute care ward has an inconsistent relationship with their entrustment of the resident with a particular scope of the senior resident role. In this context, entrustment-based assessment would need to capture more of the holistic perspective of the supervisor's entrustment of the senior resident role. Understanding the dance of supervision, from relatively static overall support of the resident in their role to fluidly stepping in and out for specific patient care tasks, gives insight into the affordances of the supervisory relationship and how it may be leveraged for assessment.
Affiliation(s)
- Rose Hatala
- Department of Medicine, Faculty of Medicine, University of British Columbia, Vancouver, British Columbia, Canada
- Shiphra Ginsburg
- Department of Medicine, Faculty of Medicine, University of Toronto, Toronto, Ontario, Canada
- Wilson Centre for Education, University of Toronto, Toronto, Ontario, Canada
- Stephen Gauthier
- Department of Medicine, Faculty of Medicine, Queen's University, Kingston, Ontario, Canada
- Lindsay Melvin
- Department of Medicine, Faculty of Medicine, University of Toronto, Toronto, Ontario, Canada
- David Taylor
- Department of Medicine, Faculty of Medicine, Queen's University, Kingston, Ontario, Canada
- Andrea Gingerich
- Northern Medical Program, University of Northern British Columbia, Prince George, British Columbia, Canada
11
Chakroun M, Dion VR, Ouellet K, Graillon A, Désilets V, Xhignesse M, St-Onge C. Narrative Assessments in Higher Education: A Scoping Review to Identify Evidence-Based Quality Indicators. Academic Medicine 2022;97:1699-1706. PMID: 35612917; DOI: 10.1097/acm.0000000000004755.
Abstract
PURPOSE Narrative comments are increasingly used in assessment to document trainees' performance and to make important decisions about academic progress. However, little is known about how to document the quality of narrative comments, since traditional psychometric analysis cannot be applied. The authors aimed to generate a list of quality indicators for narrative comments, to identify recommendations for writing high-quality narrative comments, and to document factors that influence the quality of narrative comments used in assessments in higher education.
METHOD The authors conducted a scoping review according to Arksey and O'Malley's framework. The search strategy yielded 690 articles from 6 databases. Team members screened abstracts for inclusion and exclusion, then extracted numerical and qualitative data based on predetermined categories. Numerical data were used for descriptive analysis. The authors completed the thematic analysis of qualitative data with iterative discussions until they achieved consensus on the interpretation of the results.
RESULTS After the full-text review of 213 selected articles, 47 were included. Through the thematic analysis, the authors identified 7 quality indicators, 12 recommendations for writing quality narratives, and 3 factors that influence the quality of narrative comments used in assessment. The 7 quality indicators are: (1) describes performance with a focus on particular elements (attitudes, knowledge, skills); (2) provides a balanced message between positive elements and elements needing improvement; (3) provides recommendations to learners on how to improve their performance; (4) compares the observed performance with an expected standard of performance; (5) provides justification for the mark/score given; (6) uses language that is clear and easily understood; and (7) uses a nonjudgmental style.
CONCLUSIONS Assessors can use these quality indicators and recommendations to write high-quality narrative comments, thus reinforcing the appropriate documentation of trainees' performance, facilitating solid decision making about trainees' progression, and enhancing the impact of narrative feedback for both learners and programs.
Affiliation(s)
- Molk Chakroun
- M. Chakroun is a PhD student, Faculty of Medicine and Health Sciences, Université de Sherbrooke, Sherbrooke, Québec, Canada; ORCID: https://orcid.org/0000-0002-0518-1782
- Vincent R Dion
- V.R. Dion was research assistant, Paul Grand'Maison de la Société des médecins de l'Université de Sherbrooke Research Chair in Medical Education, at the time of this work, and is now a first-year medical student, Université de Sherbrooke, Sherbrooke, Québec, Canada
- Kathleen Ouellet
- K. Ouellet is research coordinator, Centre de pédagogie et des sciences de la santé, Faculty of Medicine and Health Sciences, Université de Sherbrooke, Sherbrooke, Québec, Canada; ORCID: https://orcid.org/0000-0001-9829-151X
- Ann Graillon
- A. Graillon is associate professor, Department of Pediatrics, Faculty of Medicine and Health Sciences, Université de Sherbrooke, Sherbrooke, Québec, Canada; ORCID: https://orcid.org/0000-0003-3677-7113
- Valérie Désilets
- V. Désilets is associate professor, Department of Pediatrics, Faculty of Medicine and Health Sciences, Université de Sherbrooke, Sherbrooke, Québec, Canada; ORCID: https://orcid.org/0000-0001-7399-119X
- Marianne Xhignesse
- M. Xhignesse is full professor, Department of Family and Emergency Medicine, Faculty of Medicine and Health Sciences, Université de Sherbrooke, Sherbrooke, Québec, Canada; ORCID: https://orcid.org/0000-0002-3257-5912
- Christina St-Onge
- C. St-Onge is full professor, Department of Medicine, Faculty of Medicine and Health Sciences, Université de Sherbrooke, and holds the Paul Grand'Maison de la Société des médecins de l'Université de Sherbrooke Research Chair in Medical Education, Sherbrooke, Québec, Canada; ORCID: https://orcid.org/0000-0001-5313-0456
12
Mooney CJ, Blatt A, Pascoe J, Lang V, Kelly M, Braun M, Burch J, Stone RT. Predictors of Narrative Evaluation Quality in Undergraduate Medical Education Clerkships. Acad Med 2022; 97:S168. [PMID: 37838897 DOI: 10.1097/acm.0000000000004809]
Affiliation(s)
- Christopher J Mooney
- Author affiliations: C.J. Mooney, A. Blatt, J. Pascoe, V. Lang, M. Braun, J. Burch, R.T. Stone, University of Rochester School of Medicine and Dentistry; M. Kelly, Massachusetts General Hospital
13
Cheung WJ, Wagner N, Frank JR, Oswald A, Van Melle E, Skutovich A, Dalseg TR, Cooke LJ, Hall AK. Implementation of competence committees during the transition to CBME in Canada: A national fidelity-focused evaluation. Med Teach 2022; 44:781-789. [PMID: 35199617 DOI: 10.1080/0142159x.2022.2041191]
Abstract
PURPOSE This study evaluated the fidelity of competence committee (CC) implementation in Canadian postgraduate specialist training programs during the transition to competency-based medical education (CBME). METHODS A national survey of CC chairs was distributed to all CBME training programs in November 2019. Survey questions were derived from guiding documents published by the Royal College of Physicians and Surgeons of Canada reflecting intended processes and design. RESULTS The response rate was 39% (113/293), with representation from all eligible disciplines. Committee size ranged from 3 to 20 members, 42% of programs included external members, and 20% included a resident representative. Most programs (72%) reported that a primary review and synthesis of resident assessment data occurs prior to the meeting, with some data reviewed collectively during meetings. When determining entrustable professional activity (EPA) achievement, most programs (53%) followed the national specialty guidelines closely, with some exceptions. Documented concerns about professionalism, EPA narrative comments, and EPA entrustment scores were most highly weighted when determining resident progress decisions. CONCLUSIONS Heterogeneity in CC implementation likely reflects local adaptations, but may also explain some of the variable challenges faced by programs during the transition to CBME. Our results offer educational leaders important fidelity data that can help inform the larger evaluation and transformation of CBME.
Affiliation(s)
- Warren J Cheung
- Department of Emergency Medicine, University of Ottawa, Ottawa, Canada
- Royal College of Physicians and Surgeons of Canada, Ottawa, Canada
- Natalie Wagner
- Office of Professional Development & Educational Scholarship and Department of Biomedical & Molecular Sciences, Queen's University, Kingston, Canada
- Jason R Frank
- Department of Emergency Medicine, University of Ottawa, Ottawa, Canada
- Royal College of Physicians and Surgeons of Canada, Ottawa, Canada
- Anna Oswald
- Royal College of Physicians and Surgeons of Canada, Ottawa, Canada
- Department of Medicine, University of Alberta, Edmonton, Canada
- Elaine Van Melle
- Royal College of Physicians and Surgeons of Canada, Ottawa, Canada
- Department of Family Medicine, Queen's University, Kingston, Canada
- Timothy R Dalseg
- Royal College of Physicians and Surgeons of Canada, Ottawa, Canada
- Department of Medicine, Division of Emergency Medicine, University of Toronto, Toronto, Canada
- Lara J Cooke
- Royal College of Physicians and Surgeons of Canada, Ottawa, Canada
- Department of Clinical Neurosciences, Cumming School of Medicine, University of Calgary, Calgary, Canada
- Andrew K Hall
- Department of Emergency Medicine, University of Ottawa, Ottawa, Canada
- Royal College of Physicians and Surgeons of Canada, Ottawa, Canada
14
Concordance of Narrative Comments with Supervision Ratings Provided During Entrustable Professional Activity Assessments. J Gen Intern Med 2022; 37:2200-2207. [PMID: 35710663 PMCID: PMC9296736 DOI: 10.1007/s11606-022-07509-1]
Abstract
BACKGROUND Use of EPA-based entrustment-supervision ratings to determine a learner's readiness to assume patient care responsibilities is expanding. OBJECTIVE In this study, we investigate the correlation between narrative comments and supervision ratings assigned during ad hoc assessments of medical students' performance of EPA tasks. DESIGN Data from assessments completed for students enrolled in the clerkship phase over 2 academic years were used to extract a stratified random sample of 100 narrative comments for review by an expert panel. PARTICIPANTS A review panel, comprising faculty with specific expertise related to their roles within the EPA program, provided a "gold standard" supervision rating using the comments provided by the original assessor. MAIN MEASURES Interrater reliability (IRR) between members of the review panel and correlation coefficients (CC) between expert ratings and supervision ratings from original assessors. KEY RESULTS IRR among members of the expert panel ranged from .536 for comments associated with focused history taking to .833 for complete physical exam. CC (Kendall's correlation coefficient W) between panel members' assignment of supervision ratings and the ratings provided by the original assessors for history taking, physical examination, and oral presentation comments were .668, .697, and .735, respectively. The supervision ratings of the expert panel had the highest degree of correlation with ratings provided during assessments done by master assessors, faculty trained to assess students across clinical contexts. Correlation between supervision ratings provided with the narrative comments at the time of observation and supervision ratings assigned by the expert panel differed by clinical discipline, perhaps reflecting the value placed on, and the comfort level with, assessment of the task in a given specialty.
CONCLUSIONS To realize the full educational and catalytic effect of EPA assessments, assessors must apply established performance expectations and provide high-quality narrative comments aligned with the criteria.
15
de Jong LH, Bok HGJ, Schellekens LH, Kremer WDJ, Jonker FH, van der Vleuten CPM. Shaping the right conditions in programmatic assessment: how quality of narrative information affects the quality of high-stakes decision-making. BMC Med Educ 2022; 22:409. [PMID: 35643442 PMCID: PMC9148525 DOI: 10.1186/s12909-022-03257-2]
Abstract
BACKGROUND Programmatic assessment is increasingly being implemented within competency-based health professions education. In this approach, a multitude of low-stakes assessment activities is aggregated into a holistic high-stakes decision on the student's performance. High-stakes decisions need to be of high quality. Part of this quality is whether an examiner perceives saturation of information when making a holistic decision. The purpose of this study was to explore the influence of narrative information on the perception of saturation of information during the interpretative process of high-stakes decision-making. METHODS In this mixed-method intervention study, the quality of the recorded narrative information (i.e., feedback and reflection) was manipulated within multiple portfolios to investigate its influence on 1) the perception of saturation of information and 2) the examiner's interpretative approach in making a high-stakes decision. Data were collected through surveys, screen recordings of the portfolio assessments, and semi-structured interviews. Descriptive statistics and template analysis were applied to analyze the data. RESULTS The examiners perceived saturation of information less frequently in the portfolios with low-quality narrative feedback. They also mentioned consistency of information as a factor that influenced their perception of saturation. Although examiners generally took their own idiosyncratic approach to assessing a portfolio, variations arose in response to certain triggers, such as noticeable deviations in the student's performance and in the quality of narrative feedback. CONCLUSION The perception of saturation of information seemed to be influenced by the quality of the narrative feedback and, to a lesser extent, by the quality of reflection. These results emphasize the importance of high-quality narrative feedback in making robust decisions within portfolios that are expected to be more difficult to assess.
Furthermore, within these "difficult" portfolios, examiners adapted their interpretative process in reaction to the intervention and other triggers by means of an iterative and responsive approach.
Affiliation(s)
- Lubberta H de Jong
- Department Population Health Sciences, Faculty of Veterinary Medicine, Utrecht University, Utrecht, The Netherlands.
- Harold G J Bok
- Department Population Health Sciences, Faculty of Veterinary Medicine, Utrecht University, Utrecht, The Netherlands
- Lonneke H Schellekens
- Department Population Health Sciences, Faculty of Veterinary Medicine, Utrecht University, Utrecht, The Netherlands
- Faculty of Social and Behavioural Sciences, Educational Consultancy and Professional Development, Utrecht University, Utrecht, The Netherlands
- Wim D J Kremer
- Department Population Health Sciences, Faculty of Veterinary Medicine, Utrecht University, Utrecht, The Netherlands
- F Herman Jonker
- Department Population Health Sciences, Section Farm Animal Health, Faculty of Veterinary Medicine, Utrecht University, Utrecht, The Netherlands
- Cees P M van der Vleuten
- Department of Educational Development and Research, Faculty of Health, Medicine and Life Sciences, Maastricht University, Maastricht, The Netherlands
16
Schumacher DJ, Teunissen PW, Kinnear B, Driessen EW. Assessing trainee performance: ensuring learner control, supporting development, and maximizing assessment moments. Eur J Pediatr 2022; 181:435-439. [PMID: 34286373 DOI: 10.1007/s00431-021-04182-0]
Abstract
In this article, the authors provide practical guidance for frontline supervisors' efforts to assess trainee performance. They focus on three areas. First, they argue the importance of promoting learner control in the assessment process, noting that providing learners agency and control can shift the stakes of assessment from high to low and promote a safe environment that facilitates learning. Second, they posit that assessment should be used to support continued development by promoting a relational partnership between trainees and supervisors. This partnership allows supervisors to reinforce desirable aspects of performance, provide real-time support for deficient areas of performance, and sequence learning with the appropriate amount of scaffolding to push trainees from competence (what they can do alone) to capability (what they are able to do with support). Finally, they advocate the importance of optimizing the use of written comments and direct observation while also recognizing that performance is interdependent in efforts to maximize assessment moments. Conclusion: Using best practices in trainee assessment can help trainees take next steps in their development in a learner-centered partnership with clinical supervisors. What is Known: • Many pediatricians are asked to assess the performance of medical students and residents they work with but few have received formal training in assessment. What is New: • This article presents evidence-based best practices for assessing trainees, including giving trainees agency in the assessment process and focusing on helping trainees take next steps in their development.
Affiliation(s)
- Daniel J Schumacher
- Division of Emergency Medicine, Cincinnati Children's Hospital Medical Center, and Department of Pediatrics, University of Cincinnati College of Medicine, Cincinnati, OH, USA.
- Pim W Teunissen
- School of Health Professions Education (SHE), Faculty of Health Medicine and Life Sciences and Gynecologist, Department of Obstetrics and Gynecology, Maastricht University Medical Center, Maastricht, the Netherlands
- Benjamin Kinnear
- Internal Medicine and Pediatrics, Division of Hospital Medicine, Department of Pediatrics, Cincinnati Children's Hospital Medical Center, University of Cincinnati College of Medicine, Cincinnati, OH, USA
- Erik W Driessen
- School of Health Professions Education (SHE), Faculty of Health Medicine and Life Sciences, Maastricht University, Maastricht, the Netherlands
17
Kelleher M, Kinnear B, Sall DR, Weber DE, DeCoursey B, Nelson J, Klein M, Warm EJ, Schumacher DJ. Warnings in early narrative assessment that might predict performance in residency: signal from an internal medicine residency program. Perspect Med Educ 2021; 10:334-340. [PMID: 34476730 PMCID: PMC8633188 DOI: 10.1007/s40037-021-00681-w]
Abstract
INTRODUCTION Narrative assessment data are valuable in understanding struggles in resident performance. However, it remains unknown which themes in narrative data that occur early in training may indicate a higher likelihood of struggles later in training, allowing programs to intervene sooner. METHODS Using learning analytics, we identified 26 internal medicine residents in three cohorts that were below expected entrustment during training. We compiled all narrative data in the first 6 months of training for these residents as well as 13 typically performing residents for comparison. Narrative data for all 39 residents were blinded during the initial coding phases of an inductive thematic analysis. RESULTS Many similarities were identified between the two cohorts. Codes that differed between typically performing and lower-entrusted residents were grouped into six themes: three explicit/manifest and three implicit/latent. The explicit/manifest themes focused on specific aspects of resident performance, with assessors describing 1) Gaps in attention to detail, 2) Communication deficits with patients, and 3) Difficulty recognizing the "big picture" in patient care. Three implicit/latent themes, focused on how narrative data were written, were also identified: 1) Feedback described as a deficiency rather than an opportunity to improve, 2) Normative comparisons to identify a resident as being behind their peers, and 3) Warning of possible risk to patient care. DISCUSSION Clinical competency committees (CCCs) usually rely on accumulated data and trends. Using the themes in this paper while reviewing narrative comments may help CCCs with earlier recognition and better allocation of resources to support residents' development.
Affiliation(s)
- Matthew Kelleher
- Department of Pediatrics, University of Cincinnati College of Medicine, Cincinnati, OH, USA.
- Benjamin Kinnear
- Department of Pediatrics, University of Cincinnati College of Medicine, Cincinnati, OH, USA
- Dana R Sall
- HonorHealth Internal Medicine Residency Program, Scottsdale, Arizona and University of Arizona College of Medicine, Phoenix, AZ, USA
- Danielle E Weber
- Department of Pediatrics, University of Cincinnati College of Medicine, Cincinnati, OH, USA
- Bailey DeCoursey
- Department of Pediatrics, University of Cincinnati College of Medicine, Cincinnati, OH, USA
- Jennifer Nelson
- Department of Pediatrics, University of Cincinnati College of Medicine, Cincinnati, OH, USA
- Melissa Klein
- Department of Pediatrics, University of Cincinnati College of Medicine, Cincinnati, OH, USA
- Eric J Warm
- Department of Internal Medicine, University of Cincinnati College of Medicine, Cincinnati, OH, USA
- Daniel J Schumacher
- Department of Pediatrics, University of Cincinnati College of Medicine, Cincinnati, OH, USA
18
Al Maawali A, Puran A, Schwartz S, Johnstone J, Bismilla Z. The current state of general paediatric fellowships in Canada. Paediatr Child Health 2021; 26:353-357. [PMID: 34630782 DOI: 10.1093/pch/pxaa136]
Abstract
Introduction The field of Paediatric Medicine has grown tremendously over the last two decades. Several niche areas of practice have emerged, and opportunities for focused training in these areas have grown in parallel. The landscape of 'General Paediatric Fellowship' (GPF) Programs in Canada is not well described; this knowledge is needed to promote standardization and high-quality training across Canada. This study explores the structure and components of existing GPFs in Canada and identifies the interest in, and barriers to, providing such programs. Methods A questionnaire was created to explore the landscape of GPF Programs in Canada. Invitations to participate were sent to leaders of General Paediatric Divisions across Canada, with a request to forward the survey to the most appropriate individual to respond within their local context. Results A total of 19 responses (95%) representing 17 different Canadian universities were obtained. Eight universities offered a total of 13 GPF Programs in 2019, with one additional university planning to start a program in the coming year. Existing programs were variable in size, structure and curriculum. Most programs identified as Academic Paediatric Programs, with an overlap in content and structure between Academic Paediatrics and Paediatric Hospital Medicine programs. The majority of respondents felt there was a need for GPF Programs in Canada but cited funding as the most common perceived barrier. Conclusion A growing number of GPF Programs exist in Canada. Current fellowship programs are variable in structure and content. Collaboration between programs is required to advance GPF training in Canada.
Affiliation(s)
- Ali Al Maawali
- Department of Paediatrics, The Hospital for Sick Children, Toronto, Ontario
- Allan Puran
- Department of Paediatrics, The Hospital for Sick Children, Toronto, Ontario
- Sarah Schwartz
- Department of Paediatrics, The Hospital for Sick Children, Toronto, Ontario
- Julie Johnstone
- Department of Paediatrics, The Hospital for Sick Children, Toronto, Ontario
- Zia Bismilla
- Department of Paediatrics, The Hospital for Sick Children, Toronto, Ontario
19
Read EK, Brown A, Maxey C, Hecker KG. Comparing Entrustment and Competence: An Exploratory Look at Performance-Relevant Information in the Final Year of a Veterinary Program. J Vet Med Educ 2021; 48:562-572. [PMID: 33661087 DOI: 10.3138/jvme-2019-0128]
Abstract
Workplace-based assessments and entrustment scales have two primary goals: providing formative information to assist students with future learning, and determining if and when learners are ready for safe, independent practice. To date, there has not been an evaluation of the relationship between these pieces of performance-relevant information in veterinary medicine. This study collected quantitative and qualitative data from a single cohort of final-year students (n = 27) across in-training evaluation reports (ITERs) and entrustment scales in a distributed veterinary hospital environment. Here we compare progression in scoring and performance within and across students, and within and across methods of assessment, over time. Narrative comments were quantified using the Completed Clinical Evaluation Report Rating (CCERR) instrument to assess the quality of written comments. Preliminary evidence suggests that we may be capturing different aspects of performance using these two methods. Specifically, entrustment scale scores significantly increased over time, while ITER scores did not. Typically, comments accompanying entrustment scale scores were more learner specific, longer, and used more of a coaching voice. Longitudinal evaluation of learner performance is important for learning and demonstration of competence; however, the method of data collection could influence how feedback is structured and how performance is ultimately judged.
20
Bray MJ, Bradley EB, Martindale JR, Gusic ME. Implementing Systematic Faculty Development to Support an EPA-Based Program of Assessment: Strategies, Outcomes, and Lessons Learned. Teach Learn Med 2021; 33:434-444. [PMID: 33331171 DOI: 10.1080/10401334.2020.1857256]
Abstract
Problem: Development of a novel, competency-based program of assessment requires creation of a plan to measure the processes that enable successful implementation. The principles of implementation science outline the importance of considering key drivers that support and sustain transformative change within an educational program. The introduction of Entrustable Professional Activities (EPAs) as a framework for assessment has underscored the need to create a structured plan to prepare assessors to engage in a new paradigm of assessment. Although approaches to rater training for workplace-based assessments have been described, specific strategies to prepare assessors to apply standards related to the level of supervision a student needs have not been documented. Intervention: We describe our systematic approach to prepare assessors, faculty and postgraduate trainees, to complete EPA assessments for medical students during the clerkship phase of our curriculum. This institution-wide program is designed to build assessors' skills in direct observation of learners during authentic patient encounters. Assessors apply new knowledge and practice skills in using established performance expectations to determine the level of supervision a learner needs to perform clinical tasks. Assessors also learn to provide feedback and narrative comments to coach students and promote their ongoing clinical development. Data visualizations for assessors facilitate reinforcement of the tenets learned during training. Collaborative learning and peer feedback during faculty development sessions promote the formation of a community of practice among assessors. Context: Faculty development for assessors was implemented in advance of implementation of the EPA program. 
Assessors in the program include residents/fellows who work closely with students, faculty with discipline-specific expertise, and a group of experienced clinicians who were selected to serve as experts in competency-based EPA assessments, the Master Assessors. Training focused on creating a shared understanding about the application of criteria used to evaluate student performance. EPA assessments, based on the AAMC's Core Entrustable Professional Activities for Entering Residency, were completed in nine core clerkships. EPA assessments included a supervision rating based on a modified scale for use in undergraduate medical education. Impact: Data from EPA assessments completed during the first year of the program were analyzed to evaluate the effectiveness of the faculty development activities implemented to prepare assessors to consistently apply standards for assessment. A systematic approach to training and attention to critical drivers that enabled institution-wide implementation led to consistency in the supervision rating for students' first EPA assessment completed by any type of assessor, in ratings by assessors done within a specific clinical context, and in ratings assigned by a group of specific assessors across clinical settings. Lessons learned: A systematic approach to faculty development, with a willingness to be flexible and to reach potential participants using existing infrastructure, can facilitate assessors' engagement in a new culture of assessment. Interaction among participants during training sessions not only promotes learning but also contributes to community building. A leadership group responsible for overseeing faculty development can ensure that the needs of stakeholders are addressed and that a change in assessment culture is sustained.
Affiliation(s)
- Megan J Bray
- Department of Obstetrics and Gynecology, Center for Medical Education Research and Scholarly Innovation, Office of Medical Education, University of Virginia School of Medicine, Charlottesville, Virginia, USA
- Elizabeth B Bradley
- Center for Medical Education Research and Scholarly Innovation, Office of Medical Education, University of Virginia School of Medicine, Charlottesville, Virginia, USA
- James R Martindale
- Center for Medical Education Research and Scholarly Innovation, Office of Medical Education, University of Virginia School of Medicine, Charlottesville, Virginia, USA
- Maryellen E Gusic
- Center for Medical Education Research and Scholarly Innovation, Office of Medical Education, Department of Pediatrics, University of Virginia School of Medicine, Charlottesville, Virginia, USA
21
Ginsburg S, Watling CJ, Schumacher DJ, Gingerich A, Hatala R. Numbers Encapsulate, Words Elaborate: Toward the Best Use of Comments for Assessment and Feedback on Entrustment Ratings. Acad Med 2021; 96:S81-S86. [PMID: 34183607 DOI: 10.1097/acm.0000000000004089]
Abstract
The adoption of entrustment ratings in medical education is based on a seemingly simple premise: to align workplace-based supervision with resident assessment. Yet it has been difficult to operationalize this concept. Entrustment rating forms combine numeric scales with comments and are embedded in a programmatic assessment framework, which encourages the collection of a large quantity of data. The implicit assumption that more is better has led to an untamable volume of data that competency committees must grapple with. In this article, the authors explore the roles of numbers and words on entrustment rating forms, focusing on the intended and optimal use(s) of each, with a focus on the words. They also unpack the problematic issue of dual-purposing words for both assessment and feedback. Words have enormous potential to elaborate, to contextualize, and to instruct; to realize this potential, educators must be crystal clear about their use. The authors set forth a number of possible ways to reconcile these tensions by more explicitly aligning words to purpose. For example, educators could focus written comments solely on assessment; create assessment encounters distinct from feedback encounters; or use different words collected from the same encounter to serve distinct feedback and assessment purposes. Finally, the authors address the tyranny of documentation created by programmatic assessment and urge caution in yielding to the temptation to reduce words to numbers to make them manageable. Instead, they encourage educators to preserve some educational encounters purely for feedback, and to consider that not all words need to become data.
Affiliation(s)
- Shiphra Ginsburg
- S. Ginsburg is professor of medicine, Department of Medicine, Sinai Health System and Faculty of Medicine, University of Toronto, scientist, Wilson Centre for Research in Education, University of Toronto, Toronto, Ontario, Canada, and Canada Research Chair in Health Professions Education; ORCID: http://orcid.org/0000-0002-4595-6650
- Christopher J Watling
- C.J. Watling is professor and director, Centre for Education Research and Innovation, Schulich School of Medicine & Dentistry, Western University, London, Ontario, Canada; ORCID: https://orcid.org/0000-0001-9686-795X
- Daniel J Schumacher
- D.J. Schumacher is associate professor of pediatrics, Cincinnati Children's Hospital Medical Center and University of Cincinnati College of Medicine, Cincinnati, Ohio; ORCID: https://orcid.org/0000-0001-5507-8452
| | - Andrea Gingerich
- A. Gingerich is assistant professor, Northern Medical Program, University of Northern British Columbia, Prince George, British Columbia, Canada; ORCID: https://orcid.org/0000-0001-5765-3975
| | - Rose Hatala
- R. Hatala is professor, Department of Medicine, and director, Clinical Educator Fellowship, Center for Health Education Scholarship, University of British Columbia, Vancouver, British Columbia, Canada; ORCID: https://orcid.org/0000-0003-0521-2590
| |
Collapse
|
22
|
Hernandez CA, Daroowalla F, LaRochelle JS, Ismail N, Tartaglia KM, Fagan MJ, Kisielewski M, Walsh K. Determining Grades in the Internal Medicine Clerkship: Results of a National Survey of Clerkship Directors. Acad Med 2021; 96:249-255. [PMID: 33149085] [DOI: 10.1097/acm.0000000000003815]
Abstract
PURPOSE Trust in and comparability of assessments are essential in clerkships in undergraduate medical education for many reasons, including ensuring competency in clinical skills and application of knowledge important for the transition to residency and throughout students' careers. The authors examined how assessments are used to determine internal medicine (IM) core clerkship grades across U.S. medical schools. METHODS A multisection web-based survey of core IM clerkship directors at 134 U.S. medical schools with membership in the Clerkship Directors in Internal Medicine was conducted in October through November 2018. The survey included a section on assessment practices to characterize current grading scales used, who determines students' final clerkship grades, the nature/type of summative assessments, and how assessments are weighted. Respondents were asked about perceptions of the influence of the National Board of Medical Examiners (NBME) Medicine Subject Examination (MSE) on students' priorities during the clerkship. RESULTS The response rate was 82.1% (110/134). There was considerable variability in the summative assessments and their weighting in determining final grades. The NBME MSE (91.8%), clinical performance (90.9%), professionalism (70.9%), and written notes (60.0%) were the most commonly used assessments. Clinical performance assessments and the NBME MSE accounted for the largest percentage of the total grade (on average 52.8% and 23.5%, respectively). Eighty-seven percent of respondents were concerned that students' focus on the NBME MSE performance detracted from patient care learning. CONCLUSIONS There was considerable variability in what IM clerkships assessed and how those assessments were translated into grades. The NBME MSE was a major contributor to the final grade despite concerns about the impact on patient care learning. 
These findings underscore the difficulty of comparing learners across institutions and advance discussions of how to improve the accuracy and comparability of grading in the clinical environment.
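As an illustration of the weighting reported above, a final clerkship grade can be computed as a weighted average of component scores. The weights for clinical performance and the NBME MSE are the survey averages from the abstract; the remaining components and their weights are hypothetical, not any school's actual policy.

```python
# Hypothetical sketch: final clerkship grade as a weighted average.
# Clinical performance (52.8%) and NBME MSE (23.5%) come from the
# survey averages above; the remaining split is invented for the example.
WEIGHTS = {
    "clinical_performance": 0.528,
    "nbme_mse": 0.235,
    "professionalism": 0.137,   # hypothetical weight
    "written_notes": 0.100,     # hypothetical weight
}

def final_grade(scores):
    """Combine component scores (0-100 scale) into a final grade."""
    assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(WEIGHTS[c] * scores[c] for c in WEIGHTS)
```

Because the weights sum to 1, a uniform set of component scores reproduces that score exactly; varying any single component shifts the grade in proportion to its weight.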
Affiliation(s)
- Caridad A Hernandez
- C.A. Hernandez is professor of medicine, Departments of Internal Medicine and Medical Education, University of Central Florida College of Medicine, Orlando, Florida
- Feroza Daroowalla
- F. Daroowalla is associate professor of medicine, Department of Medical Education, and Internal Medicine Clerkship Director, University of Central Florida College of Medicine, Orlando, Florida
- Jeffrey S LaRochelle
- J.S. LaRochelle is professor of medicine, Department of Medical Education, and assistant dean of medical education, University of Central Florida College of Medicine, Orlando, Florida
- Nadia Ismail
- N. Ismail is associate professor of medicine, Department of Medicine, and associate dean, curriculum, Baylor College of Medicine, Houston, Texas
- Kimberly M Tartaglia
- K.M. Tartaglia is associate professor of clinical medicine and pediatrics, Division of Hospital Medicine, The Ohio State University, Columbus, Ohio
- Mark J Fagan
- M.J. Fagan is professor of medicine emeritus, Department of Medicine, Alpert Medical School of Brown University, Providence, Rhode Island
- Michael Kisielewski
- M. Kisielewski is surveys and research manager, Alliance for Academic Internal Medicine, Alexandria, Virginia
- Katherine Walsh
- K. Walsh is associate professor of clinical internal medicine, Division of Hematology and Internal Medicine Inpatient Clerkship Director, The Ohio State University, Columbus, Ohio
23
Griffiths J, Schultz K, Han H, Dalgarno N. Feedback on feedback: a two-way street between residents and preceptors. Can Med Educ J 2021; 12:e32-e45. [PMID: 33680229] [PMCID: PMC7931483] [DOI: 10.36834/cmej.69913]
Abstract
BACKGROUND Workplace-based assessment (WBA), foundational to competency-based medical education, relies on preceptors providing feedback to residents. Preceptors, however, receive little timely, formative, specific, actionable feedback on the effectiveness of that feedback. Our study aimed to identify qualities of feedback that family medicine residents find useful and to inform improvement of preceptors' feedback-giving skills in a postgraduate medical education (PGME) training program. METHODS This study employed a two-phase exploratory design. Phase 1 collected qualitative data from preceptor feedback given to residents through Field Notes (FNs) and quantitative data from residents who rated the quality of the feedback given by preceptors. Phase 2 employed focus groups to explore ways in which residents are willing to provide preceptors with constructive feedback about the quality of the feedback they receive. Descriptive statistics and a thematic approach were used for data analysis. FINDINGS We collected 22 FNs identified by residents as being impactful to their learning; analysis of these FNs resulted in five themes. Functionality was then added to the electronic FNs allowing residents to flag impactful feedback with a "Thumbs Up" icon. Over one year, 895 of 8,496 FNs (11%) received a "Thumbs Up," for the following reasons: confirmation of learning (28.6%), practice improvement (21.2%), new learning (18.8%), motivation (17.7%), and evoking reflection (13.7%). Two focus groups (12 residents, convenience sampling) explored residents' perceptions of constructive feedback and their willingness to also provide constructive feedback to preceptors. CONCLUSION Adding constructive feedback to existing positive feedback choices will provide preceptors with holistic information about the impact of their feedback on learners, which, in turn, should allow them to provide more effective feedback to learners. However, power differentials, relationship impact, and institutional support were concerns for residents that would need to be addressed for this to be optimally operationalized.
Affiliation(s)
- Jane Griffiths
- Department of Family Medicine, Queen's University, Ontario, Canada
- Karen Schultz
- Department of Family Medicine, Queen's University, Ontario, Canada
- Han Han
- Department of Family Medicine, Queen's University, Ontario, Canada
- Nancy Dalgarno
- Office of Professional Development and Educational Scholarship, Queen's University, Ontario, Canada
24
Aakre CA, Maggio LA, Del Fiol G, Cook DA. Barriers and facilitators to clinical information seeking: a systematic review. J Am Med Inform Assoc 2019; 26:1129-1140. [PMID: 31127830] [DOI: 10.1093/jamia/ocz065]
Abstract
OBJECTIVE The study sought to identify barriers to and facilitators of point-of-care information seeking and use of knowledge resources. MATERIALS AND METHODS We searched MEDLINE, Embase, PsycINFO, and Cochrane Library from 1991 to February 2017. We included qualitative studies in any language exploring barriers to and facilitators of point-of-care information seeking or use of electronic knowledge resources. Two authors independently extracted data on users, study design, and study quality. We inductively identified specific barriers or facilitators and from these synthesized a model of key determinants of information-seeking behaviors. RESULTS Forty-five qualitative studies were included, reporting data derived from interviews (n = 26), focus groups (n = 21), ethnographies (n = 6), logs (n = 4), and usability studies (n = 2). Most studies were performed within the context of general medicine (n = 28) or medical specialties (n = 13). We inductively identified 58 specific barriers and facilitators and then created a model reflecting 5 key determinants of information-seeking behaviors: time includes subthemes of time availability, efficiency of information seeking, and urgency of information need; accessibility includes subthemes of hardware access, hardware speed, hardware portability, information restriction, and cost of resources; personal skills and attitudes includes subthemes of computer literacy, information-seeking skills, and contextual attitudes about information seeking; institutional attitudes, cultures, and policies includes subthemes describing external individual and institutional information-seeking influences; and knowledge resource features includes subthemes describing information-seeking efficiency, information content, information organization, resource familiarity, information credibility, information currency, workflow integration, compatibility of recommendations with local processes, and patient educational support. 
CONCLUSIONS Addressing these determinants of information-seeking behaviors may facilitate clinicians' question answering to improve patient care.
Affiliation(s)
- Christopher A Aakre
- Division of General Internal Medicine, Mayo Clinic, Rochester, Minnesota, USA
- Lauren A Maggio
- Department of Medicine, Uniformed Services University of the Health Sciences, Bethesda, Maryland, USA
- Guilherme Del Fiol
- Department of Biomedical Informatics, University of Utah School of Medicine, Salt Lake City, Utah, USA
- David A Cook
- Division of General Internal Medicine, Mayo Clinic, Rochester, Minnesota, USA
25
Hauer KE, Giang D, Kapp ME, Sterling R. Standardization in the MSPE: Key Tensions for Learners, Schools, and Residency Programs. Acad Med 2021; 96:44-49. [PMID: 32167965] [DOI: 10.1097/acm.0000000000003290]
Abstract
The Medical Student Performance Evaluation (MSPE), which summarizes a medical student's academic and professional undergraduate medical education performance and provides salient information during the residency selection process, faces persistent criticisms regarding heterogeneity and obscurity. Specifically, MSPEs do not always provide the same type or amount of information about students, especially from diverse schools, and important information is not always easy to find or interpret. To address these concerns, a key guiding principle from the Recommendations for Revising the MSPE Task Force of the Association of American Medical Colleges (AAMC) was to achieve "a level of standardization and transparency that facilitates the residency selection process." Benefits of standardizing the MSPE format include clarification of performance benchmarks or metrics, consistency across schools to enhance readability, and improved quality. In medical education, standardization may be an important mechanism to ensure accountability of the system for all learners, including those with varied backgrounds and socioeconomic resources. In this article, members of the aforementioned AAMC MSPE task force explore 5 tensions inherent in the pursuit of standardizing the MSPE: (1) presenting each student's individual characteristics and strengths in a way that is relevant, while also working with a standard format and providing standard content; (2) showcasing school-specific curricular strengths while also demonstrating standard evidence of readiness for internship; (3) defining and achieving the right amount of standardization so that the MSPE provides useful information, adds value to the residency selection process, and is efficient to read and understand; (4) balancing reporting with advocacy; and (5) maintaining standardization over time, especially given the tendency for the MSPE format and content to drift. 
Ongoing efforts to promote collaboration and trust across the undergraduate to graduate medical education continuum offer promise to reconcile these tensions and promote successful educational outcomes.
Affiliation(s)
- Karen E Hauer
- K.E. Hauer is associate dean, Assessment, and professor, Department of Medicine, University of California, San Francisco School of Medicine, San Francisco, California; ORCID: https://orcid.org/0000-0002-8812-4045
- Daniel Giang
- D. Giang is associate dean, Graduate Medical Education, and professor, Department of Neurology, Loma Linda University, Loma Linda, California
- Meghan E Kapp
- M.E. Kapp is assistant professor, Department of Pathology, Microbiology and Immunology, Vanderbilt University Medical Center, Nashville, Tennessee; ORCID: https://orcid.org/0000-0002-0252-3919
- Robert Sterling
- R. Sterling is associate professor, Department of Orthopaedic Surgery, Johns Hopkins University School of Medicine, Baltimore, Maryland; ORCID: https://orcid.org/0000-0003-2963-3162
26
Ko JJ, Ballard MS, Shenkier T, Simon J, Roze des Ordons A, Fyles G, Lefresne S, Hawley P, Chen C, McKenzie M, Ghement I, Sanders JJ, Bernacki R, Jones S. Serious Illness Conversation-Evaluation Exercise: A Novel Assessment Tool for Residents Leading Serious Illness Conversations. Palliat Med Rep 2020; 1:280-290. [PMID: 34223487] [PMCID: PMC8241377] [DOI: 10.1089/pmr.2020.0086]
Abstract
Background/Objectives: The serious illness conversation (SIC) is an evidence-based framework for conversations with patients about a serious illness diagnosis. The objective of our study was to develop and validate a novel tool, the SIC-Evaluation Exercise (SIC-Ex), to facilitate assessment of resident-led conversations with oncology patients. Design: We developed the SIC-Ex based on the SIC and on the Royal College of Physicians and Surgeons of Canada Medical Oncology milestones. Seven resident trainees and 10 evaluators were recruited. Each trainee conducted an SIC with a patient, which was videotaped. The evaluators watched the videos and evaluated each trainee by using the novel SIC-Ex and the reference Calgary-Cambridge guide (CCG) at months zero and three. We used Kane's framework to assess validity. Results: Intraclass correlation using average SIC-Ex scores showed a moderate level of inter-evaluator agreement (range 0.523–0.822). Most evaluators rated a particular resident similarly to the group average, except for one to two evaluator outliers in each domain. Test–retest reliability showed a moderate level of consistency among SIC-Ex scores at months zero and three. Global ratings at zero and three months showed fair to good/very good inter-evaluator correlation. Pearson correlation coefficients comparing total SIC-Ex and CCG scores were high for most evaluators. Self-scores by trainees did not correlate well with scores by evaluators. Conclusions: The SIC-Ex is the first assessment tool that incorporates the SICG framework for evaluation of resident competence. The SIC-Ex is conceptually related to, but more specific than, the CCG in evaluating serious illness conversation skills.
Affiliation(s)
- Jenny J Ko
- Department of Medical Oncology, University of British Columbia, BC Cancer-Abbotsford, Abbotsford, British Columbia, Canada
- Mark S Ballard
- Department of Internal Medicine, Chilliwack General Hospital, Chilliwack, British Columbia, Canada
- Tamara Shenkier
- Department of Medical Oncology, BC Cancer-Vancouver, Vancouver, British Columbia, Canada
- Jessica Simon
- Department of Oncology, University of Calgary, Calgary, Alberta, Canada
- Gillian Fyles
- BC Centre for Palliative Care, Vancouver, British Columbia, Canada
- Shilo Lefresne
- Department of Radiation Oncology, BC Cancer-Vancouver, Vancouver, British Columbia, Canada
- Philippa Hawley
- Pain and Symptom Management/Palliative Care Program, BC Cancer-Vancouver, Vancouver, British Columbia, Canada
- Charlie Chen
- Department of Oncology, University of Calgary, Calgary, Alberta, Canada
- Michael McKenzie
- Department of Radiation Oncology, BC Cancer-Vancouver, Vancouver, British Columbia, Canada
- Justin J Sanders
- Ariadne Labs, Dana-Farber Cancer Institute, Boston, Massachusetts, USA
- Rachelle Bernacki
- Ariadne Labs, Dana-Farber Cancer Institute, Boston, Massachusetts, USA
- Scott Jones
- Vancouver Coastal Health, Vancouver, British Columbia, Canada
27
Ginsburg S, Gingerich A, Kogan JR, Watling CJ, Eva KW. Idiosyncrasy in Assessment Comments: Do Faculty Have Distinct Writing Styles When Completing In-Training Evaluation Reports? Acad Med 2020; 95:S81-S88. [PMID: 32769454] [DOI: 10.1097/acm.0000000000003643]
Abstract
PURPOSE Written comments are gaining traction as robust sources of assessment data. Compared with the structure of numeric scales, what faculty choose to write is ad hoc, leading to idiosyncratic differences in what is recorded. This study explores which aspects of writing style are determined by the faculty offering comment and which by the trainee being commented upon. METHOD The authors compiled in-training evaluation report comment data generated from 2012 to 2015 by 4 large North American internal medicine training programs. The Linguistic Inquiry and Word Count (LIWC) tool was used to categorize and quantify the language contained. Generalizability theory was used to determine whether faculty could be reliably discriminated from one another based on writing style. Correlations and ANOVAs were used to determine which styles were related to faculty or trainee demographics. RESULTS Datasets contained 23 to 142 faculty per program, who provided 549 to 2,666 assessments on 161 to 989 trainees. Faculty could easily be discriminated from one another using a variety of LIWC metrics, including word count, words per sentence, and the use of "clout" words. These patterns appeared person specific and did not reflect demographic factors such as gender or rank. The metrics were similarly not consistently associated with trainee factors such as postgraduate year or gender. CONCLUSIONS Faculty seem to have detectable writing styles that are relatively stable across the trainees they assess, which may represent an under-recognized source of construct-irrelevant variance. If written comments are to meaningfully contribute to decision making, we need to understand and account for idiosyncratic writing styles.
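A minimal sketch of the style-metric idea described above: compute simple surrogates for two of the cited LIWC metrics (word count, words per sentence) and ask whether faculty writers differ using a one-way ANOVA. This is illustrative only; LIWC is proprietary software, the tokenization here is naive, and the study's actual analysis relied on generalizability theory rather than this test.

```python
# Illustrative sketch, not the authors' code: crude word-count and
# words-per-sentence features, then an F-test across faculty writers.
import re
from scipy.stats import f_oneway

def style_features(comment):
    """Return (word count, words per sentence) for one comment."""
    words = re.findall(r"[A-Za-z']+", comment)
    sentences = [s for s in re.split(r"[.!?]+", comment) if s.strip()]
    return len(words), len(words) / max(len(sentences), 1)

def words_per_sentence_by_faculty(comments_by_faculty):
    """comments_by_faculty: dict of faculty_id -> list of comment strings."""
    return {f: [style_features(c)[1] for c in cs]
            for f, cs in comments_by_faculty.items()}

def faculty_style_anova(comments_by_faculty):
    """One-way ANOVA: do faculty differ in mean words per sentence?"""
    groups = list(words_per_sentence_by_faculty(comments_by_faculty).values())
    return f_oneway(*groups)
```

A small p-value would indicate that the style metric separates writers, echoing the paper's finding that faculty are easily discriminated by such features.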
Affiliation(s)
- Shiphra Ginsburg
- S. Ginsburg is professor of medicine, Department of Medicine, Faculty of Medicine, University of Toronto, scientist, Wilson Centre for Research in Education, University Health Network, University of Toronto, Toronto, Ontario, Canada, and Canada Research Chair in Health Professions Education; ORCID: http://orcid.org/0000-0002-4595-6650
- Andrea Gingerich
- A. Gingerich is assistant professor, Northern Medical Program, University of Northern British Columbia, Prince George, British Columbia, Canada; ORCID: https://orcid.org/0000-0001-5765-3975
- Jennifer R Kogan
- J.R. Kogan is professor and associate dean for student success and professional development, Perelman School of Medicine at the University of Pennsylvania, Philadelphia, Pennsylvania; ORCID: https://orcid.org/0000-0001-8426-9506
- Christopher J Watling
- C.J. Watling is professor and director, Centre for Education Research and Innovation, Schulich School of Medicine & Dentistry, Western University, London, Ontario, Canada; ORCID: https://orcid.org/0000-0001-9686-795X
- Kevin W Eva
- K.W. Eva is professor and director of education research and scholarship, Department of Medicine, and associate director and senior scientist, Centre for Health Education Scholarship, University of British Columbia, Vancouver, British Columbia, Canada; ORCID: http://orcid.org/0000-0002-8672-2500
28
Odorizzi S, Cheung WJ, Sherbino J, Lee AC, Thurgur L, Frank JR. A Signal Through the Noise: Do Professionalism Concerns Impact the Decision Making of Competence Committees? Acad Med 2020; 95:896-901. [PMID: 31577582] [DOI: 10.1097/acm.0000000000003005]
Abstract
PURPOSE To characterize how professionalism concerns influence individual reviewers' decisions about resident progression using simulated competence committee (CC) reviews. METHOD In April 2017, the authors conducted a survey of 25 Royal College of Physicians and Surgeons of Canada emergency medicine residency program directors and senior faculty who were likely to function as members of a CC (or equivalent) at their institution. Participants took a survey with 12 resident portfolios, each containing hypothetical formative and summative assessments. Six portfolios represented residents progressing as expected (PAE) and 6 represented residents not progressing as expected (NPAE). A professionalism variable (PV) was developed for each portfolio. Two counterbalanced surveys were developed in which 6 portfolios contained a PV and 6 portfolios did not (for each PV condition, 3 portfolios represented residents PAE and 3 represented residents NPAE). Participants were asked to make progression decisions based on each portfolio. RESULTS Without PVs, the consistency of participants giving scores of 1 or 2 (i.e., little or no need for educational intervention) to residents PAE and to those NPAE was 92% and 10%, respectively. When a PV was added, the consistency decreased by 34% for residents PAE and increased by 4% for those NPAE (P = .01). CONCLUSIONS When reviewing a simulated resident portfolio, individual reviewer scores for residents PAE were responsive to the addition of professionalism concerns. Considering this, educators using a CC should have a system to report, collect, and document professionalism issues.
Affiliation(s)
- Scott Odorizzi
- S. Odorizzi is postgraduate year 5 resident physician, Department of Emergency Medicine, University of Ottawa, Ottawa, Ontario, Canada. W.J. Cheung is assistant professor and staff physician, Department of Emergency Medicine, University of Ottawa, Ottawa, Ontario, Canada. J. Sherbino is professor, Division of Emergency Medicine, Department of Medicine, and assistant dean, health professions education research, McMaster University, Hamilton, Ontario, Canada. A.C. Lee is conjoint associate professor, School of Medicine and Public Health, The University of Newcastle Australia, Callaghan, New South Wales, Australia, and psychometrician, Royal Australasian College of Physicians, Sydney, New South Wales, Australia. L. Thurgur is assistant professor and staff physician, Department of Emergency Medicine, and program director, Royal College Emergency Medicine Residency Program, University of Ottawa, Ottawa, Ontario, Canada. J.R. Frank is associate professor and staff physician, Department of Emergency Medicine, University of Ottawa, and director, Specialty Education, Strategy and Standards, Office of Specialty Education, Royal College of Physicians and Surgeons of Canada, Ottawa, Ontario, Canada
29
Kelly MS, Mooney CJ, Rosati JF, Braun MK, Thompson Stone R. Education Research: The Narrative Evaluation Quality Instrument: Development of a tool to assess the assessor. Neurology 2020; 94:91-95. [PMID: 31932402] [DOI: 10.1212/wnl.0000000000008794]
Abstract
OBJECTIVE Determining the quality of narrative evaluations used to assess medical student neurology clerkship performance remains a challenge. This study sought to develop a tool to comprehensively and systematically assess the quality of student narrative evaluations. METHODS The Narrative Evaluation Quality Instrument (NEQI) was created to assess several components within clerkship narrative evaluations: performance domains, specificity, and usefulness to the learner. In this retrospective study, 5 investigators scored 123 narrative evaluations using the NEQI. Inter-rater reliability was estimated by calculating intraclass correlation coefficients (ICCs) across the 615 NEQI scores. RESULTS The average overall NEQI score was 6.4 (SD 2.9), with mean component arm scores of 2.6 for performance domains (SD 0.9), 1.8 for specificity (SD 1.1), and 2.0 for usefulness (SD 1.4). Each component arm exhibited moderate reliability: performance domains ICC 0.65 (95% confidence interval [CI] 0.58-0.72), specificity ICC 0.69 (95% CI 0.61-0.77), and usefulness ICC 0.73 (95% CI 0.66-0.80). Overall NEQI score exhibited good reliability (0.81; 95% CI 0.77-0.86). CONCLUSION The NEQI is a novel, reliable tool for comprehensively assessing the quality of narrative evaluations of neurology clerkship students and will enhance the study of interventions seeking to improve clerkship evaluation.
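For readers unfamiliar with the reliability statistic above, here is a minimal numpy sketch of one common form, ICC(2,1) (two-way random effects, absolute agreement, single rater), computed from a complete subjects-by-raters matrix. The specific ICC variant used in the study is not stated here, so treat the choice of form as an assumption for illustration.

```python
# Illustrative sketch: ICC(2,1) from the standard two-way ANOVA
# mean squares, assuming a complete n-subjects x k-raters matrix.
import numpy as np

def icc2_1(ratings):
    """Two-way random-effects, absolute-agreement, single-rater ICC."""
    ratings = np.asarray(ratings, dtype=float)
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)   # per-subject means
    col_means = ratings.mean(axis=0)   # per-rater means

    # ANOVA sums of squares
    ss_rows = k * ((row_means - grand) ** 2).sum()
    ss_cols = n * ((col_means - grand) ** 2).sum()
    ss_total = ((ratings - grand) ** 2).sum()
    ss_err = ss_total - ss_rows - ss_cols

    msr = ss_rows / (n - 1)                  # between-subjects mean square
    msc = ss_cols / (k - 1)                  # between-raters mean square
    mse = ss_err / ((n - 1) * (k - 1))       # residual mean square

    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
```

Because ICC(2,1) penalizes systematic rater offsets, a constant bias between raters lowers the coefficient even when their rank ordering of subjects agrees perfectly.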
Affiliation(s)
- Michael S Kelly
- Christopher J Mooney
- Justin F Rosati
- Melanie K Braun
- Robert Thompson Stone
- All authors: From the Department of Neurology (R.T.S., J.R., M.B.), University of Rochester School of Medicine and Dentistry (C.M., M.K.), NY
30
Diller D, Cooper S, Jain A, Lam CN, Riddell J. Which Emergency Medicine Milestone Sub-competencies are Identified Through Narrative Assessments? West J Emerg Med 2019; 21:173-179. [PMID: 31913841] [PMCID: PMC6948702] [DOI: 10.5811/westjem.2019.12.44468]
Abstract
Introduction Evaluators use assessment data to make judgments on resident performance within the Accreditation Council for Graduate Medical Education (ACGME) milestones framework. While workplace-based narrative assessments (WBNA) offer advantages to rating scales, validity evidence for their use in assessing the milestone sub-competencies is lacking. This study aimed to determine the frequency of sub-competencies assessed through WBNAs in an emergency medicine (EM) residency program. Methods We performed a retrospective analysis of WBNAs of postgraduate year (PGY) 2–4 residents. A shared mental model was established by reading and discussing the milestones framework, and we created a guide for coding WBNAs to the milestone sub-competencies in an iterative process. Once inter-rater reliability was satisfactory, raters coded each WBNA to the 23 EM milestone sub-competencies. Results We analyzed 2517 WBNAs. An average of 2.04 sub-competencies were assessed per WBNA. The sub-competencies most frequently identified were multitasking, medical knowledge, practice-based performance improvement, patient-centered communication, and team management. The sub-competencies least frequently identified were pharmacotherapy, airway management, anesthesia and acute pain management, goal-directed focused ultrasound, wound management, and vascular access. Overall, the frequency with which WBNAs assessed individual sub-competencies was low, with 14 of the 23 sub-competencies being assessed in less than 5% of WBNAs. Conclusion WBNAs identify few milestone sub-competencies. Faculty assessed similar sub-competencies related to interpersonal and communication skills, practice-based learning and improvement, and medical knowledge, while neglecting sub-competencies related to patient care and procedural skills. 
These findings can help shape faculty development programs designed to improve assessments of specific workplace behaviors and provide more robust data for the summative assessment of residents.
Affiliation(s)
- David Diller
- LAC+USC Medical Center, Keck School of Medicine of the University of Southern California, Department of Emergency Medicine, Los Angeles, California
- Shannon Cooper
- Henry Ford Allegiance Health, Department of Emergency Medicine, Jackson, Michigan
- Aarti Jain
- LAC+USC Medical Center, Keck School of Medicine of the University of Southern California, Department of Emergency Medicine, Los Angeles, California
- Chun Nok Lam
- LAC+USC Medical Center, Keck School of Medicine of the University of Southern California, Department of Emergency Medicine, Los Angeles, California
- Jeff Riddell
- LAC+USC Medical Center, Keck School of Medicine of the University of Southern California, Department of Emergency Medicine, Los Angeles, California
31
Tekian A, Park YS, Tilton S, Prunty PF, Abasolo E, Zar F, Cook DA. Competencies and Feedback on Internal Medicine Residents' End-of-Rotation Assessments Over Time: Qualitative and Quantitative Analyses. Acad Med 2019; 94:1961-1969. [PMID: 31169541] [PMCID: PMC6882536] [DOI: 10.1097/acm.0000000000002821]
Abstract
PURPOSE To examine how qualitative narrative comments and quantitative ratings from end-of-rotation assessments change for a cohort of residents from entry to graduation, and explore associations between comments and ratings. METHOD The authors obtained end-of-rotation quantitative ratings and narrative comments for 1 cohort of internal medicine residents at the University of Illinois at Chicago College of Medicine from July 2013-June 2016. They inductively identified themes in comments, coded orientation (praising/critical) and relevance (specificity and actionability) of feedback, examined associations between codes and ratings, and evaluated changes in themes and ratings across years. RESULTS Data comprised 1,869 assessments (828 comments) on 33 residents. Five themes aligned with ACGME competencies (interpersonal and communication skills, professionalism, medical knowledge, patient care, and systems-based practice), and 3 did not (personal attributes, summative judgment, and comparison to training level). Work ethic was the most frequent subtheme. Comments emphasized medical knowledge more in year 1 and focused more on autonomy, leadership, and teaching in later years. Most comments (714/828 [86%]) contained high praise, and 412/828 (50%) were very relevant. Average ratings correlated positively with orientation (β = 0.46, P < .001) and negatively with relevance (β = -0.09, P = .01). Ratings increased significantly with each training year (year 1, mean [standard deviation]: 5.31 [0.59]; year 2: 5.58 [0.47]; year 3: 5.86 [0.43]; P < .001). CONCLUSIONS Narrative comments address resident attributes beyond the ACGME competencies and change as residents progress. Lower quantitative ratings are associated with more specific and actionable feedback.
Affiliation(s)
- Ara Tekian: A. Tekian is professor and associate dean for international affairs, Department of Medical Education, University of Illinois at Chicago College of Medicine, Chicago, Illinois; ORCID: https://orcid.org/0000-0002-9252-1588
- Yoon Soo Park: Y.S. Park is associate professor, Department of Medical Education, University of Illinois at Chicago College of Medicine, Chicago, Illinois; ORCID: http://orcid.org/0000-0001-8583-4335
- Sarette Tilton: S. Tilton is a PharmD candidate, University of Illinois at Chicago College of Pharmacy, Chicago, Illinois
- Patrick F. Prunty: P.F. Prunty is a PharmD candidate, University of Illinois at Chicago College of Pharmacy, Chicago, Illinois
- Eric Abasolo: E. Abasolo is a PharmD candidate, University of Illinois at Chicago College of Pharmacy, Chicago, Illinois
- Fred Zar: F. Zar is professor and program director, Department of Medicine, University of Illinois at Chicago College of Medicine, Chicago, Illinois
- David A. Cook: D.A. Cook is professor of medicine and medical education and associate director, Office of Applied Scholarship and Education Science, and consultant, Division of General Internal Medicine, Mayo Clinic College of Medicine, Rochester, Minnesota; ORCID: https://orcid.org/0000-0003-2383-4633
32
Tremblay G, Carmichael PH, Maziade J, Grégoire M. Detection of Residents With Progress Issues Using a Keyword-Specific Algorithm. J Grad Med Educ 2019; 11:656-662. [PMID: 31871565 PMCID: PMC6919172 DOI: 10.4300/jgme-d-19-00386.1]
Abstract
BACKGROUND The literature suggests that specific keywords included in summative rotation assessments might be an early indicator of abnormal progress or failure. OBJECTIVE This study aims to determine the possible relationship between specific keywords on in-training evaluation reports (ITERs) and subsequent abnormal progress or failure. The goal is to create a functional algorithm to identify residents at risk of failure. METHODS A database of all ITERs from all residents training in accredited programs at Université Laval between 2001 and 2013 was created. An instructional designer reviewed all ITERs and proposed terms associated with reinforcing and underperformance feedback. An algorithm based on these keywords was constructed by recursive partitioning using classification and regression tree methods. The developed algorithm was tuned to achieve 100% sensitivity while maximizing specificity. RESULTS There were 41,618 ITERs for 3,292 registered residents. Residents with failure to progress were detected for family medicine (6%, 67 of 1,129) and 36 other specialties (4%, 78 of 2,163), while the positive predictive values were 23.3% and 23.4%, respectively. The low positive predictive value may be a reflection of residents improving their performance after receiving feedback or a reluctance by supervisors to ascribe a "fail" or "in difficulty" score on the ITERs. CONCLUSIONS Classification and regression trees may be helpful to identify pertinent keywords and create an algorithm, which may be implemented in an electronic assessment system to detect future residents at risk of poor performance.
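The screening idea described in this abstract (flag ITERs containing underperformance keywords, tuned for 100% sensitivity at the best achievable specificity) can be sketched in a few lines. The keywords, sample reports, and the simple any-match rule below are invented illustrations only; the authors built their rule with CART-derived splits, not this heuristic.

```python
# Hypothetical keyword screen for in-training evaluation reports (ITERs).
# Keywords and reports are invented placeholders, not the study's actual terms.
UNDERPERFORMANCE_KEYWORDS = {"difficulty", "remediation", "below expectations", "unsafe"}

def flag_iter(text: str) -> bool:
    """Return True if the report text contains any underperformance keyword."""
    lowered = text.lower()
    return any(kw in lowered for kw in UNDERPERFORMANCE_KEYWORDS)

def screen(reports):
    """reports: list of (text, truly_at_risk) pairs -> (sensitivity, specificity).
    Assumes at least one at-risk and one not-at-risk report (toy setting)."""
    tp = fn = tn = fp = 0
    for text, at_risk in reports:
        flagged = flag_iter(text)
        if at_risk:
            tp, fn = tp + flagged, fn + (not flagged)
        else:
            tn, fp = tn + (not flagged), fp + flagged
    return tp / (tp + fn), tn / (tn + fp)

sample = [
    ("Shows difficulty organizing a differential; needs a remediation plan", True),
    ("Consistently below expectations on ward rounds", True),
    ("Excellent communicator, reliable and well organized", False),
    ("Solid knowledge base, works well with the team", False),
    ("Strong clinical reasoning; occasionally late with notes", False),
]
sensitivity, specificity = screen(sample)
```

In the study's data the same 100%-sensitivity tuning produced a low positive predictive value, which is the expected trade-off for a screen designed never to miss an at-risk resident.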
33
Thai TTN, Pham TT, Nguyen KT, Nguyen PM, Derese A. Can a family medicine rotation improve medical students' knowledge, skills and attitude towards primary care in Vietnam? A pre-test-post-test comparison and qualitative survey. Trop Med Int Health 2019; 25:264-275. [PMID: 31674702 DOI: 10.1111/tmi.13326]
Abstract
OBJECTIVES Well-designed studies on the impact of a family medicine rotation on medical students are rare, and very few studies include a qualitative component. This study aimed to determine the improvement of medical students' knowledge, communication skills and attitude towards primary care and explore their perceptions after rotations, in comparison with a control group. METHODS We used a mixed-methods design, comprising a pre-test-post-test comparison between a sample of trained students who took family medicine rotations and a control group, plus a qualitative survey. The measurement of improvement included (i) multiple choice question testing, (ii) objective structured checklist examinations, (iii) self-reporting and (iv) interviews and focus group discussions. Data were collected from August 2017 to June 2018. RESULTS There were 696 students in the trained group and 617 controls. The two groups' baseline scores in knowledge, communication skills and attitude were not significantly different. Knowledge covering five domains of family medicine (Pearson's r from 0.6 to 0.9) improved significantly, as did attitudes towards primary care in the trained group. There were no differences in communication and counselling skills between the two groups for four situations, but for two (health check-ups and mental health care) skills were significantly improved (Pearson's r from 0.28 to 0.43). The qualitative survey showed highly positive feedback from trained students. CONCLUSIONS The family medicine rotation significantly improved students' knowledge and attitude towards primary care and some communication skills. Further studies should investigate students' interest in and career choice for this discipline.
Affiliation(s)
- Thuy T N Thai: Department of Family Medicine, Faculty of Medicine, Can Tho University of Medicine and Pharmacy, Can Tho, Vietnam
- Tam T Pham: Faculty of Public Health, Can Tho University of Medicine and Pharmacy, Can Tho, Vietnam
- Kien T Nguyen: Faculty of Medicine, Can Tho University of Medicine and Pharmacy, Can Tho, Vietnam
- Phuong M Nguyen: Faculty of Medicine and Skills Training Unit, Can Tho University of Medicine and Pharmacy, Can Tho, Vietnam
- Anselme Derese: Department of Public Health and Primary Care, Ghent University, Ghent, Belgium
34
Young JQ. Advancing Our Understanding of Narrative Comments Generated by Direct Observation Tools: Lessons From the Psychopharmacotherapy-Structured Clinical Observation. J Grad Med Educ 2019; 11:570-579. [PMID: 31636828 PMCID: PMC6795331 DOI: 10.4300/jgme-d-19-00207.1]
Abstract
BACKGROUND While prior research has focused on the validity of quantitative ratings generated by direct observation tools, much less is known about the written comments. OBJECTIVE This study examines the quality of written comments and their relationship with checklist scores generated by a direct observation tool, the Psychopharmacotherapy-Structured Clinical Observation (P-SCO). METHODS From 2008 to 2012, faculty in a postgraduate year 3 psychiatry outpatient clinic completed 601 P-SCOs. Twenty-five percent were randomly selected from each year; the sample included 8 faculty and 57 residents. To assess quality, comments were coded for valence (reinforcing or corrective), behavioral specificity, and content. To assess the relationship between comments and scores, the authors calculated the correlation between comment and checklist score valence and examined the degree to which comments and checklist scores addressed the same content. RESULTS Ninety-one percent of the comments were behaviorally specific. Sixty percent were reinforcing, and 40% were corrective. Eight themes were identified, including 2 constructs not adequately represented by the checklist. Comment and checklist score valence was moderately correlated (Spearman's rho = 0.57, P < .001). Sixty-seven percent of high and low checklist scores were associated with a comment of the same valence and content. Only 50% of overall comments were associated with a checklist score of the same valence and content. CONCLUSIONS A direct observation tool such as the P-SCO can generate high-quality written comments. Narrative comments both explain checklist scores and convey unique content. Thematic coding of comments can improve the content validity of a checklist.
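The valence-score relationship above is summarized with Spearman's rho, i.e. the Pearson correlation computed on rank vectors (with ties given average ranks). A self-contained sketch follows; the valence/score toy data are invented, not the study's.

```python
def _ranks(xs):
    """1-based average ranks; tied values share the mean of their ranks."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(xs):
        j = i
        while j + 1 < len(xs) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        mean_rank = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = mean_rank
        i = j + 1
    return ranks

def spearman_rho(x, y):
    """Spearman rank correlation: Pearson correlation of the rank vectors."""
    rx, ry = _ranks(x), _ranks(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) * sum((b - my) ** 2 for b in ry)) ** 0.5
    return cov / den

# Invented toy data: comment valence (+1 reinforcing, -1 corrective) vs checklist score.
valence = [1, 1, -1, 1, -1, -1, 1, 1]
scores = [5, 4, 2, 5, 3, 1, 4, 3]
rho = spearman_rho(valence, scores)
```

A rho near the study's 0.57 would likewise indicate that reinforcing comments tend to accompany higher checklist scores without the two being redundant.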
35
Frank AK, O'Sullivan P, Mills LM, Muller-Juge V, Hauer KE. Clerkship Grading Committees: the Impact of Group Decision-Making for Clerkship Grading. J Gen Intern Med 2019; 34:669-676. [PMID: 30993615 PMCID: PMC6502934 DOI: 10.1007/s11606-019-04879-x]
Abstract
BACKGROUND Faculty and students debate the fairness and accuracy of medical student clerkship grades. Group decision-making is a potential strategy to improve grading. OBJECTIVE To explore how one school's grading committee members integrate assessment data to inform grade decisions and to identify the committees' benefits and challenges. DESIGN This qualitative study used semi-structured interviews with grading committee chairs and members conducted between November 2017 and March 2018. PARTICIPANTS Participants included the eight core clerkship directors, who chaired their grading committees. We randomly selected other committee members to invite, for a maximum of three interviews per clerkship. APPROACH Interviews were recorded, transcribed, and analyzed using inductive content analysis. KEY RESULTS We interviewed 17 committee members. Within and across specialties, committee members had distinct approaches to prioritizing and synthesizing assessment data. Participants expressed concerns about the quality of assessments, necessitating careful scrutiny of language, assessor identity, and other contextual factors. Committee members were concerned about how unconscious bias might impact assessors, but they felt minimally impacted at the committee level. When committee members knew students personally, they felt tension about how to use the information appropriately. Participants described high agreement within their committees; debate was more common when site directors reviewed students' files from other sites prior to meeting. Participants reported multiple committee benefits including faculty development and fulfillment, as well as improved grading consistency, fairness, and transparency. Groupthink and a passive approach to bias emerged as the two main threats to optimal group decision-making. 
CONCLUSIONS Grading committee members view their practices as advantageous over individual grading, but they feel limited in their ability to address grading fairness and accuracy. Recommendations and support may help committees broaden their scope to address these aspirations.
Affiliation(s)
- Annabel K Frank: Department of Medicine, University of California, San Francisco, San Francisco, CA, USA; Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Patricia O'Sullivan: Department of Medicine, University of California, San Francisco, San Francisco, CA, USA
- Lynnea M Mills: Department of Medicine, University of California, San Francisco, San Francisco, CA, USA
- Virginie Muller-Juge: Department of Medicine, University of California, San Francisco, San Francisco, CA, USA
- Karen E Hauer: Department of Medicine, University of California, San Francisco, San Francisco, CA, USA
36
Lefebvre C, Hiestand B, Glass C, Masneri D, Hosmer K, Hunt M, Hartman N. Examining the Effects of Narrative Commentary on Evaluators' Summative Assessments of Resident Performance. Eval Health Prof 2018; 43:159-161. [DOI: 10.1177/0163278718820415]
Abstract
Anchor-based, end-of-shift ratings are commonly used to conduct performance assessments of resident physicians. These performance evaluations often include narrative assessments, such as solicited or "free-text" commentary. Although narrative commentary can help to create a more detailed and specific assessment of performance, there are limited data describing the effects of narrative commentary on the global assessment process. This single-group, observational study examined the effect of narrative comments on global performance assessments. A subgroup of the clinical competency committee, blinded to resident identity, assigned a single, consensus-based performance score (1-6) to each resident based solely on end-of-shift milestone scores. De-identified narrative comments from end-of-shift evaluations were then included and the process was repeated. We compared milestone-only scores to milestone plus narrative commentary scores using a nonparametric sign test. During the study period, 953 end-of-shift evaluations were submitted on 41 residents. Of these, 535 evaluations included free-text narrative comments. In 17 of the 41 observations, performance scores changed after the addition of narrative comments. In two cases, scores decreased with the addition of free-text commentary. In 15 cases, scores increased. The frequency of net positive change was significant (p = .0023). The addition of narrative commentary to anchor-based ratings significantly influenced the global performance assessment of Emergency Medicine residents by a committee of educators. Descriptive commentary collected at the end of shift may inform more meaningful appraisal of a resident's progress in a milestone-based paradigm. The authors recommend clinical training programs collect unstructured narrative impressions of residents' performance from supervising faculty.
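The reported p value can be reproduced with an exact two-sided sign test on the 15 score increases versus 2 decreases; the 24 unchanged scores are ties and drop out of the test. A quick check:

```python
from math import comb

def sign_test_p(increases: int, decreases: int) -> float:
    """Exact two-sided sign test: under H0, each changed score is equally
    likely to move up or down (ties are excluded before calling this)."""
    n = increases + decreases
    k = max(increases, decreases)
    tail = sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n
    return min(1.0, 2 * tail)

p = sign_test_p(increases=15, decreases=2)  # ≈ .0023, matching the abstract
```

The match with the published p = .0023 confirms the analysis is the standard exact binomial sign test on the 17 changed scores.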
Affiliation(s)
- Cedric Lefebvre, Brian Hiestand, Casey Glass, David Masneri, Kathleen Hosmer, Meagan Hunt, Nicholas Hartman: Department of Emergency Medicine, Wake Forest School of Medicine, Winston-Salem, NC, USA
37
What do quantitative ratings and qualitative comments tell us about general surgery residents' progress toward independent practice? Evidence from a 5-year longitudinal cohort. Am J Surg 2018; 217:288-295. [PMID: 30309619 DOI: 10.1016/j.amjsurg.2018.09.031]
Abstract
BACKGROUND This study examines the alignment of quantitative and qualitative assessment data in end-of-rotation evaluations using longitudinal cohorts of residents progressing throughout the five-year general surgery residency. METHODS Rotation evaluation data were extracted for 171 residents who trained between July 2011 and July 2016. Data included 6069 rotation evaluation forms completed by 38 faculty members and 164 peer-residents. Qualitative comments mapped to general surgery milestones were coded for positive/negative feedback and relevance. RESULTS Quantitative evaluation scores were significantly correlated with positive/negative feedback, r = 0.52, and relevance, r = -0.20, p < .001. Themes included feedback on leadership, teaching contribution, medical knowledge, work ethic, patient care, and ability to work in a team-based setting. Faculty comments focused on technical and clinical abilities; comments from peers focused on professionalism and interpersonal relationships. CONCLUSIONS We found differences in the themes emphasized as residents progressed. These findings underscore the need to improve our understanding of how faculty synthesize assessment data.
38
Cheung WJ, Dudek NL, Wood TJ, Frank JR. Supervisor-trainee continuity and the quality of work-based assessments. Med Educ 2017; 51:1260-1268. [PMID: 28971502 DOI: 10.1111/medu.13415]
Abstract
CONTEXT Work-based assessments (WBAs) represent an increasingly important means of reporting expert judgements of trainee competence in clinical practice. However, the quality of WBAs completed by clinical supervisors is of concern. The episodic and fragmented interaction that often occurs between supervisors and trainees has been proposed as a barrier to the completion of high-quality WBAs. OBJECTIVES The primary purpose of this study was to determine the effect of supervisor-trainee continuity on the quality of assessments documented on daily encounter cards (DECs), a common form of WBA. The relationship between trainee performance and DEC quality was also examined. METHODS Daily encounter cards representing three differing degrees of supervisor-trainee continuity (low, intermediate, high) were scored by two raters using the Completed Clinical Evaluation Report Rating (CCERR), a previously published nine-item quantitative measure of DEC quality. An analysis of variance (ANOVA) was performed to compare mean CCERR scores among the three groups. Linear regression analysis was conducted to examine the relationship between resident performance and DEC quality. RESULTS Differences in mean CCERR scores were observed between the three continuity groups (p = 0.02); however, the magnitude of the absolute differences was small (partial eta-squared = 0.03) and not educationally meaningful. Linear regression analysis demonstrated a significant inverse relationship between resident performance and CCERR score (p < 0.001, r2 = 0.18). This inverse relationship was observed in both groups of on-service residents (p = 0.001, r2 = 0.25; p = 0.04, r2 = 0.19), but not in the Off-service group (p = 0.62, r2 = 0.05). CONCLUSIONS Supervisor-trainee continuity did not have an educationally meaningful influence on the quality of assessments documented on DECs.
However, resident performance was found to affect assessor behaviours in the On-service group, whereas DEC quality remained poor regardless of performance in the Off-service group. The findings suggest that greater attention should be given to determining ways of improving the quality of assessments reported for off-service residents, as well as for those residents demonstrating appropriate clinical competence progression.
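The "small" effect reported above is quantified by partial eta-squared, the share of variance attributable to the factor after removing other effects. As a reminder of the computation (the sums of squares below are invented for illustration, not the study's):

```python
def partial_eta_squared(ss_effect: float, ss_error: float) -> float:
    """Partial eta-squared for an ANOVA factor: SS_effect / (SS_effect + SS_error)."""
    return ss_effect / (ss_effect + ss_error)

# Invented sums of squares; a value around 0.03 is conventionally a small effect,
# consistent with the abstract's judgement of "not educationally meaningful".
effect_size = partial_eta_squared(ss_effect=3.0, ss_error=97.0)
```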
Affiliation(s)
- Warren J Cheung: Department of Emergency Medicine, University of Ottawa, Ottawa, Ontario, Canada
- Nancy L Dudek: Division of Physical Medicine and Rehabilitation, Department of Medicine, University of Ottawa, Ottawa, Ontario, Canada
- Timothy J Wood: Department of Innovation in Medical Education, University of Ottawa, Ottawa, Ontario, Canada
- Jason R Frank: Department of Emergency Medicine, University of Ottawa, Ottawa, Ontario, Canada; Royal College of Physicians and Surgeons of Canada, Ottawa, Ontario, Canada
39
Wilbur K. Does faculty development influence the quality of in-training evaluation reports in pharmacy? BMC Med Educ 2017; 17:222. [PMID: 29157239 PMCID: PMC5697106 DOI: 10.1186/s12909-017-1054-5]
Abstract
BACKGROUND In-training evaluation reports (ITERs) of student workplace-based learning are completed by clinical supervisors across various health disciplines. However, outside of medicine, the quality of submitted workplace-based assessments is largely uninvestigated. This study assessed the quality of ITERs in pharmacy and whether clinical supervisors could be trained to complete higher quality reports. METHODS A random sample of ITERs submitted in a pharmacy program during 2013-2014 was evaluated. These ITERs served as a historical control (control group 1) for comparison with ITERs submitted in 2015-2016 by clinical supervisors who participated in an interactive faculty development workshop (intervention group) and those who did not (control group 2). Two trained independent raters scored the ITERs using a previously validated nine-item scale assessing report quality, the Completed Clinical Evaluation Report Rating (CCERR). The scoring scale for each item is anchored at 1 ("not at all") and 5 ("exemplary"), with 3 categorized as "acceptable". RESULTS The mean CCERR score for reports completed after the workshop (22.9 ± 3.39) did not significantly improve compared with prospective control group 2 (22.7 ± 3.63, p = 0.84) and was lower than that of historical control group 1 (37.9 ± 8.21, p = 0.001). Mean item scores for individual CCERR items were below acceptable thresholds for 5 of the 9 domains in control group 1, including supervisor-documented evidence of specific examples to clearly explain weaknesses and concrete recommendations for student improvement. Mean item scores for individual CCERR items were below acceptable thresholds for 6 and 7 of the 9 domains in control group 2 and the intervention group, respectively. CONCLUSIONS This study is the first using CCERR to evaluate ITER quality outside of medicine. Findings demonstrate low baseline CCERR scores in a pharmacy program that were not demonstrably changed by a faculty development workshop; strategies are identified to augment future rater training.
Affiliation(s)
- Kerry Wilbur: College of Pharmacy, Qatar University, PO Box 2713, Doha, Qatar
40
Bartels J, Mooney CJ, Stone RT. Numerical versus narrative: A comparison between methods to measure medical student performance during clinical clerkships. Med Teach 2017; 39:1154-1158. [PMID: 28845738 DOI: 10.1080/0142159x.2017.1368467]
Abstract
BACKGROUND Medical school evaluations typically rely on both language-based narrative descriptions and psychometrically converted numeric scores to convey performance to the grading committee. We evaluated inter-rater reliability and correlation of numeric versus narrative evaluations for students on their Neurology Clerkship. DESIGN/METHODS Fifty Neurology Clerkship in-training evaluation reports completed by residents and faculty members at the University of Rochester School of Medicine were dissected into narrative and numeric components. Five clerkship grading committee members retrospectively gave new narrative scores (NNS) while blinded to original numeric scores (ONS). We calculated intra-class correlation coefficients (ICC) and their associated confidence intervals for the ONS and the NNS. In addition, we calculated the correlation between ONS and NNS. RESULTS The ICC was greater for the NNS (ICC = .88; 95% CI = .70-.94) than the ONS (ICC = .62; 95% CI = .40-.77). The Pearson correlation coefficient showed that the ONS and NNS were highly correlated (r = .81). CONCLUSIONS Narrative evaluations converted by a small group of experienced graders are at least as reliable as numeric scoring by individual evaluators. This could allow evaluators to focus their efforts on creating richer narratives of greater value to trainees.
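The abstract reports ICCs without naming the model used. As an illustrative sketch only (not necessarily the authors' choice of ICC form), a one-way random-effects ICC(1,1) can be computed directly from the between- and within-student mean squares; the score table below is invented.

```python
def icc_oneway(subjects):
    """One-way random-effects ICC(1,1) for a subjects-by-raters score table:
    (MS_between - MS_within) / (MS_between + (k - 1) * MS_within)."""
    n, k = len(subjects), len(subjects[0])
    grand = sum(sum(row) for row in subjects) / (n * k)
    means = [sum(row) / k for row in subjects]
    ms_between = k * sum((m - grand) ** 2 for m in means) / (n - 1)
    ms_within = sum(
        (x - m) ** 2 for row, m in zip(subjects, means) for x in row
    ) / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

# Five students, each scored by the same three graders (invented scores);
# high within-student agreement yields an ICC near 0.89 for this toy table.
table = [[4, 4, 5], [2, 3, 2], [5, 5, 5], [3, 3, 4], [1, 2, 1]]
icc = icc_oneway(table)
```

Two-way ICC variants (which model rater effects explicitly) would change the formula but follow the same mean-squares pattern.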
Affiliation(s)
- Josef Bartels: Family Medicine, WWAMI Region Practice & Research Network, Boise, ID, USA
- Christopher John Mooney: Office of Medical Education, University of Rochester School of Medicine and Dentistry, Rochester, NY, USA
- Robert Thompson Stone: Neurology, University of Rochester School of Medicine and Dentistry, Rochester, NY, USA
41
Ginsburg S, van der Vleuten CPM, Eva KW. The Hidden Value of Narrative Comments for Assessment: A Quantitative Reliability Analysis of Qualitative Data. Acad Med 2017; 92:1617-1621. [PMID: 28403004 DOI: 10.1097/acm.0000000000001669]
Abstract
PURPOSE In-training evaluation reports (ITERs) are ubiquitous in internal medicine (IM) residency. Written comments can provide a rich data source, yet are often overlooked. This study determined the reliability of using variable amounts of commentary to discriminate between residents. METHOD ITER comments from two cohorts of PGY-1s in IM at the University of Toronto (graduating 2010 and 2011; n = 46-48) were put into sets containing 15 to 16 residents. Parallel sets were created: one with comments from the full year and one with comments from only the first three assessments. Each set was rank-ordered by four internists external to the program between April 2014 and May 2015 (n = 24). Generalizability analyses and a decision study were performed. RESULTS For the full year of comments, reliability coefficients averaged across four rankers were G = 0.85 and G = 0.91 for the two cohorts. For a single ranker, G = 0.60 and G = 0.73. Using only the first three assessments, reliabilities remained high at G = 0.66 and G = 0.60 for a single ranker. In a decision study, if two internists ranked the first three assessments, reliability would be G = 0.80 and G = 0.75 for the two cohorts. CONCLUSIONS Using written comments to discriminate between residents can be extremely reliable even after only several reports are collected. This suggests a way to identify residents early on who may require attention. These findings contribute evidence to support the validity argument for using qualitative data for assessment.
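The decision-study figures above are consistent with the Spearman-Brown prophecy step commonly used in generalizability-theory decision studies, applied to the single-ranker coefficients for the first three assessments. A quick check, assuming that standard formula is what the decision study applied:

```python
def spearman_brown(g_single: float, k: int) -> float:
    """Projected reliability when k independent rankers' judgements are averaged,
    given a single-ranker generalizability coefficient g_single."""
    return k * g_single / (1 + (k - 1) * g_single)

# Single-ranker G for the first three assessments, per the abstract: 0.66 and 0.60.
cohort_a = spearman_brown(0.66, 2)  # ≈ 0.80, matching the reported two-ranker figure
cohort_b = spearman_brown(0.60, 2)  # ≈ 0.75, matching the reported two-ranker figure
```

That the projections land exactly on the published 0.80 and 0.75 supports reading the decision study as this standard reliability extrapolation over the number of rankers.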
Affiliation(s)
- Shiphra Ginsburg: professor, Department of Medicine, and scientist, Wilson Centre for Research in Education, Faculty of Medicine, University of Toronto, Toronto, Ontario, Canada
- C.P.M. van der Vleuten: professor of education, Department of Educational Development and Research, Faculty of Health, Medicine and Life Sciences, Maastricht University, Maastricht, the Netherlands
- K.W. Eva: associate director and senior scientist, Centre for Health Education Scholarship, and professor and director of educational research and scholarship, Faculty of Medicine, University of British Columbia, Vancouver, British Columbia, Canada
42
Sklar DP. Assessment Reconsidered: Finding the Balance Between Patient Safety, Student Ranking, and Feedback for Improved Learning. Acad Med 2017; 92:721-724. [PMID: 28557907 DOI: 10.1097/acm.0000000000001687]