1. van Enk A, MacDonald G, Hatala R, Gingerich A, Tam J. Not in the file: How competency committees work with undocumented contributions. Medical Education 2024. [PMID: 38899368; DOI: 10.1111/medu.15457]
Abstract
INTRODUCTION: Competence committees (CCs) centre their work around documentation of trainees' performance; undocumented contributions (i.e. informal, unrecorded material such as personal judgements, experiential anecdotes and contextual information) evoke suspicion even though they may play a role in decision making. This qualitative multiple case study incorporates insights from a social practice perspective on writing to examine the use of undocumented contributions by the CCs of two large post-graduate training programmes, one in a more procedural (MP) speciality and the other in a less procedural (LP) one.
METHODS: Data were collected via observations of meetings and semi-structured interviews with CC members. In the analysis, conversations were organised into triptychs of lead-up, undocumented contribution(s), and follow-up. We then created thick descriptions around the undocumented contributions, drawing on conversational context and interview data to assign possible motivations and significance.
RESULTS: We found no instances in which undocumented contributions superseded the contents of a trainee's file or stood in for missing documentation. The number of undocumented contributions varied between the MP CC (six instances over two meetings) and the LP CC (22 instances over three meetings). MP CC discussions emphasised Entrustable Professional Activity (EPA) observations, whereas LP CC members paid more attention to narrative data. The divergent orientations of the CCs (one adding an 'advis[ing]/guid[ing]' role, the other focusing simply on evaluation) offer the most compelling explanation. In lead-ups, undocumented contributions were prompted by missing and flawed documentation, conflicting evidence and documentation at odds with members' perceptions. Recognising other 'red flags' in documentation often required professional experience. In follow-ups, the purposes served by undocumented contributions varied with context and were difficult to generalise; we therefore provide deeper analysis of two vignettes to illustrate.
CONCLUSIONS: Our data suggest that undocumented contributions often serve best efforts to ground decisions in documentation. We encourage CC practices and policies to be rooted in more nuanced approaches to documentation.
Affiliation(s)
- Anneke van Enk
- Centre for Health Education Scholarship, University of British Columbia, Vancouver, BC, Canada
- Graham MacDonald
- Rehabilitation Sciences Program, University of British Columbia, Vancouver, BC, Canada
- Rose Hatala
- Division of General Internal Medicine, Department of Medicine, University of British Columbia, Vancouver, BC, Canada
- Andrea Gingerich
- Division of Medical Sciences, University of Northern British Columbia, Prince George, BC, Canada
- Jennifer Tam
- Division of Infectious Diseases, Department of Pediatrics, University of British Columbia, Vancouver, BC, Canada
2. Jarrett JB, Elmes AT, Keller E, Stowe CD, Daugherty KK. Evaluating the Strengths and Barriers of Competency-Based Education in the Health Professions. American Journal of Pharmaceutical Education 2024; 88:100709. [PMID: 38729616; DOI: 10.1016/j.ajpe.2024.100709]
Abstract
OBJECTIVE: This study aimed to define competency-based education (CBE) for pharmacy education and to describe how the strengths and barriers of CBE can support or hinder implementation.
FINDINGS: Sixty-five studies from a variety of health professions were included to define competency-based pharmacy education (CBPE) and to identify barriers and benefits from the learner, faculty, institution, and society perspectives. From the 7 identified thematic categories, a CBPE definition was developed: "Competency-based pharmacy education is an outcomes-based curricular model of an organized framework of competencies (knowledge, skills, attitudes) for pharmacists to meet health care and societal needs. This learner-centered curricular model aligns authentic teaching and learning strategies and assessment (emphasizing workplace assessment and quality feedback) while deemphasizing time."
SUMMARY: This article provides a definition of CBE for application within pharmacy education. The strengths and barriers of CBE were elucidated from other health professions' education literature. The identified implementation strengths and barriers inform discussion of what will support or hinder the implementation of CBE in pharmacy education.
Affiliation(s)
- Jennie B Jarrett
- University of Illinois Chicago College of Pharmacy, Department of Pharmacy Practice, Chicago, IL, USA
- Abigail T Elmes
- University of Illinois Chicago College of Pharmacy, Department of Pharmacy Practice, Chicago, IL, USA
- Eden Keller
- University of Illinois Chicago College of Pharmacy, Department of Pharmacy Practice, Chicago, IL, USA
- Cindy D Stowe
- University of Arkansas for Medical Sciences College of Pharmacy, Little Rock, AR, USA
3. Caretta-Weyer HA, Smirnova A, Barone MA, Frank JR, Hernandez-Boussard T, Levinson D, Lombarts KMJMH, Lomis KD, Martini A, Schumacher DJ, Turner DA, Schuh A. The Next Era of Assessment: Building a Trustworthy Assessment System. Perspectives on Medical Education 2024; 13:12-23. [PMID: 38274558; PMCID: PMC10809864; DOI: 10.5334/pme.1110]
Abstract
Assessment in medical education has evolved through a sequence of eras, each centering on distinct views and values. These eras include measurement (e.g., knowledge exams, objective structured clinical examinations), then judgments (e.g., workplace-based assessments, entrustable professional activities), and most recently systems or programmatic assessment, in which multiple types and sources of data are collected over time and combined by competency committees to ensure individual learners are ready to progress to the next stage of their training. Significantly less attention has been paid to the social context of assessment, which has led to an overall erosion of trust in assessment by a variety of stakeholders, including learners and frontline assessors. To move forward meaningfully, the authors assert that the reestablishment of trust should be foundational to the next era of assessment. In our actions and interventions, it is imperative that medical education leaders address and build trust in assessment at a systems level. To that end, the authors first review tenets on the social contextualization of assessment and its linkage to trust, and discuss the consequences should the current state of low trust continue. The authors then posit that trusting and trustworthy relationships can exist at individual as well as organizational and systems levels. Finally, the authors propose a framework for building trust at multiple levels in a future assessment system, one that invites and supports professional and human growth and has the potential to position assessment as a fundamental component of renegotiating the social contract between medical education and the health of the public.
Affiliation(s)
- Holly A. Caretta-Weyer
- Department of Emergency Medicine, Stanford University School of Medicine, Palo Alto, California, USA
- Alina Smirnova
- Department of Family Medicine, University of Calgary, Calgary, Alberta, Canada
- Kern Institute for the Transformation of Medical Education, Medical College of Wisconsin, Milwaukee, Wisconsin, USA
- Michael A. Barone
- NBME, Philadelphia, Pennsylvania, USA
- Johns Hopkins University School of Medicine, Baltimore, Maryland, USA
- Jason R. Frank
- Department of Emergency Medicine, University of Ottawa, Ottawa, Ontario, Canada
- Dana Levinson
- Josiah Macy Jr Foundation, Philadelphia, Pennsylvania, USA
- Kiki M. J. M. H. Lombarts
- Department of Medical Psychology, Amsterdam University Medical Centers, University of Amsterdam, the Netherlands
- Amsterdam Public Health research institute, Amsterdam, the Netherlands
- Kimberly D. Lomis
- Undergraduate Medical Education Innovations, American Medical Association, Chicago, Illinois, USA
- Abigail Martini
- Cincinnati Children’s Hospital Medical Center, Cincinnati, Ohio, USA
- Daniel J. Schumacher
- Division of Emergency Medicine, Cincinnati Children’s Hospital Medical Center/University of Cincinnati College of Medicine, Cincinnati, Ohio, USA
- David A. Turner
- American Board of Pediatrics, Chapel Hill, North Carolina, USA
- Abigail Schuh
- Division of Emergency Medicine, Medical College of Wisconsin, Milwaukee, Wisconsin, USA
4. Acker A, Leifso K, Crawford L, Braund H, Hawksby E, Hall AK, McEwen L, Dalgarno N, Dagnone JD. Lessons learned and new strategies for success: Evaluating the Implementation of Competency-Based Medical Education in Queen's Pediatrics. Paediatr Child Health 2023; 28:463-467. [PMID: 38638538; PMCID: PMC11022870; DOI: 10.1093/pch/pxad021]
Abstract
Objectives: In 2017, Queen's University launched Competency-Based Medical Education (CBME) across 29 programs simultaneously. Two years post-implementation, we asked key stakeholders (faculty, residents, and program leaders) within the Pediatrics program for their perspectives on and experiences with CBME so far.
Methods: Program leadership explicitly described the intended outcomes of implementing CBME. Focus groups and interviews were conducted with all stakeholders to describe the enacted implementation. The intended and enacted implementations were compared to provide insight into the adaptations needed for program improvement.
Results: Overall, stakeholders saw value in the concept of CBME. Residents felt they received more specific feedback and found monthly Competence Committee (CC) meetings and Academic Advisors helpful. Conversely, all stakeholders noted that the increased expectations had led to a feeling of assessment fatigue. Faculty found direct observation challenging, as was not knowing a resident's previous performance information. Residents wanted to see faculty initiate assessments and wanted improved transparency around progress and promotion decisions.
Discussion: The results provided insight into how well the intended outcomes had been achieved, as well as areas for improvement. Proposed adaptations included a need for increased direct observation and exploration of faculty access to residents' previous performance information. Education was provided on the performance expectations of residents and on how progress and promotion decisions are made. As well, "flex blocks" were created to help residents customize their training experience to meet their learning needs. The results of this study can be used to inform and guide implementation and adaptations in other programs and institutions.
Affiliation(s)
- Amy Acker
- Department of Pediatrics, Queen’s University, Kingston, Canada
- Kirk Leifso
- Department of Pediatrics, Queen’s University, Kingston, Canada
- Heather Braund
- Scholarship and Simulation Education, Office of Professional Development and Educational Scholarship, Faculty of Health Sciences, Queen’s University, Kingston, Canada
- Emily Hawksby
- Department of Pediatrics, Queen’s University, Kingston, Canada
- Andrew K Hall
- Department of Emergency Medicine, University of Ottawa, Ottawa, Canada
- Royal College of Physicians and Surgeons of Canada, Ottawa, Canada
- Laura McEwen
- Department of Pediatrics, Queen’s University, Kingston, Canada
- Postgraduate Medical Education, Queen’s University, Kingston, Canada
- Nancy Dalgarno
- Education Scholarship, Office of Professional Development and Educational Scholarship, Kingston, Canada
- Department of Biomedical and Molecular Sciences, Faculty of Health Sciences, Queen’s University, Kingston, Canada
- Jeffrey Damon Dagnone
- Postgraduate Medical Education, Queen’s University, Kingston, Canada
- Department of Emergency Medicine, Queen’s University, Kingston, Canada
5. Vennemeyer S, Kinnear B, Gao A, Zhu S, Nattam A, Knopp MI, Warm E, Wu DT. User-Centered Evaluation and Design Recommendations for an Internal Medicine Resident Competency Assessment Dashboard. Appl Clin Inform 2023; 14:996-1007. [PMID: 38122817; PMCID: PMC10733060; DOI: 10.1055/s-0043-1777103]
Abstract
OBJECTIVES: Clinical Competency Committee (CCC) members employ varied approaches to the review process. This makes it difficult to design a competency assessment dashboard that fits the needs of all members. This work details a user-centered evaluation of a dashboard currently utilized by the Internal Medicine Clinical Competency Committee (IM CCC) at the University of Cincinnati College of Medicine and generates design recommendations.
METHODS: Eleven members of the IM CCC participated in semistructured interviews with the research team. These interviews were recorded and transcribed for analysis. The three design research methods used in this study were process mapping (workflow diagrams), affinity diagramming, and a ranking experiment.
RESULTS: Through affinity diagramming, the research team identified and organized the opportunities for improvement in the current system expressed by study participants. These areas include a time-consuming preprocessing step, lack of integration of data from multiple sources, and different workflows for each step in the review process. Finally, the research team categorized nine dashboard components based on rankings provided by the participants.
CONCLUSION: We successfully conducted a user-centered evaluation of an IM CCC dashboard and generated four recommendations. Programs should integrate quantitative and qualitative feedback, create multiple views to display these data based on user roles, work with designers to create a usable, interpretable dashboard, and develop a strong informatics pipeline to manage the system. To our knowledge, this type of user-centered evaluation has rarely been attempted in the medical education domain. This study therefore provides best practices for other residency programs to evaluate current competency assessment tools and to develop new ones.
Affiliation(s)
- Scott Vennemeyer
- Department of Biomedical Informatics, College of Medicine, University of Cincinnati, Ohio, United States
- Benjamin Kinnear
- Department of Pediatrics, College of Medicine, University of Cincinnati, Ohio, United States
- Department of Internal Medicine, College of Medicine, University of Cincinnati, Ohio, United States
- Andy Gao
- Department of Biomedical Informatics, College of Medicine, University of Cincinnati, Ohio, United States
- Medical Sciences Baccalaureate Program, College of Medicine, University of Cincinnati, Ohio, United States
- Siyi Zhu
- Department of Biomedical Informatics, College of Medicine, University of Cincinnati, Ohio, United States
- School of Design, College of Design, Architecture, Art, and Planning (DAAP), University of Cincinnati, Ohio, United States
- Anunita Nattam
- Department of Biomedical Informatics, College of Medicine, University of Cincinnati, Ohio, United States
- Medical Sciences Baccalaureate Program, College of Medicine, University of Cincinnati, Ohio, United States
- Michelle I. Knopp
- Department of Internal Medicine, College of Medicine, University of Cincinnati, Ohio, United States
- Division of Hospital Medicine, Cincinnati Children's Hospital Medical Center, Department of Pediatrics, College of Medicine, University of Cincinnati, Ohio, United States
- Eric Warm
- Department of Internal Medicine, College of Medicine, University of Cincinnati, Ohio, United States
- Danny T.Y. Wu
- Department of Biomedical Informatics, College of Medicine, University of Cincinnati, Ohio, United States
- Department of Pediatrics, College of Medicine, University of Cincinnati, Ohio, United States
- Medical Sciences Baccalaureate Program, College of Medicine, University of Cincinnati, Ohio, United States
- School of Design, College of Design, Architecture, Art, and Planning (DAAP), University of Cincinnati, Ohio, United States
6. Kinnear B, Weber DE, Schumacher DJ, Edje L, Warm EJ, Anderson HL. Reconstructing Neurath's Ship: A Case Study in Reevaluating Equity in a Program of Assessment. Academic Medicine 2023; 98:S50-S56. [PMID: 37071695; DOI: 10.1097/acm.0000000000005249]
Abstract
Inequity in assessment has been described as a "wicked problem": an issue with complex roots, inherent tensions, and unclear solutions. To address inequity, health professions educators must critically examine their implicit understandings of truth and knowledge (i.e., their epistemologies) with regard to educational assessment before jumping to solutions. The authors use the analogy of a ship (a program of assessment) sailing on different seas (epistemologies) to describe their journey in seeking to improve equity in assessment. Should the education community repair the ship of assessment while sailing, or should the ship be scrapped and built anew? The authors share a case study of a well-developed internal medicine residency program of assessment and describe efforts to evaluate and enable equity using various epistemological lenses. They first used a postpositivist lens to evaluate whether the systems and strategies aligned with best practices, but found this did not capture important nuances of what equitable assessment entails. Next, they used a constructivist approach to improve stakeholder engagement, but found they still failed to question the inequitable assumptions inherent to their systems and strategies. Finally, they describe a shift to critical epistemologies, seeking to understand who experiences inequity and harm in order to dismantle inequitable systems and create better ones. The authors describe how each sea promoted different adaptations to their ship, and they challenge programs to sail through new epistemological waters as a starting point for making their own ships more equitable.
Affiliation(s)
- Benjamin Kinnear
- B. Kinnear is associate professor of internal medicine and pediatrics, Departments of Pediatrics and Internal Medicine, Cincinnati Children's Hospital Medical Center, University of Cincinnati College of Medicine, Cincinnati, Ohio; ORCID: https://orcid.org/0000-0003-0052-4130
- Danielle E Weber
- D.E. Weber is assistant professor of internal medicine and pediatrics, Departments of Pediatrics and Internal Medicine, Cincinnati Children's Hospital Medical Center, University of Cincinnati College of Medicine, Cincinnati, Ohio; ORCID: https://orcid.org/0000-0002-4857-6936
- Daniel J Schumacher
- D.J. Schumacher is tenured professor of pediatrics, Department of Pediatrics, Cincinnati Children's Hospital Medical Center, University of Cincinnati College of Medicine, Cincinnati, Ohio; ORCID: https://orcid.org/0000-0001-5507-8452
- Louito Edje
- L. Edje is professor of family and community medicine, Department of Medical Education and Family and Community Medicine, University of Cincinnati College of Medicine, Cincinnati, Ohio
- Eric J Warm
- E.J. Warm is professor of internal medicine and program director, Department of Internal Medicine, University of Cincinnati College of Medicine, Cincinnati, Ohio; ORCID: https://orcid.org/0000-0002-6088-2434
- Hannah L Anderson
- H.L. Anderson is clinical research associate, Department of Pediatrics, Children's Hospital of Philadelphia, Philadelphia, Pennsylvania; ORCID: https://orcid.org/0000-0002-9435-1535
7. Kinnear B, Santen SA, Kelleher M, Martini A, Ferris S, Edje L, Warm EJ, Schumacher DJ. How Does TIMELESS Training Impact Resident Motivation for Learning, Assessment, and Feedback? Evaluating a Competency-Based Time-Variable Training Pilot. Academic Medicine 2023; 98:828-835. [PMID: 36656286; DOI: 10.1097/acm.0000000000005147]
Abstract
PURPOSE: As competency-based medical education has become the predominant graduate medical education training model, interest in time-variable training has grown. Despite multiple competency-based time-variable training (CBTVT) pilots ongoing in the United States, little is known about how this training approach impacts learners. The authors aimed to explore how their CBTVT pilot program impacted resident motivation for learning, assessment, and feedback.
METHOD: The authors performed a qualitative educational case study on the Transitioning in Internal Medicine Education Leveraging Entrustment Scores Synthesis (TIMELESS) program at the University of Cincinnati from October 2020 through March 2022. Semistructured interviews were conducted with TIMELESS residents (n = 9) approximately every 6 months to capture experiences over time. The authors used inductive thematic analysis to develop themes and compared their findings with existing theories of learner motivation.
RESULTS: The authors developed 2 themes: TIMELESS had variable effects on residents' motivation for learning, and TIMELESS increased resident engagement with and awareness of the program of assessment. Participants reported increased motivation to learn and to seek assessment, though some felt a tension between performance (e.g., advancement through the residency program) and growth (e.g., improvement as a physician). Participants became more aware of the quality of the assessments they received, in part because TIMELESS increased the perceived stakes of assessment, and reported being more deliberate when assessing other residents.
CONCLUSIONS: Resident motivation for learning, assessment, and feedback was impacted in ways that the authors contextualize using current theories of learner motivation (i.e., goal orientation theory and attribution theory). Future research should investigate how interventions, such as coaching, guided learner reflection, or various CBTVT implementation strategies, can help keep learners oriented toward mastery learning rather than toward performance.
Affiliation(s)
- Benjamin Kinnear
- B. Kinnear is associate professor of internal medicine and pediatrics, Department of Pediatrics, University of Cincinnati College of Medicine, Cincinnati, Ohio; ORCID: https://orcid.org/0000-0003-0052-4130
- Sally A Santen
- S.A. Santen is professor of emergency medicine, Department of Emergency Medicine, Virginia Commonwealth University School of Medicine, Richmond, Virginia, and University of Cincinnati College of Medicine, Cincinnati, Ohio; ORCID: https://orcid.org/0000-0002-8327-8002
- Matthew Kelleher
- M. Kelleher is assistant professor of internal medicine and pediatrics, Department of Pediatrics, University of Cincinnati College of Medicine, Cincinnati, Ohio; ORCID: https://orcid.org/0000-0002-6400-1745
- Abigail Martini
- A. Martini is a clinical research coordinator with emergency medicine, Cincinnati Children's Hospital Medical Center, Cincinnati, Ohio
- Sarah Ferris
- S. Ferris is a research administrator, Clinical Trials Unit, Michigan Medicine Research, University of Michigan, Ann Arbor, Michigan
- Louito Edje
- L. Edje is professor of family and community medicine, Departments of Medical Education and of Family and Community Medicine, University of Cincinnati College of Medicine, Cincinnati, Ohio
- Eric J Warm
- E.J. Warm is professor of internal medicine and program director, Department of Internal Medicine, University of Cincinnati College of Medicine, Cincinnati, Ohio; ORCID: https://orcid.org/0000-0002-6088-2434
- Daniel J Schumacher
- D.J. Schumacher is professor of pediatrics, Cincinnati Children's Hospital Medical Center and Department of Pediatrics, University of Cincinnati College of Medicine, Cincinnati, Ohio; ORCID: https://orcid.org/0000-0001-5507-8452
8. Langhan ML, Stafford DEJ, Myers AL, Herman BE, Curran ML, Czaja AS, Turner DA, Weiss P, Mink R. Clinical competency committee perceptions of entrustable professional activities and their value in assessing fellows: A qualitative study of pediatric subspecialty program directors. Medical Teacher 2023; 45:650-657. [PMID: 36420760; DOI: 10.1080/0142159x.2022.2147054]
Abstract
OBJECTIVES: To examine the composition and processes of Clinical Competency Committees (CCCs) assigning entrustable professional activity (EPA) levels of supervision for pediatric subspecialty fellows, and to examine fellowship program director (FPD) perspectives about using EPAs to determine fellows' graduation readiness.
METHODS: A qualitative study was performed using one-on-one interviews with a purposeful sample of pediatric subspecialty FPDs, yielding a thematic analysis. Semi-structured interview guides were used for participants who self-identified as EPA users or non-users. Inductive analysis and coding were performed on transcripts until theoretical sufficiency was attained.
RESULTS: Twenty-eight FPDs were interviewed. There was significant variability in the composition and processes of CCCs across subspecialties. FPDs felt that CCCs intuitively understand what entrustment means, easing the application of level of supervision (LOS) scales and the reaching of consensus. FPDs perceived that EPAs provide a global assessment of fellows and are one tool for determining graduation readiness.
CONCLUSIONS: Although there was variability in the makeup and processes of CCCs across subspecialties, FPDs believe EPAs are intuitive and relatively easy to implement. Consensus can be reached easily using EPA-specific LOS scales focusing on entrustment. FPDs desire a better understanding of how EPAs should be used for graduation.
Affiliation(s)
- Melissa L Langhan
- Department of Pediatrics and Emergency Medicine, Section of Emergency Medicine, Yale University School of Medicine, New Haven, CT, USA
- Diane E J Stafford
- Department of Pediatrics, Division of Endocrinology, Stanford University School of Medicine, Stanford, CA, USA
- Angela L Myers
- Department of Pediatrics, Children's Mercy, Kansas City, University of Missouri-Kansas City School of Medicine, Kansas City, MO, USA
- Bruce E Herman
- Department of Pediatrics, University of Utah School of Medicine, Salt Lake City, UT, USA
- Megan L Curran
- Department of Pediatrics, Section of Rheumatology, University of Colorado School of Medicine, Aurora, CO, USA
- Angela S Czaja
- Department of Pediatrics, Section of Critical Care, University of Colorado School of Medicine, Aurora, CO, USA
- Pnina Weiss
- Department of Pediatrics, Section of Pulmonology, Allergy, Immunology and Sleep Medicine, Yale University School of Medicine, New Haven, CT, USA
- Richard Mink
- David Geffen School of Medicine at UCLA, Los Angeles, CA, USA
- Department of Pediatrics, Harbor-UCLA Medical Center and The Lundquist Institute for Biomedical Innovation, Torrance, CA, USA
9. Cheung WJ, Wagner N, Frank JR, Oswald A, Van Melle E, Skutovich A, Dalseg TR, Cooke LJ, Hall AK. Implementation of competence committees during the transition to CBME in Canada: A national fidelity-focused evaluation. Medical Teacher 2022; 44:781-789. [PMID: 35199617; DOI: 10.1080/0142159x.2022.2041191]
Abstract
PURPOSE: This study evaluated the fidelity of competence committee (CC) implementation in Canadian postgraduate specialist training programs during the transition to competency-based medical education (CBME).
METHODS: A national survey of CC chairs was distributed to all CBME training programs in November 2019. Survey questions were derived from guiding documents published by the Royal College of Physicians and Surgeons of Canada reflecting intended processes and design.
RESULTS: The response rate was 39% (113/293), with representation from all eligible disciplines. Committee size ranged from 3 to 20 members, 42% of programs included external members, and 20% included a resident representative. Most programs (72%) reported that a primary review and synthesis of resident assessment data occurs prior to the meeting, with some data reviewed collectively during meetings. When determining entrustable professional activity (EPA) achievement, most programs (53%) followed the national specialty guidelines closely, with some exceptions. Documented concerns about professionalism, EPA narrative comments, and EPA entrustment scores were weighted most heavily when determining resident progress decisions.
CONCLUSIONS: Heterogeneity in CC implementation likely reflects local adaptations but may also explain some of the variable challenges faced by programs during the transition to CBME. Our results offer educational leaders important fidelity data that can help inform the larger evaluation and transformation of CBME.
Affiliation(s)
- Warren J Cheung
- Department of Emergency Medicine, University of Ottawa, Ottawa, Canada
- Royal College of Physicians and Surgeons of Canada, Ottawa, Canada
- Natalie Wagner
- Office of Professional Development & Educational Scholarship and Department of Biomedical & Molecular Sciences, Queen's University, Kingston, Canada
- Jason R Frank
- Department of Emergency Medicine, University of Ottawa, Ottawa, Canada
- Royal College of Physicians and Surgeons of Canada, Ottawa, Canada
- Anna Oswald
- Royal College of Physicians and Surgeons of Canada, Ottawa, Canada
- Department of Medicine, University of Alberta, Edmonton, Canada
- Elaine Van Melle
- Royal College of Physicians and Surgeons of Canada, Ottawa, Canada
- Department of Family Medicine, Queen's University, Kingston, Canada
- Timothy R Dalseg
- Royal College of Physicians and Surgeons of Canada, Ottawa, Canada
- Department of Medicine, Division of Emergency Medicine, University of Toronto, Toronto, Canada
- Lara J Cooke
- Royal College of Physicians and Surgeons of Canada, Ottawa, Canada
- Department of Clinical Neurosciences, Cumming School of Medicine, University of Calgary, Calgary, Canada
- Andrew K Hall
- Department of Emergency Medicine, University of Ottawa, Ottawa, Canada
- Royal College of Physicians and Surgeons of Canada, Ottawa, Canada
10
Reimagining the Clinical Competency Committee to Enhance Education and Prepare for Competency-Based Time-Variable Advancement. J Gen Intern Med 2022; 37:2280-2290. [PMID: 35445932 PMCID: PMC9021365 DOI: 10.1007/s11606-022-07515-3] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Received: 09/15/2021] [Accepted: 03/25/2022] [Indexed: 12/01/2022]
Abstract
Assessing residents and clinical fellows is a high-stakes activity. Effective assessment is important throughout training so that identified areas of strength and weakness can guide educational planning to optimize outcomes. Assessment has historically been underemphasized, although medical education oversight organizations have strengthened requirements in recent years. Growing acceptance of competency-based medical education and its logical extension to competency-based time-variable (CB-TV) graduate medical education (GME) further highlights the importance of implementing effective evidence-based approaches to assessment. The Clinical Competency Committee (CCC) has emerged as a key programmatic structure in graduate medical education. In the context of launching a multi-specialty pilot of CB-TV GME in our health system, we have examined several programs' CCC processes and reviewed the relevant literature to propose enhancements to CCCs. We recommend that all CCCs fulfill three core goals, regularly applied to every GME trainee: (1) discern and describe the resident's developmental status to individualize education, (2) determine readiness for unsupervised practice, and (3) foster self-assessment ability. We integrate the literature and observations from GME program CCCs in our institutions to evaluate how current CCC processes support or undermine these goals. Obstacles and key enablers are identified. Finally, we recommend ways to achieve the stated goals, including the following: (1) assess and promote the development of competency in all trainees, not just outliers, through a shared model of assessment and competency-based advancement; (2) strengthen CCC assessment processes to determine trainee readiness for independent practice; and (3) promote trainee reflection and informed self-assessment. The importance of coaching for competency, robust workplace-based assessments, feedback, and co-production of individualized learning plans is emphasized.
Individual programs and their CCCs must strengthen assessment tools and frameworks to realize the potential of competency-oriented education.
11
Westein MPD, Koster AS, Daelmans HEM, Collares CF, Bouvy ML, Kusurkar RA. Validity evidence for summative performance evaluations in postgraduate community pharmacy education. CURRENTS IN PHARMACY TEACHING & LEARNING 2022; 14:701-711. [PMID: 35809899 DOI: 10.1016/j.cptl.2022.06.014] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 02/15/2021] [Revised: 05/30/2022] [Accepted: 06/09/2022] [Indexed: 06/15/2023]
Abstract
INTRODUCTION Workplace-based assessment of competencies is complex. In this study, the validity of summative performance evaluations (SPEs) made by supervisors in a two-year longitudinal supervisor-trainee relationship was investigated in a postgraduate community pharmacy specialization program in the Netherlands. The construct of competence was based on an adapted version of the 2005 Canadian Medical Education Directive for Specialists (CanMEDS) framework. METHODS The study had a case study design. Both quantitative and qualitative data were collected. The year 1 and year 2 SPE scores of 342 trainees were analyzed using confirmatory factor analysis and generalizability theory. Semi-structured interviews were held with 15 supervisors and the program director to analyze the inferences they made and the impact of SPE scores on the decision-making process. RESULTS A good model fit was found for the adapted CanMEDS based seven-factor construct. The reliability/precision of the SPE measurements could not be completely isolated, as every trainee was trained in one pharmacy and evaluated by one supervisor. Qualitative analysis revealed that supervisors varied in their standards for scoring competencies. Some supervisors were reluctant to fail trainees. The competency scores had little impact on the high-stakes decision made by the program director. CONCLUSIONS The adapted CanMEDS competency framework provided a valid structure to measure competence. The reliability/precision of SPE measurements could not be established and the SPE measurements provided limited input for the decision-making process. Indications of a shadow assessment system in the pharmacies need further investigation.
Affiliation(s)
- Marnix P D Westein
- Department of Pharmaceutical Sciences, Utrecht University, Royal Dutch Pharmacists Association (KNMP), Research in Education, Faculty of Medicine Vrije Universiteit, Amsterdam, the Netherlands.
- Andries S Koster
- Department of Pharmaceutical Sciences, Utrecht University, Utrecht, the Netherlands.
- Hester E M Daelmans
- Master's programme of Medicine, Faculty of Medicine Vrije Universiteit, Amsterdam, the Netherlands.
- Carlos F Collares
- Maastricht University Faculty of Health Medicine and Life Sciences, Maastricht, the Netherlands.
- Marcel L Bouvy
- Department of Pharmaceutical Sciences, Utrecht University, Utrecht, the Netherlands.
- Rashmi A Kusurkar
- Research in Education, Faculty of Medicine Vrije Universiteit, Amsterdam, the Netherlands.
12
Yilmaz Y, Jurado Nunez A, Ariaeinejad A, Lee M, Sherbino J, Chan TM. Harnessing Natural Language Processing to Support Decisions Around Workplace-Based Assessment: Machine Learning Study of Competency-Based Medical Education. JMIR MEDICAL EDUCATION 2022; 8:e30537. [PMID: 35622398 PMCID: PMC9187970 DOI: 10.2196/30537] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Received: 05/19/2021] [Revised: 12/05/2021] [Accepted: 04/30/2022] [Indexed: 06/15/2023]
Abstract
BACKGROUND Residents receive a numeric performance rating (eg, 1-7 scoring scale) along with narrative (ie, qualitative) feedback based on their performance in each workplace-based assessment (WBA). Aggregated qualitative data from WBA can be overwhelming to process and fairly adjudicate as part of a global decision about learner competence. Current approaches with qualitative data require a human rater to maintain attention and appropriately weigh various data inputs within the constraints of working memory before rendering a global judgment of performance. OBJECTIVE This study explores natural language processing (NLP) and machine learning (ML) applications for identifying trainees at risk using a large WBA narrative comment data set associated with numerical ratings. METHODS NLP was performed retrospectively on a complete data set of narrative comments (ie, text-based feedback to residents based on their performance on a task) derived from WBAs completed by faculty members from multiple hospitals associated with a single, large, residency program at McMaster University, Canada. Narrative comments were vectorized to quantitative ratings using the bag-of-n-grams technique with 3 input types: unigrams, bigrams, and trigrams. Supervised ML models using linear regression were trained with the quantitative ratings, performed binary classification, and output a prediction of whether a resident fell into the category of at risk or not at risk. Sensitivity, specificity, and accuracy metrics are reported. RESULTS The database comprised 7199 unique direct observation assessments, containing both narrative comments and a rating between 3 and 7 in an imbalanced distribution (scores 3-5: 726 ratings; scores 6-7: 4871 ratings). A total of 141 unique raters from 5 different hospitals and 45 unique residents participated over the course of 5 academic years.
When comparing the 3 different input types for diagnosing if a trainee would be rated low (ie, 1-5) or high (ie, 6 or 7), our accuracy for trigrams was 87%, bigrams 86%, and unigrams 82%. We also found that all 3 input types had better prediction accuracy when using a bimodal cut (eg, lower or higher) compared with predicting performance along the full 7-point rating scale (50%-52%). CONCLUSIONS The ML models can accurately identify underperforming residents via narrative comments provided for WBAs. The words generated in WBAs can be a worthy data set to augment human decisions for educators tasked with processing large volumes of narrative assessments.
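The bag-of-n-grams vectorization this abstract describes can be sketched as follows. This is a minimal, standard-library-only illustration under stated assumptions: the study's actual tokenizer, vocabulary handling, and linear model are not specified here, and the example comments are invented, not drawn from the study's data set.

```python
from collections import Counter

def bag_of_ngrams(text, n):
    """Count contiguous word n-grams in a comment (n=1 unigrams, n=2 bigrams, n=3 trigrams)."""
    tokens = text.lower().split()
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def vectorize(comments, n):
    """Map each comment to a fixed-length count vector over a shared n-gram vocabulary."""
    counts = [bag_of_ngrams(c, n) for c in comments]
    vocab = sorted({gram for c in counts for gram in c})
    return vocab, [[c[gram] for gram in vocab] for c in counts]

# Hypothetical WBA narrative comments (illustration only)
comments = [
    "resident communicates clearly with patients",
    "resident struggles with time management",
]
vocab, vectors = vectorize(comments, 2)  # bigram input type
```

A linear model (the study reports linear regression with a binary at-risk/not-at-risk cut) would then be fit on these count vectors against the numeric ratings.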
Affiliation(s)
- Yusuf Yilmaz
- McMaster Education Research, Innovation, and Theory Program, Faculty of Health Sciences, McMaster University, Hamilton, ON, Canada
- Department of Medical Education, Ege University, Izmir, Turkey
- Program for Faculty Development, Office of Continuing Professional Development, McMaster University, Hamilton, ON, Canada
- Department of Medicine, Faculty of Health Sciences, McMaster University, Hamilton, ON, Canada
- Alma Jurado Nunez
- Department of Medicine and Masters in eHealth Program, McMaster University, Hamilton, ON, Canada
- Ali Ariaeinejad
- Department of Medicine and Masters in eHealth Program, McMaster University, Hamilton, ON, Canada
- Mark Lee
- McMaster Education Research, Innovation, and Theory Program, Faculty of Health Sciences, McMaster University, Hamilton, ON, Canada
- Jonathan Sherbino
- McMaster Education Research, Innovation, and Theory Program, Faculty of Health Sciences, McMaster University, Hamilton, ON, Canada
- Division of Emergency Medicine, Department of Medicine, Faculty of Health Sciences, McMaster University, Hamilton, ON, Canada
- Division of Education and Innovation, Department of Medicine, Faculty of Health Sciences, McMaster University, Hamilton, ON, Canada
- Teresa M Chan
- McMaster Education Research, Innovation, and Theory Program, Faculty of Health Sciences, McMaster University, Hamilton, ON, Canada
- Program for Faculty Development, Office of Continuing Professional Development, McMaster University, Hamilton, ON, Canada
- Division of Emergency Medicine, Department of Medicine, Faculty of Health Sciences, McMaster University, Hamilton, ON, Canada
- Division of Education and Innovation, Department of Medicine, Faculty of Health Sciences, McMaster University, Hamilton, ON, Canada
13
Brown DR, Moeller JJ, Grbic D, Biskobing DM, Crowe R, Cutrer WB, Green ML, Obeso VT, Wagner DP, Warren JB, Yingling SL, Andriole DA. Entrustment Decision Making in the Core Entrustable Professional Activities: Results of a Multi-Institutional Study. ACADEMIC MEDICINE : JOURNAL OF THE ASSOCIATION OF AMERICAN MEDICAL COLLEGES 2022; 97:536-543. [PMID: 34261864 DOI: 10.1097/acm.0000000000004242] [Citation(s) in RCA: 10] [Impact Index Per Article: 5.0] [Indexed: 06/13/2023]
Abstract
PURPOSE In 2014, the Association of American Medical Colleges defined 13 Core Entrustable Professional Activities (EPAs) that all graduating students should be ready to do with indirect supervision upon entering residency and commissioned a 10-school, 5-year pilot to test implementing the Core EPAs framework. In 2019, pilot schools convened trained entrustment groups (TEGs) to review assessment data and render theoretical summative entrustment decisions for class of 2019 graduates. Results were examined to determine the extent to which entrustment decisions could be made and the nature of these decisions. METHOD For each EPA considered (4-13 per student), TEGs recorded an entrustment determination (ready, progressing but not yet ready, evidence against student progressing, could not make a decision); confidence in that determination (none, low, moderate, high); and the number of workplace-based assessments (WBAs) considered (0 to >15) per determination. These individual student-level data were de-identified and merged into a multischool database; chi-square analysis tested the significance of associations between variables. RESULTS The 2,415 EPA-specific determinations (for 349 students by 4 participating schools) resulted in a decision of ready (n = 997/2,415; 41.3%), progressing but not yet ready (n = 558/2,415; 23.1%), or evidence against student progression (n = 175/2,415; 7.2%). No decision could be made for the remaining 28.4% (685/2,415), generally for lack of data. Entrustment determinations' distribution varied across EPAs (chi-square P < .001) and, for 10/13 EPAs, WBA availability was associated with making (vs not making) entrustment decisions (each chi-square P < .05). CONCLUSIONS TEGs were able to make many decisions about readiness for indirect supervision; yet less than half of determinations resulted in a decision of readiness to perform this EPA with indirect supervision.
More work is needed at the 10 schools to enable authentic summative entrustment in the Core EPAs framework.
Affiliation(s)
- David R Brown
- D.R. Brown is professor, chief, Division of Family and Community Medicine, and interim chair, Department of Humanities, Health, and Society, Florida International University Herbert Wertheim College of Medicine, Miami, Florida; ORCID: http://orcid.org/0000-0002-5361-6664
- Jeremy J Moeller
- J.J. Moeller is associate professor and residency program director, Department of Neurology, Yale University School of Medicine, New Haven, Connecticut; ORCID: https://orcid.org/0000-0002-6135-5572
- Douglas Grbic
- D. Grbic is lead research analyst, Medical Education Research, Association of American Medical Colleges, Washington, DC
- Diane M Biskobing
- D.M. Biskobing is professor of medicine and associate dean of medical education, Virginia Commonwealth University School of Medicine, Richmond, Virginia
- Ruth Crowe
- R. Crowe is director of integrated clinical skills, director of practice of medicine, Office of Medical Education, and associate professor of medicine, New York University Grossman School of Medicine, New York, New York
- William B Cutrer
- W.B. Cutrer is associate dean for undergraduate medical education and associate professor of pediatrics (critical care medicine), Vanderbilt University School of Medicine, Nashville, Tennessee; ORCID: https://orcid.org/0000-0003-1538-9779
- Michael L Green
- M.L. Green is professor of medicine and director of student assessment, Teaching and Learning Center, Yale University School of Medicine, New Haven, Connecticut
- Vivian T Obeso
- V.T. Obeso is associate dean for curriculum and medical education and associate professor, Division of Internal Medicine, Department of Translational Medicine, Florida International University Herbert Wertheim College of Medicine, Miami, Florida
- Dianne P Wagner
- D.P. Wagner is associate dean for undergraduate medical education and professor of medicine, Michigan State University College of Human Medicine, East Lansing, Michigan
- Jamie B Warren
- J.B. Warren is associate professor, Division of Neonatology, and clinical vice chair, Department of Pediatrics, Oregon Health & Science University, Portland, Oregon; ORCID: https://orcid.org/0000-0003-4422-1502
- Sandra L Yingling
- S.L. Yingling is associate dean for educational planning and quality improvement, University of Illinois College of Medicine (Chicago, Peoria, Rockford, and Urbana), Chicago, Illinois
- Dorothy A Andriole
- D.A. Andriole is senior director, Medical Education Research, Association of American Medical Colleges, Washington, DC; ORCID: https://orcid.org/0000-0001-8902-1227
14
Schumacher DJ, Teunissen PW, Kinnear B, Driessen EW. Assessing trainee performance: ensuring learner control, supporting development, and maximizing assessment moments. Eur J Pediatr 2022; 181:435-439. [PMID: 34286373 DOI: 10.1007/s00431-021-04182-0] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 06/01/2021] [Revised: 06/22/2021] [Accepted: 06/23/2021] [Indexed: 11/28/2022]
Abstract
In this article, the authors provide practical guidance for frontline supervisors' efforts to assess trainee performance. They focus on three areas. First, they argue the importance of promoting learner control in the assessment process, noting that providing learners agency and control can shift the stakes of assessment from high to low and promote a safe environment that facilitates learning. Second, they posit that assessment should be used to support continued development by promoting a relational partnership between trainees and supervisors. This partnership allows supervisors to reinforce desirable aspects of performance, provide real-time support for deficient areas of performance, and sequence learning with the appropriate amount of scaffolding to push trainees from competence (what they can do alone) to capability (what they are able to do with support). Finally, they advocate the importance of optimizing the use of written comments and direct observation while also recognizing that performance is interdependent in efforts to maximize assessment moments. Conclusion: Using best practices in trainee assessment can help trainees take next steps in their development in a learner-centered partnership with clinical supervisors. What is Known: • Many pediatricians are asked to assess the performance of medical students and residents they work with but few have received formal training in assessment. What is New: • This article presents evidence-based best practices for assessing trainees, including giving trainees agency in the assessment process and focusing on helping trainees take next steps in their development.
Affiliation(s)
- Daniel J Schumacher
- Division of Emergency Medicine, Cincinnati Children's Hospital Medical Center, and Department of Pediatrics, University of Cincinnati College of Medicine, Cincinnati, OH, USA.
- Pim W Teunissen
- School of Health Professions Education (SHE), Faculty of Health Medicine and Life Sciences and Gynecologist, Department of Obstetrics and Gynecology, Maastricht University Medical Center, Maastricht, the Netherlands
- Benjamin Kinnear
- Internal Medicine and Pediatrics, Division of Hospital Medicine, Department of Pediatrics, Cincinnati Children's Hospital Medical Center, University of Cincinnati College of Medicine, Cincinnati, OH, USA
- Erik W Driessen
- School of Health Professions Education (SHE), Faculty of Health Medicine and Life Sciences, Maastricht University, Maastricht, the Netherlands
15
The effect of gender dyads on the quality of narrative assessments of general surgery trainees. Am J Surg 2021; 224:179-184. [PMID: 34911639 DOI: 10.1016/j.amjsurg.2021.12.001] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Received: 07/15/2021] [Revised: 11/30/2021] [Accepted: 12/01/2021] [Indexed: 01/13/2023]
Abstract
BACKGROUND Prior studies have shown that gender can influence how learners are assessed and the feedback they receive. We investigated the quality of faculty narrative comments in general surgery trainee evaluation using trainee-assessor gender dyads. METHODS Narrative assessments of surgical trainees at the University of British Columbia were collected and rated using the McMaster Narrative Comment Rating Scale (MNCRS). Variables from the MNCRS were entered into a generalized linear mixed model to explore the impact of gender dyads on the quality of narrative feedback. RESULTS 2,469 assessments were collected. Women assessors tended to give higher-quality comments than men assessors (ps < 0.05). Comments from men assessors to women trainees were significantly more positive than comments from men assessors to men trainees (p = 0.02). Men assessors also gave women trainees a higher proportion of reinforcing (vs corrective) comments than they gave men trainees (p < 0.01). CONCLUSIONS There are significant differences in the quality of faculty feedback to trainees by gender dyad. A range of solutions to improve feedback quality and reduce these differences are discussed.
16
Tavares W, Hodwitz K, Rowland P, Ng S, Kuper A, Friesen F, Shwetz K, Brydges R. Implicit and inferred: on the philosophical positions informing assessment science. ADVANCES IN HEALTH SCIENCES EDUCATION : THEORY AND PRACTICE 2021; 26:1597-1623. [PMID: 34370126 DOI: 10.1007/s10459-021-10063-w] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Received: 01/04/2021] [Accepted: 07/25/2021] [Indexed: 06/13/2023]
Abstract
Assessment practices have been increasingly informed by a range of philosophical positions. While generally beneficial, the addition of options can lead to misalignment in the philosophical assumptions associated with different features of assessment (e.g., the nature of constructs and competence, ways of assessing, validation approaches). Such incompatibility can threaten the quality and defensibility of researchers' claims, especially when left implicit. We investigated how authors state and use their philosophical positions when designing and reporting on performance-based assessments (PBA) of intrinsic roles, as well as the (in)compatibility of assumptions across assessment features. Using a representative sample of studies examining PBA of intrinsic roles, we used qualitative content analysis to extract data on how authors enacted their philosophical positions across three key assessment features: (1) construct conceptualizations, (2) assessment activities, and (3) validation methods. We also examined patterns in philosophical positioning across features and studies. In reviewing 32 papers from established peer-reviewed journals, we found (a) authors rarely reported their philosophical positions, meaning underlying assumptions could only be inferred; (b) authors approached features of assessment in variable ways that could be informed by or associated with different philosophical assumptions; (c) we experienced uncertainty in determining (in)compatibility of philosophical assumptions across features. Authors' philosophical positions were often vague or absent in the selected contemporary assessment literature. Leaving such details implicit may lead to misinterpretation by knowledge users wishing to implement, build on, or evaluate the work. As such, assessing claims, quality and defensibility, may increasingly depend more on who is interpreting, rather than what is being interpreted.
Affiliation(s)
- Walter Tavares
- The Wilson Centre, Temerty Faculty of Medicine, Department of Medicine, Institute for Health Policy, Management and Evaluation, University of Toronto/University Health Network, Toronto, Ontario, Canada.
- Kathryn Hodwitz
- Li Ka Shing Knowledge Institute, St. Michael's Hospital, Toronto, Ontario, Canada
- Paula Rowland
- The Wilson Centre, Temerty Faculty of Medicine, Department of Occupational Therapy and Occupational Science, University of Toronto/University Health Network, Toronto, Ontario, Canada
- Stella Ng
- The Wilson Centre, Temerty Faculty of Medicine, Department of Speech-Language Pathology, Temerty Faculty of Medicine, The Wilson Centre, University of Toronto, Centre for Faculty Development, Unity Health Toronto, Toronto, Ontario, Canada
- Ayelet Kuper
- The Wilson Centre, University Health Network/University of Toronto, Division of General Internal Medicine, Sunnybrook Health Sciences Centre, Department of Medicine, Faculty of Medicine, University of Toronto, Toronto, Ontario, Canada
- Farah Friesen
- Centre for Faculty Development, Temerty Faculty of Medicine, University of Toronto at Unity Health Toronto, Toronto, Ontario, Canada
- Katherine Shwetz
- Department of English, University of Toronto, Toronto, Ontario, Canada
- Ryan Brydges
- The Wilson Centre, Temerty Faculty of Medicine, Department of Medicine, Unity Health Toronto, University of Toronto, Toronto, Ontario, Canada
17
Anderson HL, Kurtz J, West DC. Implementation and Use of Workplace-Based Assessment in Clinical Learning Environments: A Scoping Review. ACADEMIC MEDICINE : JOURNAL OF THE ASSOCIATION OF AMERICAN MEDICAL COLLEGES 2021; 96:S164-S174. [PMID: 34406132 DOI: 10.1097/acm.0000000000004366] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Indexed: 06/13/2023]
Abstract
PURPOSE Workplace-based assessment (WBA) serves a critical role in supporting competency-based medical education (CBME) by providing assessment data to inform competency decisions and support learning. Many WBA systems have been developed, but little is known about how to effectively implement WBA. Filling this gap is important for creating suitable and beneficial assessment processes that support large-scale use of CBME. As a step toward filling this gap, the authors describe what is known about WBA implementation and use to identify knowledge gaps and future directions. METHOD The authors used Arksey and O'Malley's 6-stage scoping review framework to conduct the review, including: (1) identifying the research question; (2) identifying relevant studies; (3) study selection; (4) charting the data; (5) collating, summarizing, and reporting the results; and (6) consulting with relevant stakeholders. RESULTS In 2019-2020, the authors searched and screened 726 papers for eligibility using defined inclusion and exclusion criteria. One hundred sixty-three met inclusion criteria. The authors identified 5 themes in their analysis: (1) Many WBA tools and programs have been implemented, and barriers are common across fields and specialties; (2) Theoretical perspectives emphasize the need for data-driven implementation strategies; (3) User perceptions of WBA vary and are often dependent on implementation factors; (4) Technology solutions could provide useful tools to support WBA; and (5) Many areas of future research and innovation remain. CONCLUSIONS Knowledge of WBA as an implemented practice to support CBME remains constrained. To remove these constraints, future research should aim to generate generalizable knowledge on WBA implementation and use, address implementation factors, and investigate remaining knowledge gaps.
Affiliation(s)
- Hannah L Anderson
- H.L. Anderson is research associate, Department of Pediatrics, The Children's Hospital of Philadelphia, Philadelphia, Pennsylvania; ORCID: http://orcid.org/0000-0002-9435-1535
- Joshua Kurtz
- J. Kurtz is a first-year resident, Department of Pediatrics, The Children's Hospital of Philadelphia, Philadelphia, Pennsylvania
- Daniel C West
- D.C. West is professor of pediatrics, The Perelman School of Medicine at the University of Pennsylvania, and associate chair for education and senior director of medical education, Department of Pediatrics, The Children's Hospital of Philadelphia, Philadelphia, Pennsylvania; ORCID: http://orcid.org/0000-0002-0909-4213
18
Heeneman S, de Jong LH, Dawson LJ, Wilkinson TJ, Ryan A, Tait GR, Rice N, Torre D, Freeman A, van der Vleuten CPM. Ottawa 2020 consensus statement for programmatic assessment - 1. Agreement on the principles. MEDICAL TEACHER 2021; 43:1139-1148. [PMID: 34344274 DOI: 10.1080/0142159x.2021.1957088] [Citation(s) in RCA: 32] [Impact Index Per Article: 10.7] [Indexed: 06/13/2023]
Abstract
INTRODUCTION In the Ottawa 2018 Consensus framework for good assessment, a set of criteria was presented for systems of assessment. Currently, programmatic assessment is being established in an increasing number of programmes. In this Ottawa 2020 consensus statement for programmatic assessment, insights from practice and research are used to define the principles of programmatic assessment. METHODS For fifteen programmes in health professions education affiliated with members of an expert group (n = 20), an inventory was completed for the perceived components, rationale, and importance of a programmatic assessment design. Input from attendees of a programmatic assessment workshop and symposium at the 2020 Ottawa conference was included. The outcome is discussed in relation to current theory and research. RESULTS AND DISCUSSION Twelve principles are presented that are considered important and recognisable facets of programmatic assessment. Overall, these principles were used in curriculum and assessment design, albeit with a range of approaches and rigour, suggesting that programmatic assessment is an achievable education and assessment model, embedded in both practice and research. Sharing knowledge of how programmatic assessment is being operationalised may help support educators charting their own implementation journey of programmatic assessment in their respective programmes.
Affiliation(s)
- Sylvia Heeneman
- Department of Pathology, School of Health Profession Education, Maastricht University, Maastricht, The Netherlands
- Lubberta H de Jong
- Department of Population Health Sciences, Faculty of Veterinary Medicine, Utrecht University, Utrecht, The Netherlands
- Luke J Dawson
- School of Dentistry, University of Liverpool, Liverpool, UK
- Tim J Wilkinson
- Education Unit, University of Otago, Christchurch, New Zealand
- Anna Ryan
- Department of Medical Education, Melbourne Medical School, University of Melbourne, Melbourne, Australia
- Glendon R Tait
- MD Program, Department of Psychiatry, and The Wilson Centre, University of Toronto, Toronto, Canada
- Neil Rice
- College of Medicine and Health, University of Exeter Medical School, Exeter, UK
- Dario Torre
- Department of Medicine, Uniformed Services University of Health Sciences, Bethesda, MD, USA
- Adrian Freeman
- College of Medicine and Health, University of Exeter Medical School, Exeter, UK
- Cees P M van der Vleuten
- Department of Educational Development and Research, School of Health Profession Education, Maastricht University, Maastricht, The Netherlands
19
Acai A, Cupido N, Weavers A, Saperson K, Ladhani M, Cameron S, Sonnadara RR. Competence committees: The steep climb from concept to implementation. MEDICAL EDUCATION 2021; 55:1067-1077. [PMID: 34152027 DOI: 10.1111/medu.14585] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Received: 11/16/2020] [Revised: 05/07/2021] [Accepted: 06/03/2021] [Indexed: 06/13/2023]
Abstract
INTRODUCTION Competence committees (CCs) are groups of educators tasked with reviewing resident progress throughout their training, making decisions regarding the achievement of Entrustable Professional Activities and recommendations regarding promotion and remediation. CCs have been mandated as part of competency-based medical education programmes worldwide; however, there has yet to be a thorough examination of the implementation challenges they face and how this impacts their functioning and decision-making processes. This study examined CC implementation at a Canadian institution, documenting the shared and unique challenges that CCs faced and overcame over a 3-year period. METHODS This study consisted of three phases, which were conceptually and analytically linked using Moran-Ellis and colleagues' notion of 'following a thread.' Phase 1 examined the early perceptions and experiences of 30 key informants using a survey and semi-structured interviews. Phase 2 provided insight into CCs' operations through a survey sent to 35 CC chairs 1 year post-implementation. Phase 3 invited 20 CC members to participate in semi-structured interviews to follow up on initial themes 2 years post-implementation. Detailed observation notes from 16 CC meetings across nine disciplines were used to corroborate the findings from each phase. RESULTS Response rates in each phase were 83% (n = 25), 43% (n = 15) and 60% (n = 12), respectively. Despite the high degree of support for CCs among faculty and resident members, several ongoing challenges were highlighted: adapting to programme size, optimising membership, engaging residents, maintaining capacity among members, sharing and aggregating data, and developing a clear mandate. DISCUSSION Findings of this study reinforce the importance of resident engagement and information sharing between disciplines.
Challenges faced by CCs are discussed in relation to the existing literature to inform a better understanding of group decision-making processes in medical education. Future research could compare implementation practices across sites and explore which adaptations lead to better or worse decision-making outcomes.
Affiliation(s)
- Anita Acai: Department of Psychology, Neuroscience and Behaviour, McMaster University, Hamilton, ON, Canada; Office of Education Science, Department of Surgery, McMaster University, Hamilton, ON, Canada; Department of Psychiatry and Behavioural Neurosciences, McMaster University, Hamilton, ON, Canada
- Nathan Cupido: Office of Education Science, Department of Surgery, McMaster University, Hamilton, ON, Canada
- Aliana Weavers: Office of Education Science, Department of Surgery, McMaster University, Hamilton, ON, Canada
- Karen Saperson: Department of Psychiatry and Behavioural Neurosciences, McMaster University, Hamilton, ON, Canada
- Moyez Ladhani: McMaster Postgraduate Medical Education Office, McMaster University, Hamilton, ON, Canada; Department of Pediatrics, McMaster University, Hamilton, ON, Canada
- Sharon Cameron: McMaster Postgraduate Medical Education Office, McMaster University, Hamilton, ON, Canada
- Ranil R Sonnadara: Office of Education Science, Department of Surgery, McMaster University, Hamilton, ON, Canada; Department of Surgery, University of Toronto, Toronto, ON, Canada
20
Ten Cate O, Balmer DF, Caretta-Weyer H, Hatala R, Hennus MP, West DC. Entrustable Professional Activities and Entrustment Decision Making: A Development and Research Agenda for the Next Decade. Acad Med 2021; 96:S96-S104. [PMID: 34183610] [DOI: 10.1097/acm.0000000000004106] [Citation(s) in RCA: 21] [Impact Index Per Article: 7.0] [Indexed: 06/13/2023]
Abstract
To establish a research and development agenda for Entrustable Professional Activities (EPAs) for the coming decade, the authors, all active in this area of investigation, reviewed recent research papers, seeking recommendations for future research. They pooled their knowledge and experience to identify 3 levels of potential research and development: the micro level of learning and teaching; the meso level of institutions, programs, and specialty domains; and the macro level of regional, national, and international dynamics. Within these levels, the authors categorized their recommendations for research and development. The authors identified 14 discrete themes, each including multiple questions or issues for potential exploration, that range from foundational and conceptual to practical. Much research to date has focused on a variety of issues regarding development and early implementation of EPAs. Future research should focus on large-scale implementation of EPAs to support competency-based medical education (CBME) and on its consequences at the 3 levels. In addition, emerging from the implementation phase, the authors call for rigorous studies focusing on conceptual issues. These issues include the nature of entrustment decisions and their relationship with education and learner progress and the use of EPAs across boundaries of training phases, disciplines and professions, including continuing professional development. International studies evaluating the value of EPAs across countries are another important consideration. Future studies should also remain alert for unintended consequences of the use of EPAs. EPAs were conceptualized to support CBME in its endeavor to improve outcomes of education and patient care, prompting creation of this agenda.
Affiliation(s)
- Olle Ten Cate: professor of medical education and senior scientist, Center for Research and Development of Education, University Medical Center Utrecht, Utrecht, the Netherlands; ORCID: https://orcid.org/0000-0002-6379-8780
- Dorene F Balmer: associate professor, Department of Pediatrics, The Children's Hospital of Philadelphia and University of Pennsylvania, Philadelphia, Pennsylvania; ORCID: http://orcid.org/0000-0001-6805-4062
- Holly Caretta-Weyer: assistant professor, Department of Emergency Medicine, Stanford University School of Medicine, Palo Alto, California; ORCID: https://orcid.org/0000-0002-9783-5797
- Rose Hatala: professor, Department of Medicine, University of British Columbia, Vancouver, Canada; ORCID: https://orcid.org/0000-0003-0521-2590
- Marije P Hennus: pediatric intensivist and program director, pediatric intensive care fellowship, University Medical Center Utrecht, Utrecht, the Netherlands; ORCID: https://orcid.org/0000-0003-1508-0456
- Daniel C West: professor and senior director of medical education, Department of Pediatrics, Children's Hospital of Philadelphia and The Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania; ORCID: https://orcid.org/0000-0002-0909-4213
21
Ross S, Hauer KE, Wycliffe-Jones K, Hall AK, Molgaard L, Richardson D, Oswald A, Bhanji F. Key considerations in planning and designing programmatic assessment in competency-based medical education. Med Teach 2021; 43:758-764. [PMID: 34061700] [DOI: 10.1080/0142159x.2021.1925099] [Citation(s) in RCA: 19] [Impact Index Per Article: 6.3] [Indexed: 06/12/2023]
Abstract
Programmatic assessment as a concept is still novel for many in clinical education, and there may be a disconnect between the academics who publish about programmatic assessment and the front-line clinical educators who must put theory into practice. In this paper, we clearly define programmatic assessment and present high-level guidelines about its implementation in competency-based medical education (CBME) programs. The guidelines are informed by literature and by lessons learned from established programmatic assessment approaches. We articulate five steps to consider when implementing programmatic assessment in CBME contexts: articulate the purpose of the program of assessment, determine what must be assessed, choose tools fit for purpose, consider the stakes of assessments, and define processes for interpreting assessment data. In the process, we seek to offer a helpful guide or template for front-line clinical educators. We dispel some myths about programmatic assessment to help training programs as they look to design-or redesign-programs of assessment. In particular, we highlight the notion that programmatic assessment is not 'one size fits all'; rather, it is a system of assessment that results when shared common principles are considered and applied by individual programs as they plan and design their own bespoke model of programmatic assessment for CBME in their unique context.
Affiliation(s)
- Shelley Ross: Department of Family Medicine, University of Alberta, Edmonton, Canada; Canadian Association for Medical Education, Edmonton, Canada
- Keith Wycliffe-Jones: Department of Family Medicine, Cumming School of Medicine, University of Calgary, Calgary, Canada
- Andrew K Hall: Department of Emergency Medicine, Queen's University, Kingston, Canada; Royal College of Physicians and Surgeons of Canada, Ottawa, Canada
- Laura Molgaard: University of Minnesota College of Veterinary Medicine, St. Paul, MN, USA
- Denyse Richardson: Royal College of Physicians and Surgeons of Canada, Ottawa, Canada; Division of Physiatry, Department of Medicine, University of Toronto, Toronto, Canada
- Anna Oswald: Royal College of Physicians and Surgeons of Canada, Ottawa, Canada; Department of Medicine and CBME lead for the Faculty of Medicine & Dentistry, University of Alberta, Edmonton, Canada
- Farhan Bhanji: Royal College of Physicians and Surgeons of Canada, Ottawa, Canada; Pediatrics at McGill University, Montreal, Canada
22
Chan T, Oswald A, Hauer KE, Caretta-Weyer HA, Nousiainen MT, Cheung WJ. Diagnosing conflict: Conflicting data, interpersonal conflict, and conflicts of interest in clinical competency committees. Med Teach 2021; 43:765-773. [PMID: 34182879] [DOI: 10.1080/0142159x.2021.1925101] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Indexed: 06/13/2023]
Abstract
Clinical competency committees (CCCs) are increasingly used within health professions education as their decisions are thought to be more defensible and fairer than those generated by previous training promotion processes. However, as with most group-based processes, it is inevitable that conflict will arise. In this paper the authors explore three ways conflict may arise within a CCC: (1) conflicting data submissions that are presented to the committee, (2) conflicts between members of the committee, and (3) conflicts of interest between a specific committee member and a trainee. The authors describe each of these conflict situations, dissect out the underlying problems, and explore possible solutions based on the current literature.
Affiliation(s)
- Teresa Chan: Faculty Development, Faculty of Health Sciences, McMaster University, Hamilton, Canada; Division of Emergency Medicine, Department of Medicine, McMaster University, Hamilton, Canada; McMaster program for Education Research, Innovation, and Theory (MERIT), Hamilton, Canada
- Anna Oswald: Competency Based Medical Education, Office of Postgraduate Medical Education, University of Alberta, Edmonton, Canada; CanMEDS Clinician Educator, Royal College of Physicians and Surgeons of Canada, Edmonton, Canada; Department of Medicine, Faculty of Medicine and Dentistry, University of Alberta, Edmonton, Canada
- Karen E Hauer: Competency Assessment and Professional Standards, San Francisco, CA, USA; Department of Medicine, University of California, San Francisco School of Medicine, San Francisco, CA, USA
- Holly A Caretta-Weyer: Department of Emergency Medicine, Stanford University School of Medicine, Palo Alto, CA, USA
- Warren J Cheung: Department of Emergency Medicine, University of Ottawa, Ottawa, Canada; Senior Clinician Investigator, Ottawa Hospital Research Institute, Ottawa, Canada; CanMEDS Clinician Educator, Royal College of Physicians and Surgeons of Canada, Ottawa, Canada
23
Touchie C, Kinnear B, Schumacher D, Caretta-Weyer H, Hamstra SJ, Hart D, Gruppen L, Ross S, Warm E, Ten Cate O. On the validity of summative entrustment decisions. Med Teach 2021; 43:780-787. [PMID: 34020576] [DOI: 10.1080/0142159x.2021.1925642] [Citation(s) in RCA: 18] [Impact Index Per Article: 6.0] [Indexed: 06/12/2023]
Abstract
Health care revolves around trust. Patients are often in a position that gives them no other choice than to trust the people taking care of them. Educational programs thus have the responsibility to develop physicians who can be trusted to deliver safe and effective care, ultimately making a final decision to entrust trainees to graduate to unsupervised practice. Such entrustment decisions deserve to be scrutinized for their validity. This end-of-training entrustment decision is arguably the most important one, although earlier entrustment decisions, for smaller units of professional practice, should also be scrutinized for their validity. Validity of entrustment decisions implies a defensible argument that can be analyzed in components that together support the decision. According to Kane, building a validity argument is a process designed to support inferences of scoring, generalization across observations, extrapolation to new instances, and implications of the decision. A lack of validity can be caused by inadequate evidence in terms of, according to Messick, content, response process, internal structure (coherence) and relationship to other variables, and in misinterpreted consequences. These two leading frameworks (Kane and Messick) in educational and psychological testing can be well applied to summative entrustment decision-making. The authors elaborate the types of questions that need to be answered to arrive at defensible, well-argued summative decisions regarding performance to provide a grounding for high-quality safe patient care.
Affiliation(s)
- Claire Touchie: Medical Council of Canada, Ottawa, Canada; The University of Ottawa, Ottawa, Canada
- Benjamin Kinnear: Internal Medicine and Pediatrics, University of Cincinnati College of Medicine/Cincinnati Children's Hospital Medical Center, Cincinnati, OH, USA
- Daniel Schumacher: Pediatrics, Hospital Medical Center/University of Cincinnati College of Medicine, Cincinnati, OH, USA
- Holly Caretta-Weyer: Emergency Medicine, Stanford University School of Medicine, Palo Alto, CA, USA
- Stanley J Hamstra: University of Toronto, Toronto, Ontario, Canada; Accreditation Council for Graduate Medical Education, Chicago, IL, USA
- Danielle Hart: Emergency Medicine, Hennepin Healthcare and the University of Minnesota, Minneapolis, MN, USA
- Larry Gruppen: Learning Health Sciences, University of Michigan Medical School, Ann Arbor, MI, USA
- Shelley Ross: Department of Family Medicine, University of Alberta, Edmonton, AB, Canada
- Eric Warm: University of Cincinnati College of Medicine Center, Cincinnati, OH, USA
- Olle Ten Cate: Center for Research and Development of Education, University Medical Center Utrecht, Utrecht, The Netherlands
24
Sawyer T, Gray M, Chabra S, Johnston LC, Carbajal MM, Gillam-Krakauer M, Brady JM, French H. Milestone Level Changes From Residency to Fellowship: A Multicenter Cohort Study. J Grad Med Educ 2021; 13:377-384. [PMID: 34178263] [PMCID: PMC8207935] [DOI: 10.4300/jgme-d-20-00954.1] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Received: 08/20/2020] [Revised: 12/19/2020] [Accepted: 02/24/2021] [Indexed: 11/06/2022] Open
Abstract
BACKGROUND A vital element of the Next Accreditation System is measuring and reporting educational Milestones. Little is known about changes in Milestone levels during the transition from residency to fellowship training. OBJECTIVE To evaluate the Accreditation Council for Graduate Medical Education (ACGME) Milestones' ability to provide a linear trajectory of professional development from general pediatrics residency to neonatal-perinatal medicine (NPM) fellowship training. METHODS We identified 11 subcompetencies that were the same for general pediatrics residency and NPM fellowship. We then extracted the last residency Milestone level and the first fellowship Milestone level for each subcompetency from the ACGME's Accreditation Data System on 89 subjects who started fellowship training between 2014 and 2018 at 6 NPM fellowship programs. Mixed-effects models were used to examine the intra-individual changes in Milestone scores between residency and fellowship after adjusting for the effects of the individual programs. RESULTS A total of 1905 subcompetency Milestone levels were analyzed. The average first fellowship Milestone levels were significantly lower than the last residency Milestone levels (residency, mean 3.99 [SD = 0.48] vs fellowship 2.51 [SD = 0.56]; P < .001). Milestone levels decreased by an average of 1.49 (SD = 0.65) from the last residency to the first fellowship evaluation. Significant differences in Milestone levels were seen in both context-dependent subcompetencies (patient care and medical knowledge) and context-independent subcompetencies (professionalism). CONCLUSIONS Contrary to providing a linear trajectory of professional development, we found that Milestone levels were reset when trainees transitioned from general pediatrics residency to NPM fellowship.
Affiliation(s)
- Taylor Sawyer, DO, MEd: Associate Fellowship Director, Department of Pediatrics, Division of Neonatology, University of Washington School of Medicine
- Megan Gray, MD: Fellowship Director, Department of Pediatrics, Division of Neonatology, University of Washington School of Medicine
- Shilpi Chabra, MD: Associate Professor of Pediatrics, Department of Pediatrics, Division of Neonatology, University of Washington School of Medicine
- Lindsay C Johnston, MD, MEd: Fellowship Director, Department of Pediatrics, Division of Neonatology, Yale University School of Medicine
- Melissa M Carbajal, MD: Fellowship Director, Department of Pediatrics, Division of Neonatology, Baylor College of Medicine/Texas Children's Hospital
- Maria Gillam-Krakauer, MD, MEd: Assistant Professor of Pediatrics, Department of Pediatrics, Division of Neonatology, Vanderbilt University Medical Center
- Jennifer M Brady, MD: Assistant Professor of Pediatrics, Perinatal Institute, Cincinnati Children's Hospital Medical Center, Department of Pediatrics, University of Cincinnati College of Medicine
- Heather French, MD, MSEd: Fellowship Director, Department of Pediatrics, Division of Neonatology, Perelman School of Medicine at the University of Pennsylvania
25
Young JQ, Holmboe ES, Frank JR. Competency-Based Assessment in Psychiatric Education: A Systems Approach. Psychiatr Clin North Am 2021; 44:217-235. [PMID: 34049645] [DOI: 10.1016/j.psc.2020.12.005] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Indexed: 10/21/2022]
Abstract
Medical education programs are failing to meet the health needs of patients and communities. Misalignments exist on multiple levels, including content (what trainees learn), pedagogy (how trainees learn), and culture (why trainees learn). To address these challenges effectively, competency-based assessment (CBA) for psychiatric medical education must simultaneously produce life-long learners who can self-regulate their own growth and trustworthy processes that determine and accelerate readiness for independent practice. The key to effectively doing so is situating assessment within a carefully designed system with several, critical, interacting components: workplace-based assessment, ongoing faculty development, learning analytics, longitudinal coaching, and fit-for-purpose clinical competency committees.
Affiliation(s)
- John Q Young: Department of Psychiatry, Donald and Barbara Zucker School of Medicine at Hofstra/Northwell and the Zucker Hillside Hospital at Northwell Health, Glen Oaks, NY, USA
- Eric S Holmboe: Accreditation Council for Graduate Medical Education, 401 North Michigan Avenue, Chicago, IL 60611, USA
- Jason R Frank: Royal College of Physicians and Surgeons of Canada, 774 Echo Drive, Ottawa, Ontario K1S 5N8, Canada; Education, Department of Emergency Medicine, University of Ottawa, Ottawa, Ontario, Canada
26
Hauer KE, Jurich D, Vandergrift J, Lipner RS, McDonald FS, Yamazaki K, Chick D, McAllister K, Holmboe ES. Gender Differences in Milestone Ratings and Medical Knowledge Examination Scores Among Internal Medicine Residents. Acad Med 2021; 96:876-884. [PMID: 33711841] [DOI: 10.1097/acm.0000000000004040] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Indexed: 06/12/2023]
Abstract
PURPOSE To examine whether there are gender-based group differences in milestone ratings submitted by program directors working with clinical competency committees (CCCs) for internal medicine (IM) residents, and whether women and men with similar milestone ratings perform comparably on subsequent in-training and certification examinations. METHOD This national retrospective study examined end-of-year medical knowledge (MK) and patient care (PC) milestone ratings and IM In-Training Examination (IM-ITE) and IM Certification Examination (IM-CE) scores for 2 cohorts (2014-2017, 2015-2018) of U.S. IM residents at ACGME-accredited programs. It included 20,098/21,440 (94%) residents, with 9,424 women (47%) and 10,674 men (53%). Descriptive statistics and differential prediction techniques using hierarchical linear models were performed. RESULTS For MK milestone ratings in PGY-1, men and women showed no statistical difference at a significance level of .01 (P = .02). In PGY-2 and PGY-3, men received statistically higher average MK ratings than women (P = .002 and P < .001, respectively). In contrast, men and women received equivalent average PC ratings in each PGY (P = .47, P = .72, and P = .80, for PGY-1, PGY-2, and PGY-3, respectively). Men slightly outperformed women with similar MK or PC ratings in PGY-1 and PGY-2 on the IM-ITE by about 1.7 and 1.5 percentage points, respectively, after adjusting for covariates. For PGY-3 ratings, women and men with similar milestone ratings performed equivalently on the IM-CE. CONCLUSIONS Milestone ratings were largely similar for women and men. Generally, women and men with similar MK or PC milestone ratings performed similarly on future examinations. Although there were small differences favoring men on earlier examinations, these differences disappeared by the final training year. It is questionable whether these small differences are educationally or clinically meaningful. The findings suggest that program directors and CCCs generate fair, unbiased milestone ratings when assessing residents.
Affiliation(s)
- Karen E Hauer: professor, Department of Medicine, University of California, San Francisco, San Francisco, California; ORCID: https://orcid.org/0000-0002-8812-4045
- Daniel Jurich: manager, psychometrics, National Board of Medical Examiners, Philadelphia, Pennsylvania
- Jonathan Vandergrift: senior research associate, American Board of Internal Medicine, Philadelphia, Pennsylvania
- Rebecca S Lipner: senior vice president for assessment and research, American Board of Internal Medicine, Philadelphia, Pennsylvania
- Furman S McDonald: senior vice president for academic and medical affairs, American Board of Internal Medicine, Philadelphia, Pennsylvania
- Kenji Yamazaki: senior analyst, milestones research and evaluation, Accreditation Council for Graduate Medical Education, Chicago, Illinois
- Davoren Chick: senior vice president for medical education, American College of Physicians, Philadelphia, Pennsylvania
- Kevin McAllister: assessment officer, National Board of Medical Examiners, Philadelphia, Pennsylvania
- Eric S Holmboe: chief research, milestone development, and evaluation officer, Accreditation Council for Graduate Medical Education, Chicago, Illinois
27
Xing J, Khader SN, Ohori NP, DeFrances M, Cuda J, Monaco SE. Comparison of quantitative internal and external measures of performance for trainees in cytopathology fellowships. J Am Soc Cytopathol 2021; 10:495-503. [PMID: 34099427] [DOI: 10.1016/j.jasc.2021.05.003] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Received: 04/02/2021] [Revised: 05/04/2021] [Accepted: 05/10/2021] [Indexed: 11/26/2022]
Abstract
INTRODUCTION Cytopathology fellowships need measures to assess the performance of fellows. We sought to compare several internal quantitative assessment metrics in our fellowship with external metrics, such as performance on the American Society of Cytopathology (ASC) Progressive Evaluation of Competency (PEC) examination and United States Medical Licensing Examination (USMLE). METHODS Quantitative parameters generated from our laboratory information system (LIS) on cytopathology fellows were evaluated over 6 years, including case volume and diagnostic discrepancies, in addition to ASC PEC and USMLE scores. For discrepancy reports, interpretations made by the fellow were compared with those of the cytopathologist and classified as none (concordant), minor (<2 levels) or major (≥2 levels). RESULTS We evaluated internal and external metrics on 13 fellows over 6 years. The program's average diagnostic concordance rate was 89.9%, with an average major discrepancy rate of 1.5% and an average monthly case volume of 260 cases. More fellows with above-average ASC PEC performance showed above-average concordant diagnoses and lower case volume, while below-average PEC scores were seen more often with higher major discrepancy rates. More fellows with above-average USMLE scores had higher case volumes, while low USMLE scores showed a trend towards higher major discrepancy rates. CONCLUSION Our fellowship program has used a variety of internal and external measures of performance for cytopathology fellows. Although none of the associations between performance measures reached statistical significance, these quantitative parameters generated from our LIS were helpful to identify areas for improvement, facilitate comparison to peers, and provide case volume documentation.
Affiliation(s)
- Juan Xing: Department of Pathology, University of Pittsburgh Medical Center, Pittsburgh, Pennsylvania
- Samer N Khader: Department of Pathology, University of Pittsburgh Medical Center, Pittsburgh, Pennsylvania
- N Paul Ohori: Department of Pathology, University of Pittsburgh Medical Center, Pittsburgh, Pennsylvania
- Marie DeFrances: Department of Pathology, University of Pittsburgh Medical Center, Pittsburgh, Pennsylvania
- Jackie Cuda: Department of Pathology, University of Pittsburgh Medical Center, Pittsburgh, Pennsylvania
- Sara E Monaco: Department of Laboratory Medicine, Geisinger Medical Center, Danville, Pennsylvania
28
Heath JK, Davis JE, Dine CJ, Padmore JS. Faculty Development for Milestones and Clinical Competency Committees. J Grad Med Educ 2021; 13:127-131. [PMID: 33936547] [PMCID: PMC8078076] [DOI: 10.4300/jgme-d-20-00851.1] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Indexed: 12/30/2022] Open
Affiliation(s)
- Janae K Heath, MD, MSCE: Assistant Professor of Medicine, Division of Pulmonary and Critical Care, University of Pennsylvania
- Jonathan E Davis, MD: Professor and Academic Chair, Emergency Medicine, Georgetown University Medical Center, and System Physician Chair for GME, MedStar Health
- C. Jessica Dine, MD, MSHP: Associate Professor of Medicine, Division of Pulmonary and Critical Care, and Associate Dean of Faculty Development, Perelman School of Medicine, University of Pennsylvania
- Jamie S Padmore, DM: Professor and Senior Associate Dean for Medical Education, Georgetown University Medical Center, and Vice President, Academic Affairs, and Designated Institutional Official, MedStar Health
29
Edgar L, Jones MD, Harsy B, Passiment M, Hauer KE. Better Decision-Making: Shared Mental Models and the Clinical Competency Committee. J Grad Med Educ 2021; 13:51-58. [PMID: 33936533] [PMCID: PMC8078083] [DOI: 10.4300/jgme-d-20-00850.1] [Citation(s) in RCA: 23] [Impact Index Per Article: 7.7] [Received: 07/31/2020] [Revised: 11/13/2020] [Accepted: 12/01/2020] [Indexed: 11/06/2022] Open
Abstract
BACKGROUND Shared mental models (SMMs) help groups make better decisions. Clinical competency committees (CCCs) can benefit from the development and use of SMMs in their decision-making as a way to optimize the quality and consistency of their decisions. OBJECTIVE We reviewed the use of SMMs for decision-making in graduate medical education, particularly their use in CCCs. METHODS In May 2020, the authors conducted a narrative review of the literature related to SMMs. The review covered SMMs related to teams, team functioning, CCCs, and graduate medical education. RESULTS The literature identified the general use of SMMs, SMMs in graduate medical education, and strategies for building SMMs into the work of the CCC. Through the use of clear communication and guidelines, and a shared understanding of goals and expectations, CCCs can make better decisions. SMMs can be applied to Milestones, resident performance, assessment, and feedback. CONCLUSIONS To ensure fair and robust decision-making, the CCC must develop and maintain SMMs through excellent communication and understanding of expectations among members.
Affiliation(s)
- Laura Edgar, EdD, CAE: Vice President, Milestones Development, Accreditation Council for Graduate Medical Education (ACGME)
- M. Douglas Jones Jr, MD: Professor of Pediatrics, University of Colorado School of Medicine
- Braden Harsy, MA: Milestones Administrator, ACGME
- Morgan Passiment, MS: Director, Institutional Outreach and Collaboration, ACGME
- Karen E Hauer, MD, PhD: Associate Dean, Competency Assessment and Professional Standards, and Professor of Medicine, University of California, San Francisco
30
Turner J, Wimberly Y, Andolsek KM. Creating a High-Quality Faculty Orientation and Ongoing Member Development Curriculum for the Clinical Competency Committee. J Grad Med Educ 2021; 13:65-69. [PMID: 33936535] [PMCID: PMC8078085] [DOI: 10.4300/jgme-d-20-00996.1] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Indexed: 11/06/2022] Open
Affiliation(s)
- Jacquelyn Turner, MD, FACS, FASCRS: Assistant Professor, Department of Surgery, Division of Surgical Education, Morehouse School of Medicine
- Yolanda Wimberly, MD, MSc, FAAP: Professor, Department of Pediatrics, Associate Dean of Clinical Affairs, and Designated Institutional Official, Department of Graduate Medical Education, Morehouse School of Medicine
- Kathryn M Andolsek, MD, MPH: Professor, Department of Family Medicine and Community Health, Assistant Dean, Premedical Education, and Senior Fellow, Center for Study of Aging and Human Development, Duke University School of Medicine
31
Ekpenyong A, Padmore JS, Hauer KE. The Purpose, Structure, and Process of Clinical Competency Committees: Guidance for Members and Program Directors. J Grad Med Educ 2021; 13:45-50. [PMID: 33936532] [PMCID: PMC8078071] [DOI: 10.4300/jgme-d-20-00841.1] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Indexed: 01/13/2023] Open
Affiliation(s)
- Andem Ekpenyong, MD, MHPE, is Associate Professor, Department of Internal Medicine, Rush University Medical Center
- Jamie S. Padmore, DM, is Professor and Senior Associate Dean for Medical Education, Georgetown University Medical Center, and Vice President, Academic Affairs, and Designated Institutional Official, MedStar Health
- Karen E. Hauer, MD, PhD, is Associate Dean, Competency Assessment and Professional Standards, and Professor of Medicine, University of California, San Francisco
32
Lucey CR, Hauer KE, Boatright D, Fernandez A. Medical Education's Wicked Problem: Achieving Equity in Assessment for Medical Learners. Acad Med 2020; 95:S98-S108. [PMID: 32889943] [DOI: 10.1097/acm.0000000000003717]
Abstract
Despite a lack of intent to discriminate, physicians educated in U.S. medical schools and residency programs often take actions that systematically disadvantage minority patients. The approach to assessment of learner performance in medical education can similarly disadvantage minority learners. The adoption of holistic admissions strategies to increase the diversity of medical training programs has not been accompanied by increases in diversity in honor societies, selective residency programs, medical specialties, and medical school faculty. These observations prompt justified concerns about structural and interpersonal bias in assessment. This manuscript characterizes equity in assessment as a "wicked problem" with inherent conflicts, uncertainty, dynamic tensions, and susceptibility to contextual influences. The authors review the underlying individual and structural causes of inequity in assessment. Using an organizational model, they propose strategies to achieve equity in assessment and drive institutional and systemic improvement based on clearly articulated principles. This model addresses the culture, systems, and assessment tools necessary to achieve equitable results that reflect stated principles. Three components of equity in assessment that can be measured and evaluated to confirm success include intrinsic equity (selection and design of assessment tools), contextual equity (the learning environment in which assessment occurs), and instrumental equity (uses of assessment data for learner advancement and selection and program evaluation). A research agenda to address these challenges and controversies and demonstrate reduction in bias and discrimination in medical education is presented.
Affiliation(s)
- C.R. Lucey is executive vice dean/vice dean for education and professor of medicine, University of California, San Francisco, School of Medicine, San Francisco, California
- K.E. Hauer is professor of medicine, University of California, San Francisco, School of Medicine, San Francisco, California
- D. Boatright is assistant professor of emergency medicine, Yale University School of Medicine, New Haven, Connecticut
- A. Fernandez is professor of medicine, University of California, San Francisco, School of Medicine, San Francisco, California
33
Carey R, Wilson G, Bandi V, Mondal D, Martin LJ, Woods R, Chan T, Thoma B. Developing a dashboard to meet the needs of residents in a competency-based training program: A design-based research project. Can Med Educ J 2020; 11:e31-e45. [PMID: 33349752] [PMCID: PMC7749685] [DOI: 10.36834/cmej.69682]
Abstract
BACKGROUND Canadian specialty programs are implementing Competence By Design, a competency-based medical education (CBME) program that requires frequent assessments of entrustable professional activities. To be used for learning, the large amount of assessment data needs to be interpreted by residents, but little work has been done to determine how visualizing and interacting with this data can be supported. Within the University of Saskatchewan emergency medicine residency program, we sought to determine how our residents' CBME assessment data should be presented to support their learning and to develop a dashboard that meets our residents' needs. METHODS We utilized a design-based research process to identify and address resident needs surrounding the presentation of their assessment data. Data were collected within the emergency medicine residency program at the University of Saskatchewan via four resident focus groups held over 10 months. Focus group discussions were analyzed using a grounded theory approach to identify resident needs. This guided the development of a dashboard that contained elements (data, analytics, and visualizations) that support their interpretation of the data. The identified needs are described using quotes from the focus groups as well as visualizations of the dashboard elements. RESULTS Resident needs were classified under three themes: (1) Provide guidance through the assessment program, (2) Present workplace-based assessment data, and (3) Present other assessment data. Seventeen dashboard elements were designed to address these needs. CONCLUSIONS Our design-based research process identified resident needs and developed dashboard elements to meet them. This work will inform the creation and evolution of CBME assessment dashboards designed to support resident learning.
Affiliation(s)
- Robert Carey, Department of Emergency Medicine, University of Saskatchewan, Saskatchewan, Canada
- Grayson Wilson, Department of Emergency Medicine, University of Saskatchewan, Saskatchewan, Canada
- Venkat Bandi, Department of Computer Science, University of Saskatchewan, Saskatchewan, Canada
- Debajyoti Mondal, Department of Computer Science, University of Saskatchewan, Saskatchewan, Canada
- Lynsey J. Martin, Department of Emergency Medicine, University of Saskatchewan, Saskatchewan, Canada
- Rob Woods, Department of Emergency Medicine, University of Saskatchewan, Saskatchewan, Canada
- Teresa Chan, Division of Emergency Medicine, Department of Medicine, and McMaster program for Education Research, Innovation, and Theory (MERIT), McMaster University, Ontario, Canada
- Brent Thoma, Department of Emergency Medicine, University of Saskatchewan, Saskatchewan, Canada
34
Schumacher DJ, Martini A, Sobolewski B, Carraccio C, Holmboe E, Busari J, Poynter S, van der Vleuten C, Lingard L. Use of Resident-Sensitive Quality Measure Data in Entrustment Decision Making: A Qualitative Study of Clinical Competency Committee Members at One Pediatric Residency. Acad Med 2020; 95:1726-1735. [PMID: 32324637] [DOI: 10.1097/acm.0000000000003435]
Abstract
PURPOSE Resident-sensitive quality measures (RSQMs) are quality measures that are likely performed by an individual resident and are important to care quality for a given illness of interest. This study sought to explore how individual clinical competency committee (CCC) members interpret, use, and prioritize RSQMs alongside traditional assessment data when making a summative entrustment decision. METHOD In this constructivist grounded theory study, 19 members of the pediatric residency CCC at Cincinnati Children's Hospital Medical Center were purposively and theoretically sampled between February and July 2019. Participants were provided a deidentified resident assessment portfolio with traditional assessment data (milestone and/or entrustable professional activity ratings as well as narrative comments from 5 rotations) and RSQM performance data for 3 acute, common diagnoses in the pediatric emergency department (asthma, bronchiolitis, and closed head injury) from the emergency medicine rotation. Data collection consisted of 2 phases: (1) observation and think out loud while participants reviewed the portfolio and (2) semistructured interviews to probe participants' reviews. Analysis moved from close readings to coding and theme development, followed by the creation of a model illustrating theme interaction. Data collection and analysis were iterative. RESULTS Five dimensions for how participants interpret, use, and prioritize RSQMs were identified: (1) ability to orient to RSQMs: confusing to self-explanatory, (2) propensity to use RSQMs: reluctant to enthusiastic, (3) RSQM interpretation: requires contextualization to self-evident, (4) RSQMs for assessment decisions: not sticky to sticky, and (5) expectations for residents: potentially unfair to fair to use RSQMs. The interactions among these dimensions generated 3 RSQM data user profiles: eager incorporation, willing incorporation, and disinclined incorporation. 
CONCLUSIONS Participants used RSQMs to varying extents in their review of resident data and found such data helpful to varying degrees, supporting the inclusion of RSQMs as resident assessment data for CCC review.
Affiliation(s)
- D.J. Schumacher is associate professor of pediatrics, Cincinnati Children's Hospital Medical Center and University of Cincinnati College of Medicine, Cincinnati, Ohio
- A. Martini is a clinical research coordinator, Division of Emergency Medicine, Cincinnati Children's Hospital Medical Center, Cincinnati, Ohio
- B. Sobolewski is associate professor of pediatrics, Cincinnati Children's Hospital Medical Center and University of Cincinnati College of Medicine, Cincinnati, Ohio
- C. Carraccio is vice president of competency-based assessment, American Board of Pediatrics, Chapel Hill, North Carolina
- E. Holmboe is senior vice president for milestones development and evaluation, Accreditation Council for Graduate Medical Education, Chicago, Illinois
- J. Busari is associate professor of medical education, Maastricht University, Maastricht, The Netherlands
- S. Poynter is professor of pediatrics, Cincinnati Children's Hospital Medical Center and University of Cincinnati College of Medicine, Cincinnati, Ohio
- C. van der Vleuten is professor of education, Department of Educational Development and Research, Faculty of Health, Medicine, and Life Sciences, and scientific director, School of Health Professions Education, Maastricht University, Maastricht, The Netherlands
- L. Lingard is professor and scientist, Department of Medicine, and director, Center for Education Research & Innovation, Schulich School of Medicine and Dentistry at Western University, London, Ontario, Canada
35
Rich JV, Fostaty Young S, Donnelly C, Hall AK, Dagnone JD, Weersink K, Caudle J, Van Melle E, Klinger DA. Competency-based education calls for programmatic assessment: But what does this look like in practice? J Eval Clin Pract 2020; 26:1087-1095. [PMID: 31820556] [DOI: 10.1111/jep.13328]
Abstract
RATIONALE, AIMS, AND OBJECTIVES Programmatic assessment has been identified as a system-oriented approach to achieving the multiple purposes for assessment within Competency-Based Medical Education (CBME, i.e., formative, summative, and program improvement). While there are well-established principles for designing and evaluating programs of assessment, few studies illustrate and critically interpret what a system of programmatic assessment looks like in practice. This study aims to use systems thinking and the 'two communities' metaphor to interpret a model of programmatic assessment and to identify challenges and opportunities with operationalization. METHOD An interpretive case study was used to investigate how programmatic assessment is being operationalized within one competency-based residency program at a Canadian university. Qualitative data were collected from residents, faculty, and program leadership via semi-structured group and individual interviews conducted at nine months post-CBME implementation. Data were analyzed using a combination of data-based inductive analysis and theory-derived deductive analysis. RESULTS In this model, Academic Advisors had a central role in brokering assessment data between communities responsible for producing and using residents' performance information for decision making (i.e., formative, summative/evaluative, and program improvement). As system intermediaries, Academic Advisors were in a privileged position to see how the parts of the assessment system contributed to the functioning of the whole and could identify which system components were not functioning as intended. Challenges were identified with the documentation of residents' performance information (i.e., system inputs); use of low-stakes formative assessments to inform high-stakes evaluative judgments about the achievement of competence standards; and gaps in feedback mechanisms for closing learning loops.
CONCLUSIONS The findings of this research suggest that program stakeholders can benefit from a systems perspective regarding how their assessment practices contribute to the efficacy of the system as a whole. Academic Advisors are well positioned to support educational development efforts focused on overcoming challenges with operationalizing programmatic assessment.
Affiliation(s)
- Jessica V Rich, Faculty of Education, Queen's University, Kingston, Ontario, Canada
- Sue Fostaty Young, Centre for Teaching and Learning, Queen's University, Kingston, Ontario, Canada
- Catherine Donnelly, School of Rehabilitation Therapy, Queen's University, Kingston, Ontario, Canada
- Andrew K Hall, Department of Emergency Medicine, Queen's University, Kingston, Ontario, Canada; Royal College of Physicians and Surgeons of Canada, Ottawa, Ontario, Canada
- J Damon Dagnone, Department of Emergency Medicine, Queen's University, Kingston, Ontario, Canada
- Kristen Weersink, Department of Emergency Medicine, Queen's University, Kingston, Ontario, Canada
- Jaelyn Caudle, Department of Emergency Medicine, Queen's University, Kingston, Ontario, Canada
- Elaine Van Melle, Royal College of Physicians and Surgeons of Canada, Ottawa, Ontario, Canada
- Don A Klinger, Te Kura Toi Tangata, University of Waikato, Hamilton, New Zealand
36
Thoma B, Hall AK, Clark K, Meshkat N, Cheung WJ, Desaulniers P, Ffrench C, Meiwald A, Meyers C, Patocka C, Beatty L, Chan TM. Evaluation of a National Competency-Based Assessment System in Emergency Medicine: A CanDREAM Study. J Grad Med Educ 2020; 12:425-434. [PMID: 32879682] [PMCID: PMC7450748] [DOI: 10.4300/jgme-d-19-00803.1]
Abstract
BACKGROUND In 2018, Canadian postgraduate emergency medicine (EM) programs began implementing a competency-based medical education (CBME) assessment program. Studies evaluating these programs have focused on broad outcomes using data from national bodies and lack data to support program-specific improvement. OBJECTIVE We evaluated the implementation of a CBME assessment program within and across programs to identify successes and opportunities for improvement at the local and national levels. METHODS Program-level data from the 2018 resident cohort were amalgamated and analyzed. The number of entrustable professional activity (EPA) assessments (overall and for each EPA) and the timing of resident promotion through program stages were compared between programs and to the guidelines provided by the national EM specialty committee. Total EPA observations from each program were correlated with the number of EM and pediatric EM rotations. RESULTS Data from 15 of 17 (88%) programs containing 9842 EPA observations from 68 of 77 (88%) EM residents in the 2018 cohort were analyzed. Average numbers of EPAs observed per resident in each program varied from 92.5 to 229.6, correlating with the number of blocks spent on EM and pediatric EM (r = 0.83, P < .001). Relative to the specialty committee's guidelines, residents were promoted later than expected (eg, one-third of residents had a 2-month delay to promotion from the first to second stage) and with fewer EPA observations than suggested. CONCLUSIONS There was demonstrable variation in EPA-based assessment numbers and promotion timelines between programs and with national guidelines.
37
Affiliation(s)
- Christopher C Stahl, Department of Surgery, University of Wisconsin School of Medicine and Public Health, H4/710D Clinical Science Center, 600 Highland Avenue, Madison, WI 53792-7375, USA
- Rebecca M Minter, Department of Surgery, University of Wisconsin School of Medicine and Public Health, H4/710D Clinical Science Center, 600 Highland Avenue, Madison, WI 53792-7375, USA
38
CAEP 2019 Academic Symposium: Got competence? Best practices in trainee progress decisions. Can J Emerg Med 2020; 22:187-193. [DOI: 10.1017/cem.2019.480]
Abstract
BACKGROUND Competence committees play a key role in a competency-based system of assessment. These committees are tasked with reviewing and synthesizing clinical performance data to make judgments regarding residents' competence. Canadian emergency medicine (EM) postgraduate training programs recently implemented competence committees; however, a paucity of literature guides their work. OBJECTIVE The objective of this study was to develop consensus-based recommendations to optimize the function and decisions of competence committees in Canadian EM training programs. METHODS Semi-structured interviews of EM competence committee chairs were conducted and analyzed. The interview guide was informed by a literature review of competence committee structure, processes, and best practices. Inductive thematic analysis of interview transcripts was conducted to identify emerging themes. Preliminary recommendations, based on themes, were drafted and presented at the 2019 CAEP Academic Symposium on Education. Through a live presentation and survey poll, symposium attendees representing the national EM community participated in a facilitated discussion of the recommendations. The authors incorporated this feedback and identified consensus among symposium attendees on a final set of nine high-yield recommendations. CONCLUSION The Canadian EM community used a structured process to develop nine best practice recommendations for competence committees addressing: committee membership, meeting processes, decision outcomes, use of high-quality performance data, and ongoing quality improvement. These recommendations can inform the structure and processes of competence committees in Canadian EM training programs.
39
Thoma B, Bandi V, Carey R, Mondal D, Woods R, Martin L, Chan T. Developing a dashboard to meet Competence Committee needs: a design-based research project. Can Med Educ J 2020; 11:e16-e34. [PMID: 32215140] [PMCID: PMC7082472] [DOI: 10.36834/cmej.68903]
Abstract
BACKGROUND Competency-based programs are being adopted in medical education around the world. Competence Committees must visualize learner assessment data effectively to support their decision-making. Dashboards play an integral role in decision support systems in other fields. Design-based research allows the simultaneous development and study of educational environments. METHODS We utilized a design-based research process within the emergency medicine residency program at the University of Saskatchewan to identify the data, analytics, and visualizations needed by its Competence Committee, and developed a dashboard incorporating these elements. Narrative data were collected from two focus groups, five interviews, and the observation of two Competence Committee meetings. Data were qualitatively analyzed to develop a thematic framework outlining the needs of the Competence Committee and to inform the development of the dashboard. RESULTS The qualitative analysis identified four Competence Committee needs (Explore Workplace-Based Assessment Data, Explore Other Assessment Data, Understand the Data in Context, and Ensure the Security of the Data). These needs were described with narratives and represented through visualizations of the dashboard elements. CONCLUSIONS This work addresses the practical challenges of supporting data-driven decision making by Competence Committees and will inform the development of dashboards for programs, institutions, and learner management systems.
Affiliation(s)
- Brent Thoma, Department of Emergency Medicine, University of Saskatchewan, Saskatchewan, Canada
- Correspondence: Dr. Brent Thoma, Room 2646, Box 16, 103 Hospital Drive, Saskatoon, SK S7N 0W8; phone: 1-306-881-0112; Twitter: @Brent_Thoma
- Venkat Bandi, Department of Computer Science, University of Saskatchewan, Saskatchewan, Canada
- Robert Carey, Department of Emergency Medicine, University of Saskatchewan, Saskatchewan, Canada
- Debajyoti Mondal, Department of Computer Science, University of Saskatchewan, Saskatchewan, Canada
- Rob Woods, Department of Emergency Medicine, University of Saskatchewan, Saskatchewan, Canada
- Lynsey Martin, Department of Emergency Medicine, University of Saskatchewan, Saskatchewan, Canada
- Teresa Chan, Division of Emergency Medicine, Department of Medicine, McMaster University, and McMaster program for Education Research, Innovation, and Theory (MERIT), Ontario, Canada
40
Caretta-Weyer HA, Gisondi MA. Design Your Clinical Workplace to Facilitate Competency-Based Education. West J Emerg Med 2019; 20:651-653. [PMID: 31316706] [PMCID: PMC6625682] [DOI: 10.5811/westjem.2019.4.43216]
Affiliation(s)
- Holly A Caretta-Weyer, Stanford University School of Medicine, Department of Emergency Medicine, Palo Alto, California
- Michael A Gisondi, Stanford University School of Medicine, Department of Emergency Medicine, Palo Alto, California
41
Frank AK, O'Sullivan P, Mills LM, Muller-Juge V, Hauer KE. Clerkship Grading Committees: the Impact of Group Decision-Making for Clerkship Grading. J Gen Intern Med 2019; 34:669-676. [PMID: 30993615] [PMCID: PMC6502934] [DOI: 10.1007/s11606-019-04879-x]
Abstract
BACKGROUND Faculty and students debate the fairness and accuracy of medical student clerkship grades. Group decision-making is a potential strategy to improve grading. OBJECTIVE To explore how one school's grading committee members integrate assessment data to inform grade decisions and to identify the committees' benefits and challenges. DESIGN This qualitative study used semi-structured interviews with grading committee chairs and members conducted between November 2017 and March 2018. PARTICIPANTS Participants included the eight core clerkship directors, who chaired their grading committees. We randomly selected other committee members to invite, for a maximum of three interviews per clerkship. APPROACH Interviews were recorded, transcribed, and analyzed using inductive content analysis. KEY RESULTS We interviewed 17 committee members. Within and across specialties, committee members had distinct approaches to prioritizing and synthesizing assessment data. Participants expressed concerns about the quality of assessments, necessitating careful scrutiny of language, assessor identity, and other contextual factors. Committee members were concerned about how unconscious bias might impact assessors, but they felt minimally impacted at the committee level. When committee members knew students personally, they felt tension about how to use the information appropriately. Participants described high agreement within their committees; debate was more common when site directors reviewed students' files from other sites prior to meeting. Participants reported multiple committee benefits including faculty development and fulfillment, as well as improved grading consistency, fairness, and transparency. Groupthink and a passive approach to bias emerged as the two main threats to optimal group decision-making. 
CONCLUSIONS Grading committee members view their practices as advantageous over individual grading, but they feel limited in their ability to address grading fairness and accuracy. Recommendations and support may help committees broaden their scope to address these aspirations.
Affiliation(s)
- Annabel K Frank, Department of Medicine, University of California, San Francisco, San Francisco, CA, USA; Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Patricia O'Sullivan, Department of Medicine, University of California, San Francisco, San Francisco, CA, USA
- Lynnea M Mills, Department of Medicine, University of California, San Francisco, San Francisco, CA, USA
- Virginie Muller-Juge, Department of Medicine, University of California, San Francisco, San Francisco, CA, USA
- Karen E Hauer, Department of Medicine, University of California, San Francisco, San Francisco, CA, USA
42
Green EP. Refining heuristics for educators. Med Educ 2019; 53:322-324. [PMID: 30701568] [DOI: 10.1111/medu.13807]
Affiliation(s)
- Emily Paige Green, Student and Faculty Development, Warren Alpert Medical School of Brown University, Providence, Rhode Island, USA
43
Affiliation(s)
- Pat Lilley, AMEE and Medical Teacher, Dundee, UK