1. Han DS, Badalato GM, Murano TE, Anderson CB. Resident Remediation: A National Survey of Urology Program Directors. J Surg Educ 2024;81:465-473. PMID: 38383239. DOI: 10.1016/j.jsurg.2023.12.011.
Abstract
OBJECTIVES To describe formal remediation rates and processes in urology training programs nationally. DESIGN, SETTING, AND PARTICIPANTS We performed a cross-sectional study by surveying program directors (PDs) through the Society of Academic Urologists. Formal remediation was defined as the process initiated when resident competency deficiencies were significant enough to necessitate documentation and notification of the Graduate Medical Education (GME) office. The primary outcome was the prevalence of urology programs that initiated formal remediation over the past 5 years. Secondary outcomes included reported competency deficiencies and formal remediation processes. RESULTS Across 148 institutions, 73 (49%) PDs responded to the survey. The majority of PDs (67%, 49/73) stated that at least 1 resident had undergone formal remediation over the last 5 years (median 1). "Professionalism" and "Interpersonal and Communication Skills" were the most common competency deficiencies prompting formal remediation, whereas "Technical Skill" was the least common. While the majority of respondents notified the GME office of residents undergoing remediation, formal remediation plans varied from faculty coaching and mentorship (80%, 39/49) to simulation training (10%, 5/49). Absence of documented faculty feedback on poor performance was the most commonly cited barrier to formal remediation. The majority of PDs reported documentation in a resident's file (81%, 59/73); however, remediation processes differed, with only about half of PDs (56%, 41/73) reporting that GME offices were routinely involved in creating and overseeing corrective action plans. Over the study period, 15% (11/73) of PDs did not promote a resident to the next year of training, and 23% (17/73) of PDs reported graduating a resident whom they would not trust to care for a loved one. CONCLUSIONS Formal remediation among urology residency programs is common, and processes vary across institutions. The most common competency areas prompting remediation were "Professionalism" and "Interpersonal and Communication Skills." Future research should address developing resources to facilitate resident remediation.
Affiliation(s)
- David S Han: Columbia University Irving Medical Center, Department of Urology, New York, New York
- Gina M Badalato: Columbia University Irving Medical Center, Department of Urology, New York, New York
- Tiffany E Murano: Columbia University Irving Medical Center, Department of Emergency Medicine, New York, New York
2. Caretta-Weyer HA, Schumacher DJ, Kinnear B. Lessons From Organic Chemistry: The Case for Considering Both High Standards and Equity in Assessment. Acad Med 2024;99:243-246. PMID: 38011041. DOI: 10.1097/acm.0000000000005578.
Abstract
In this commentary, the authors explore the tension of balancing high performance standards in medical education with the acceptability of those standards to stakeholders (e.g., learners and patients). The authors then offer a lens through which this tension might be considered and ways forward that focus on both patient outcomes and learner needs. In examining this phenomenon, the authors argue that high performance standards are often necessary. Societal accountability is key to medical education, with the public demanding that training programs prepare physicians to provide high-quality care. Medical schools and residency programs, therefore, require rigorous standards to ensure graduates are ready to care for patients. At the same time, learners' experience is important to consider. Making sure that performance standards are acceptable to stakeholders supports the validity of assessment decisions. Equity should also be central to program evaluation and validity arguments when considering performance standards. Currently, learners across the continuum are variably prepared for the next phase in training and often face inequities in resource availability to meet high passing standards, which may lead to learner attrition. Many students who face these inequities come from underrepresented or disadvantaged backgrounds and are essential to ensuring a diverse medical workforce to meet the needs of patients and society. When these students struggle, it contributes to the leaky pipeline of more socioeconomically and racially diverse applicants. The authors posit that 4 key factors can balance the tension between high performance standards and stakeholder acceptability: standards that are acceptable and defensible, progression that is time variable, requisite support structures that are uniquely tailored for each learner, and assessment systems that are equitably designed.
3. Alruqi I, Al-Nasser S, Agha S. Family Medicine Resident Experience Toward Workplace-Based Assessment Form in Improving Clinical Teaching: An Exploratory Qualitative Study. Adv Med Educ Pract 2024;15:37-46. PMID: 38223750. PMCID: PMC10787555. DOI: 10.2147/amep.s431497.
Abstract
Background Workplace-Based Assessment (WPBA) has been widely utilized for assessing performance in training sites for both formative and summative purposes. With the duration of the family medicine (FM) training program in Saudi Arabia recently shortened from four years to three, the possible impact of such a change on assessment needs to be investigated. The objective was to explore the experiences of FM residents regarding the use of WPBA as an assessment tool for improving clinical teaching at King Abdulaziz Hospital (KAH), Al Ahsa, Saudi Arabia. Methods An exploratory qualitative phenomenological approach targeting FM residents at KAH was used, with purposive sampling. In this descriptive study, data were collected through 1:1 semi-structured interviews guided by directive prompts. All recorded interviews were transcribed verbatim, and an inductive analytical approach was applied for thematic analysis of the transcripts. Results Fifteen participants were individually interviewed until data saturation was reached. The themes that emerged were organized into the categories of underlying principles of WPBA, the impact of the learning environment, associated opportunities and challenges, and making WPBA more effective. Participants expressed that the orientation provided by the program was insufficient, although the core principles were clear to them. They valued senior peers' support and encouragement in creating a positive learning environment. However, time limits, workload, and suboptimal implementation reduced the educational value and effectiveness of WPBA among senior residents. Conclusion The study examined residents' experiences with WPBA and concluded that low levels of satisfaction were attributable to implementation-related problems. Improvements should be made primarily in two areas: better use of available resources and more systematic prior planning. Revision of the selection process and implementation of the new curriculum were also suggested. The findings will assist stakeholders in selecting and carrying out evaluation techniques that enhance residents' abilities.
Affiliation(s)
- Ibrahim Alruqi: Department of Medical Education, College of Medicine, King Saud bin Abdulaziz University for Health Sciences, Riyadh, Saudi Arabia; Family Medicine Department, King Abdulaziz Hospital, Ministry of the National Guard - Health Affairs, Al Ahsa, Saudi Arabia; King Abdullah International Medical Research Center, Riyadh, Saudi Arabia
- Sami Al-Nasser: Department of Medical Education, College of Medicine, King Saud bin Abdulaziz University for Health Sciences, Riyadh, Saudi Arabia; King Abdullah International Medical Research Center, Riyadh, Saudi Arabia
- Sajida Agha: Department of Medical Education, College of Medicine, King Saud bin Abdulaziz University for Health Sciences, Riyadh, Saudi Arabia; King Abdullah International Medical Research Center, Riyadh, Saudi Arabia
4. Poeppelman RS, Moore-Clingenpeel M, Siems A, Mitchell DL, Jani P, Stewart C. Faculty Decision Making in Ad Hoc Entrustment of Pediatric Critical Care Fellows: A National Case-Based Survey. Teach Learn Med 2023:1-8. PMID: 37933862. DOI: 10.1080/10401334.2023.2269402.
Abstract
Phenomenon: Ad hoc entrustment decisions reflect a clinical supervisor's estimation of the amount of supervision a trainee needs to successfully complete a task in the moment. These decisions have important consequences for patient safety, trainee learning, and preparation for independent practice. Determinants of these decisions have been described previously but not well characterized for acute care contexts such as critical care and emergency medicine. Because critically ill patients and children are vulnerable populations, the ad hoc entrustment of a pediatric critical care medicine (PCCM) fellow is a particularly high-stakes decision that may differ from other contexts. This study sought to characterize how ad hoc entrustment decisions are made for PCCM fellows through faculty ratings of vignettes, investigating how acuity, relationship, training level, and task interact to influence those decisions. Approach: A survey containing 16 vignettes that varied by four traits (acuity, relationship, training level, and task) was distributed to U.S. faculty of pediatric critical care fellowships in 2020. Respondents determined an entrustment level for each case and provided demographic data. Entrustment ratings were dichotomized as "high entrustment" versus "low entrustment" (direct supervision or observation only). The authors used logistic regression to evaluate the individual and interactive effects of the four traits on dichotomized entrustment ratings. Findings: One hundred seventy-eight respondents from 30 institutions completed the survey (44% institutional response rate). Acuity, relationship, and task all significantly influenced the entrustment level selected but did not interact. Faculty most frequently selected "direct supervision" as the entrustment level for vignettes, including for 24% of vignettes describing fellows in their final year of training, and rated the majority of vignettes (61%) as "low entrustment." There was no relationship between faculty or institutional demographics and the entrustment level selected. Insights: As has been found in summative entrustment for pediatrics, internal medicine, and surgery trainees, PCCM fellows were often rated at or below the "direct supervision" level of ad hoc entrustment. This may relate to declining opportunities to practice procedures, a culture of low trust propensity within the specialty, and/or variation in interpretation of entrustment scales.
Affiliation(s)
- Rachel Stork Poeppelman: Department of Pediatrics, Nationwide Children's Hospital, The Ohio State University, Columbus, Ohio, USA
- Melissa Moore-Clingenpeel: Department of Pediatrics, Nationwide Children's Hospital, The Ohio State University, Columbus, Ohio, USA
- Ashley Siems: Department of Pediatrics, Children's National, Washington, DC, USA
- Diana L Mitchell: Department of Pediatrics, Advocate Children's Hospital Park Ridge, Park Ridge, Illinois, USA; Department of Pediatrics, University of Chicago, Chicago, Illinois, USA
- Priti Jani: Department of Pediatrics, University of Chicago, Chicago, Illinois, USA
- Claire Stewart: Department of Pediatrics, Nationwide Children's Hospital, The Ohio State University, Columbus, Ohio, USA
5. Marty AP, Linsenmeyer M, George B, Young JQ, Breckwoldt J, Ten Cate O. Mobile technologies to support workplace-based assessment for entrustment decisions: Guidelines for programs and educators: AMEE Guide No. 154. Med Teach 2023;45:1203-1213. PMID: 36706225. DOI: 10.1080/0142159x.2023.2168527.
Abstract
With the rise of competency-based medical education and workplace-based assessment (WBA) since the turn of the century, much has been written about methods of assessment. Direct observation and other sources of information have become standard in many clinical programs, and entrustable professional activities (EPAs) have become a central focus of assessment in the clinical workplace. Paper and pencil (one of the earliest mobile technologies!) for documenting observations has become almost obsolete with the advent of digital technology. Typically, clinical supervisors are asked to document assessment ratings using forms on computers. However, accessing these forms can be cumbersome and is not easily integrated into existing clinical workflows. With a call for more frequent documentation, this practice is hardly sustainable, and mobile technology is quickly becoming indispensable. Documentation of learner performance at the point of care merges WBA with patient care, and WBA increasingly uses smartphone applications for this purpose. This AMEE Guide was developed to support institutions and programs that wish to use mobile technology to implement EPA-based assessment and, more generally, any type of workplace-based assessment. It covers the background of WBA, EPAs, and entrustment decision-making; provides guidance for choosing or developing mobile technology; discusses challenges; and describes best practices.
Affiliation(s)
- Machelle Linsenmeyer: West Virginia School of Osteopathic Medicine, Lewisburg, WV, United States of America
- Brian George: Surgery and Learning Health Sciences, University of Michigan, Ann Arbor, Michigan, United States of America
- John Q Young: Department of Psychiatry, Donald and Barbara Zucker School of Medicine at Hofstra/Northwell and Zucker Hillside Hospital, NY, United States of America
- Jan Breckwoldt: Institute of Anesthesia, University Hospital Zurich, Switzerland
- Olle Ten Cate: Utrecht Center for Research and Development of Health Professions Education, UMC Utrecht, The Netherlands
6. Perez S, Schwartz A, Hauer KE, Karani R, Hirshfield LE, McNamara M, Henry D, Lupton KL, Woods M, Teherani A. Developing Evidence for Equitable Assessment Characteristics Based on Clinical Learner Preferences Using Discrete Choice Experiments. Acad Med 2023;98:S108-S115. PMID: 37983403. DOI: 10.1097/acm.0000000000005360.
Abstract
PURPOSE Medical education is only beginning to explore the factors that contribute to equitable assessment in clinical settings. Increasing knowledge about equitable assessment helps ensure a quality medical education experience that produces an excellent, diverse physician workforce equipped to address the health care disparities facing patients and communities. Through the lens of the Anti-Deficit Achievement framework, the authors aimed to obtain evidence for a model of equitable assessment in clinical training. METHOD A discrete choice experiment approach was used, with an instrument containing 6 attributes, each at 2 levels, to reveal learner preferences for the inclusion of each attribute in equitable assessment. Self-identified underrepresented in medicine (UIM) and non-underrepresented in medicine (non-UIM) fourth-year medical students and senior residents (N = 306) in medicine, pediatrics, and surgery at 9 institutions across the United States completed the instrument. A mixed-effects logit model was used to determine the attributes learners valued most. RESULTS Participants valued the inclusion of all assessment attributes provided except peer comparison. The most valued attribute of an equitable assessment was how learner identity, background, and trajectory were appreciated by clinical supervisors. The next most valued attributes were assessment of growth, supervisor bias training, narrative assessments, and assessment of the learner's patient care, with participants willing to trade off any of these attributes to get several others. There were no significant differences between UIM and non-UIM learners in the value placed on assessment attributes. Residents valued supervisors' appreciation of learner identity, background, and trajectory, as well as clinical supervisor bias training, more than medical students did. CONCLUSIONS This study offers support for the components of an antideficit-focused model for equity in assessment and informs efforts to promote UIM learner success and guide equity, diversity, and inclusion initiatives in medical education.
Affiliation(s)
- S. Perez is a resident, Department of Pathology, University of California, San Francisco, School of Medicine, San Francisco, California
- A. Schwartz is the Michael Reese Endowed Professor of Medical Education, Department of Medical Education, and research professor, Department of Pediatrics, University of Illinois at Chicago, Chicago, Illinois, and director, Association of Pediatric Program Directors Longitudinal Educational Assessment Research Network (APPD LEARN), McLean, Virginia; ORCID: http://orcid.org/0000-0003-3809-6637
- K.E. Hauer is professor, Department of Medicine, and associate dean for competency assessment and professional standards, University of California, San Francisco, School of Medicine, San Francisco, California; ORCID: https://orcid.org/0000-0002-8812-4045
- R. Karani is professor, Departments of Medicine, Medical Education, and Geriatrics and Palliative Medicine, and director, Institute for Medical Education, Icahn School of Medicine at Mount Sinai, New York, New York
- L.E. Hirshfield is the Dr. Georges Bordage Medical Education Faculty Scholar, associate professor, PhD program codirector, and associate director of graduate studies, Department of Medical Education, University of Illinois College of Medicine, Chicago, Illinois; ORCID: https://orcid.org/0000-0003-0894-2994
- M. McNamara is professor, Department of Pediatrics, and pediatric residency program director, University of California, San Francisco, School of Medicine, San Francisco, California
- D. Henry is associate professor, Department of Pediatrics, University of California, San Francisco, School of Medicine, San Francisco, California
- K.L. Lupton is professor, Department of Medicine, University of California, San Francisco, School of Medicine, San Francisco, California
- M. Woods holds the Dibrell Family Professorship in the Art of Medicine, and is assistant professor, Department of Surgery, and vice dean for academic affairs, John Sealy School of Medicine at the University of Texas Medical Branch, Galveston, Texas
- A. Teherani is professor, Department of Medicine, education scientist, Center for Faculty Educators, director of program evaluation and education continuous quality improvement, and founding codirector, University of California Center for Climate Health and Equity, University of California, San Francisco, School of Medicine, San Francisco, California; ORCID: http://orcid.org/0000-0003-2936-9832
7. Vennemeyer S, Kinnear B, Gao A, Zhu S, Nattam A, Knopp MI, Warm E, Wu DT. User-Centered Evaluation and Design Recommendations for an Internal Medicine Resident Competency Assessment Dashboard. Appl Clin Inform 2023;14:996-1007. PMID: 38122817. PMCID: PMC10733060. DOI: 10.1055/s-0043-1777103.
Abstract
OBJECTIVES Clinical Competency Committee (CCC) members employ varied approaches to the review process, which makes it difficult to design a competency assessment dashboard that fits the needs of all members. This work details a user-centered evaluation of a dashboard currently utilized by the Internal Medicine Clinical Competency Committee (IM CCC) at the University of Cincinnati College of Medicine and generates design recommendations. METHODS Eleven members of the IM CCC participated in semistructured interviews with the research team. These interviews were recorded and transcribed for analysis. The three design research methods used in this study were process mapping (workflow diagrams), affinity diagramming, and a ranking experiment. RESULTS Through affinity diagramming, the research team identified and organized opportunities for improvement in the current system expressed by study participants. These include a time-consuming preprocessing step, lack of integration of data from multiple sources, and different workflows for each step in the review process. Finally, the research team categorized nine dashboard components based on rankings provided by the participants. CONCLUSION We successfully conducted a user-centered evaluation of an IM CCC dashboard and generated four recommendations: programs should integrate quantitative and qualitative feedback, create multiple views to display these data based on user roles, work with designers to create a usable, interpretable dashboard, and develop a strong informatics pipeline to manage the system. To our knowledge, this type of user-centered evaluation has rarely been attempted in the medical education domain, so this study provides best practices for other residency programs seeking to evaluate current competency assessment tools and develop new ones.
Affiliation(s)
- Scott Vennemeyer: Department of Biomedical Informatics, College of Medicine, University of Cincinnati, Ohio, United States
- Benjamin Kinnear: Department of Pediatrics, College of Medicine, University of Cincinnati, Ohio, United States; Department of Internal Medicine, College of Medicine, University of Cincinnati, Ohio, United States
- Andy Gao: Department of Biomedical Informatics, College of Medicine, University of Cincinnati, Ohio, United States; Medical Sciences Baccalaureate Program, College of Medicine, University of Cincinnati, Ohio, United States
- Siyi Zhu: Department of Biomedical Informatics, College of Medicine, University of Cincinnati, Ohio, United States; School of Design, College of Design, Architecture, Art, and Planning (DAAP), University of Cincinnati, Ohio, United States
- Anunita Nattam: Department of Biomedical Informatics, College of Medicine, University of Cincinnati, Ohio, United States; Medical Sciences Baccalaureate Program, College of Medicine, University of Cincinnati, Ohio, United States
- Michelle I. Knopp: Department of Internal Medicine, College of Medicine, University of Cincinnati, Ohio, United States; Division of Hospital Medicine, Cincinnati Children's Hospital Medical Center, Department of Pediatrics, College of Medicine, University of Cincinnati, Ohio, United States
- Eric Warm: Department of Internal Medicine, College of Medicine, University of Cincinnati, Ohio, United States
- Danny T.Y. Wu: Department of Biomedical Informatics, College of Medicine, University of Cincinnati, Ohio, United States; Department of Pediatrics, College of Medicine, University of Cincinnati, Ohio, United States; Medical Sciences Baccalaureate Program, College of Medicine, University of Cincinnati, Ohio, United States; School of Design, College of Design, Architecture, Art, and Planning (DAAP), University of Cincinnati, Ohio, United States
8. Stone SN, Rydberg L. Creating and confirming observable professional activities (OPAs): A brief report on the practical approach for OPA design for resident education. J Spinal Cord Med 2023;46:865-869. PMID: 36972220. PMCID: PMC10446771. DOI: 10.1080/10790268.2023.2191100.
Abstract
CONTEXT The transition of graduate medical education to competency-based education systems has prompted exploration of the efficacy of Entrustable Professional Activities (EPAs) and related Observable Practice Activities (OPAs) as evaluation tools. EPAs were introduced to PM&R in 2017, but no OPAs have been reported for a non-procedurally based EPA. The primary aims of this study were to create and form consensus on OPAs for the Spinal Cord Injury EPA. METHODS A modified Delphi panel of seven experts in the field was utilized to gain consensus on ten PM&R OPAs for the Spinal Cord Injury EPA. RESULTS After the first round of evaluations, most OPAs were judged by the experts as requiring modification (30/70 votes to keep, 34/70 votes to modify), with a majority of comments focusing on the specific content of the OPAs. Edits were made, and after the second round the OPAs were evaluated and determined to be kept (62/70 votes to keep, 6/70 votes to modify), with most remaining edits concerning semantics. There was a significant difference in all three categories between round 1 and round 2 (P < 0.0001), and 10 OPAs were finalized for use. CONCLUSIONS This study created 10 OPAs that can help provide targeted feedback to residents on their competency in caring for patients with spinal cord injury. With regular usage, OPAs are designed to give residents insight into their progression toward independent practice. Future studies should assess the feasibility and utility of implementing the newly developed OPAs.
Affiliation(s)
- Shane N. Stone: Physical Medicine and Rehabilitation, Northwestern University, Chicago, Illinois, USA
- Leslie Rydberg: Physical Medicine and Rehabilitation, Northwestern University, Chicago, Illinois, USA
9. Tanaka P, Marty A, Park YS, Kakazu C, Udani A, Pardo M, Sullivan K, Sandhu C, Turner J, Mitchell J, Macario A. Defining entrustable professional activities for first year anesthesiology residents: A Delphi study. J Clin Anesth 2023. DOI: 10.1016/j.jclinane.2023.111116.
10. Kinnear B, Weber DE, Schumacher DJ, Edje L, Warm EJ, Anderson HL. Reconstructing Neurath's Ship: A Case Study in Reevaluating Equity in a Program of Assessment. Acad Med 2023;98:S50-S56. PMID: 37071695. DOI: 10.1097/acm.0000000000005249.
Abstract
Inequity in assessment has been described as a "wicked problem": an issue with complex roots, inherent tensions, and unclear solutions. To address inequity, health professions educators must critically examine their implicit understandings of truth and knowledge (i.e., their epistemologies) with regard to educational assessment before jumping to solutions. The authors use the analogy of a ship (a program of assessment) sailing on different seas (epistemologies) to describe their journey in seeking to improve equity in assessment. Should the education community repair the ship of assessment while sailing, or should the ship be scrapped and built anew? The authors share a case study of a well-developed internal medicine residency program of assessment and describe efforts to evaluate and enable equity using various epistemological lenses. They first used a postpositivist lens to evaluate whether the systems and strategies aligned with best practices, but found they did not capture important nuances of what equitable assessment entails. Next, they used a constructivist approach to improve stakeholder engagement, but found they still failed to question the inequitable assumptions inherent in their systems and strategies. Finally, they describe a shift to critical epistemologies, seeking to understand who experiences inequity and harm in order to dismantle inequitable systems and create better ones. The authors describe how each sea prompted different adaptations to their ship, and they challenge programs to sail through new epistemological waters as a starting point for making their own ships more equitable.
Affiliation(s)
- B. Kinnear is associate professor of internal medicine and pediatrics, Departments of Pediatrics and Internal Medicine, Cincinnati Children's Hospital Medical Center, University of Cincinnati College of Medicine, Cincinnati, Ohio; ORCID: https://orcid.org/0000-0003-0052-4130
- D.E. Weber is assistant professor of internal medicine and pediatrics, Departments of Pediatrics and Internal Medicine, Cincinnati Children's Hospital Medical Center, University of Cincinnati College of Medicine, Cincinnati, Ohio; ORCID: https://orcid.org/0000-0002-4857-6936
- D.J. Schumacher is tenured professor of pediatrics, Department of Pediatrics, Cincinnati Children's Hospital Medical Center, University of Cincinnati College of Medicine, Cincinnati, Ohio; ORCID: https://orcid.org/0000-0001-5507-8452
- L. Edje is professor of family and community medicine, Department of Medical Education and Family and Community Medicine, University of Cincinnati College of Medicine, Cincinnati, Ohio
- E.J. Warm is professor of internal medicine and program director, Department of Internal Medicine, University of Cincinnati College of Medicine, Cincinnati, Ohio; ORCID: https://orcid.org/0000-0002-6088-2434
- H.L. Anderson is clinical research associate, Department of Pediatrics, Children's Hospital of Philadelphia, Philadelphia, Pennsylvania; ORCID: https://orcid.org/0000-0002-9435-1535
11. Kinnear B, Santen SA, Kelleher M, Martini A, Ferris S, Edje L, Warm EJ, Schumacher DJ. How Does TIMELESS Training Impact Resident Motivation for Learning, Assessment, and Feedback? Evaluating a Competency-Based Time-Variable Training Pilot. Acad Med 2023;98:828-835. PMID: 36656286. DOI: 10.1097/acm.0000000000005147.
Abstract
PURPOSE As competency-based medical education has become the predominant graduate medical education training model, interest in time-variable training has grown. Despite multiple competency-based time-variable training (CBTVT) pilots ongoing in the United States, little is known about how this training approach impacts learners. The authors aim to explore how their CBTVT pilot program impacted resident motivation for learning, assessment, and feedback. METHOD The authors performed a qualitative educational case study on the Transitioning in Internal Medicine Education Leveraging Entrustment Scores Synthesis (TIMELESS) program at the University of Cincinnati from October 2020 through March 2022. Semistructured interviews were conducted with TIMELESS residents (n = 9) approximately every 6 months to capture experiences over time. The authors used inductive thematic analysis to develop themes and compared their findings with existing theories of learner motivation. RESULTS The authors developed 2 themes: TIMELESS had variable effects on residents' motivation for learning and TIMELESS increased resident engagement with and awareness of the program of assessment. Participants reported increased motivation to learn and seek assessment, though some felt a tension between performance (e.g., advancement through the residency program) and growth (e.g., improvement as a physician). Participants became more aware of the quality of assessments they received, in part due to TIMELESS increasing the perceived stakes of assessment, and reported being more deliberate when assessing other residents. CONCLUSIONS Resident motivation for learning, assessment, and feedback was impacted in ways that the authors contextualize using current theories of learner motivation (i.e., goal orientation theory and attribution theory). 
Future research should investigate how interventions, such as coaching, guided learner reflection, or various CBTVT implementation strategies, can help keep learners oriented toward mastery learning rather than toward performance.
Affiliation(s)
- Benjamin Kinnear
- B. Kinnear is associate professor of internal medicine and pediatrics, Department of Pediatrics, University of Cincinnati College of Medicine, Cincinnati, Ohio; ORCID: https://orcid.org/0000-0003-0052-4130
- Sally A Santen
- S.A. Santen is professor of emergency medicine, Department of Emergency Medicine, Virginia Commonwealth University School of Medicine, Richmond, Virginia, and University of Cincinnati College of Medicine, Cincinnati, Ohio; ORCID: https://orcid.org/0000-0002-8327-8002
- Matthew Kelleher
- M. Kelleher is assistant professor of internal medicine and pediatrics, Department of Pediatrics, University of Cincinnati College of Medicine, Cincinnati, Ohio; ORCID: https://orcid.org/0000-0002-6400-1745
- Abigail Martini
- A. Martini is a clinical research coordinator with emergency medicine, Cincinnati Children's Hospital Medical Center, Cincinnati, Ohio
- Sarah Ferris
- S. Ferris is a research administrator, Clinical Trials Unit, Michigan Medicine Research, University of Michigan, Ann Arbor, Michigan
- Louito Edje
- L. Edje is professor of family and community medicine, Departments of Medical Education and of Family and Community Medicine, University of Cincinnati College of Medicine, Cincinnati, Ohio
- Eric J Warm
- E.J. Warm is professor of internal medicine and program director, Department of Internal Medicine, University of Cincinnati College of Medicine, Cincinnati, Ohio; ORCID: https://orcid.org/0000-0002-6088-2434
- Daniel J Schumacher
- D.J. Schumacher is professor of pediatrics, Cincinnati Children's Hospital Medical Center and Department of Pediatrics, University of Cincinnati College of Medicine, Cincinnati, Ohio; ORCID: https://orcid.org/0000-0001-5507-8452
12
Carraccio C, Lentz A, Schumacher DJ. "Dismantling Fixed Time, Variable Outcome Education: Abandoning 'Ready or Not, Here they Come' is Overdue". Perspectives on Medical Education 2023; 12:68-75. [PMID: 36937800 PMCID: PMC10022540 DOI: 10.5334/pme.10]
Abstract
Two decades after competency-based medical education appeared in the lexicon of medical educators, the community continues to struggle with realizing its full potential. The implementation of the time variable, fixed outcome component has languished based on complexity compounded by resistance to change. Learners continue to transition from medical school to residency, and then practice, primarily based on time rather than having achieved the ability to meet the needs of the patient populations they will serve. Only those few who demonstrate glaring deficiencies do not graduate. The authors urge the medical education community to move from the current fixed time path of medical education toward the implementation of a true continuum of time variable, fixed outcome education, training, and deliberate practice. The latter is defined by purposeful learning, coaching, feedback, and repetition on the path to achieving and maintaining expertise. The opportunities afforded by such a time-variable, fixed outcome approach include: 1) development of a career long growth mindset, 2) ability to address evolving population health needs and careers within the context of one's practice, and 3) continual improvement of care quality and outcomes for patients on the journey towards expertise for providers.
Affiliation(s)
- Daniel J. Schumacher
- Cincinnati Children’s Hospital Medical Center and University of Cincinnati College of Medicine, Cincinnati, Ohio, US
13
Cully JL, Schwartz SB, Quinonez R, Martini A, Klein M, Schumacher DJ. Development of entrustable professional activities for post-doctorate pediatric dentistry education. J Dent Educ 2023; 87:6-17. [PMID: 36052829 DOI: 10.1002/jdd.13096]
Abstract
PURPOSE To identify the core components of pediatric dentistry defining entrustable professional activities (EPAs) representing the profession. METHODS Potential core components of pediatric dentistry and corresponding domains were identified through review of literature and existing pediatric dentistry standards. A modified Delphi technique was utilized to rate these candidate EPAs to achieve consensus around prioritized EPAs. RESULTS Eleven participants participated in all three rounds of the Delphi. After three rounds, 16 candidate EPAs reached consensus for pediatric dentistry. Each EPA fell into one of four domains: "assessment and planning," "provision of care," "behavior guidance," and "professional development." An original candidate EPA focused on non-pharmacological behavior guidance was deemed too broad by the Delphi. This EPA was subsequently developed into three separate components on nitrous oxide analgesia, moderate sedation, and general anesthesia. CONCLUSIONS Prioritized EPAs will help define the essential activities of the profession and provide a framework for creating assessments to ensure that graduating pediatric residents are ready for unsupervised practice.
Affiliation(s)
- Jennifer L Cully
- Department of Pediatrics, University of Cincinnati College of Medicine, Cincinnati, Ohio, USA; Division of Pediatric Dentistry, Cincinnati Children's Hospital Medical Center, Cincinnati, Ohio, USA
- Scott B Schwartz
- Department of Pediatrics, University of Cincinnati College of Medicine, Cincinnati, Ohio, USA; Division of Pediatric Dentistry, Cincinnati Children's Hospital Medical Center, Cincinnati, Ohio, USA
- Rocio Quinonez
- Division of Pediatric Dentistry and Public Health, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina, USA
- Abigail Martini
- Division of Emergency Medicine, Cincinnati Children's Hospital Medical Center, Cincinnati, Ohio, USA
- Melissa Klein
- Department of Pediatrics, University of Cincinnati College of Medicine, Cincinnati, Ohio, USA; Division of General and Community Pediatrics, Cincinnati Children's Hospital Medical Center, Cincinnati, Ohio, USA
- Daniel J Schumacher
- Department of Pediatrics, University of Cincinnati College of Medicine, Cincinnati, Ohio, USA; Division of Emergency Medicine, Cincinnati Children's Hospital Medical Center, Cincinnati, Ohio, USA
14
Pizzuti C, Palmieri C, Shaw T. Using eHealth Data to Inform CPD for Medical Practitioners: A Scoping Review with a Consultation Exercise with International Experts. Journal of Continuing Education in the Health Professions 2023; 43:S47-S58. [PMID: 38054492 DOI: 10.1097/ceh.0000000000000534]
Abstract
INTRODUCTION eHealth data analytics is widely used in health care research. However, there is limited knowledge on the role of eHealth data analysis to inform continuing professional development (CPD). The aim of this study was to collate available research evidence on the use of eHealth data for the development of CPD programs and plans for medical practitioners. METHODS A scoping review was conducted using the six-stage Arksey and O'Malley Framework. A consultation exercise (stage 6) was performed with 15 international experts in the fields of learning and practice analytics to deepen the insights. RESULTS Scoping review. The literature searches identified 9876 articles published from January 2010 to May 2022. After screening and full-text review, a total of nine articles were deemed relevant for inclusion. The results provide varied, and at times partial or diverging, answers to the scoping review research questions. Consultation exercise. Research rigor, field of investigation, and developing the field were the three themes that emerged from the analysis. Participants validated the scoping review methodology and confirmed its results. Moreover, they provided a meta-analysis of the literature, a description of the current CPD ecosystem, and clear indications of what is and should be next for the field. DISCUSSION This study shows that there is no formal or well-established correlation between eHealth data and CPD planning and programming. Overall findings fill a gap in the literature and provide a basis for further investigation. More foundational work, multidisciplinary collaborations, and stakeholders' engagement are necessary to advance the use of eHealth data analysis for CPD purposes.
Affiliation(s)
- Carol Pizzuti
- Ms. Pizzuti: Industry PhD Candidate, Faculty of Medicine and Health, The University of Sydney, Camperdown, Australia; and Senior Research Officer, Professional Practice, The Royal Australasian College of Physicians, Sydney, Australia. Dr. Palmieri: Head of Member Learning and Development, Professional Practice, The Royal Australasian College of Physicians, Sydney, Australia; and Faculty of Arts and Social Sciences, The University of Sydney, Camperdown, Australia. Dr. Shaw: Professor of Digital Health, Faculty of Medicine and Health, The University of Sydney, Camperdown, Australia
15
Schauer DP, Kinnear B, Kelleher M, Sall D, Schumacher DJ, Warm EJ. Developing the Expected Entrustment Score: Accounting for Variation in Resident Assessment. J Gen Intern Med 2022; 37:3670-3675. [PMID: 35377114 PMCID: PMC9585130 DOI: 10.1007/s11606-022-07492-7]
Abstract
BACKGROUND Clinical competency committees (CCCs) and residency program leaders may find it difficult to interpret workplace-based assessment (WBA) ratings knowing that contextual factors and bias play a large role. OBJECTIVE We describe the development of an expected entrustment score for resident performance within the context of our well-developed Observable Practice Activity (OPA) WBA system. DESIGN Observational study. PARTICIPANTS Internal medicine residents. MAIN MEASURE Entrustment. KEY RESULTS Each individual resident had observed entrustment scores with a unique relationship to the expected entrustment scores. Many residents' observed scores oscillated closely around the expected scores. However, distinct performance patterns did emerge. CONCLUSIONS We used regression modeling and leveraged large numbers of historical WBA data points to produce an expected entrustment score that served as a guidepost for performance interpretation.
Affiliation(s)
- Daniel P Schauer
- Department of Internal Medicine, University of Cincinnati, PO Box 670535, Cincinnati, OH, 45267-0535, USA.
- Benjamin Kinnear
- Department of Pediatrics, College of Medicine, University of Cincinnati, Cincinnati, OH, USA
- Matthew Kelleher
- Department of Pediatrics, College of Medicine, University of Cincinnati, Cincinnati, OH, USA
- Dana Sall
- HonorHealth Thompson Peak Medical Center, Scottsdale, AZ, USA
- University of Arizona College of Medicine, Phoenix, AZ, USA
- Daniel J Schumacher
- Department of Pediatrics, College of Medicine, Cincinnati Children's Hospital/University of Cincinnati, Cincinnati, OH, USA
- Eric J Warm
- Department of Internal Medicine, University of Cincinnati, PO Box 670535, Cincinnati, OH, 45267-0535, USA
16
Brown DR, Moeller JJ, Grbic D, Andriole DA, Cutrer WB, Obeso VT, Hormann MD, Amiel JM. Comparing Entrustment Decision-Making Outcomes of the Core Entrustable Professional Activities Pilot, 2019-2020. JAMA Netw Open 2022; 5:e2233342. [PMID: 36156144 PMCID: PMC9513644 DOI: 10.1001/jamanetworkopen.2022.33342]
Abstract
IMPORTANCE Gaps in readiness for indirect supervision have been identified for essential responsibilities encountered early in residency, presenting risks to patient safety. Core Entrustable Professional Activities (EPAs) for entering residency have been proposed as a framework to address these gaps and strengthen the transition from medical school to residency. OBJECTIVE To assess progress in developing an entrustment process in the Core EPAs framework. DESIGN, SETTING, AND PARTICIPANTS In this quality improvement study in the Core EPAs for Entering Residency Pilot, trained faculty made theoretical entrustment determinations and recorded the number of workplace-based assessments (WBAs) available for each determination in 2019 and 2020. Four participating schools attempted entrustment decision-making for all graduating students or a randomly selected subset of students. Deidentified, individual-level data were merged into a multischool database. INTERVENTIONS Schools implemented EPA-related curriculum, WBAs, and faculty development; developed systems to compile and display data; and convened groups to make theoretical summative entrustment determinations. MAIN OUTCOMES AND MEASURES On an EPA-specific basis, the percentage of students for whom an entrustment determination could be made, the percentage of students ready for indirect supervision, and the volume of WBAs available were recorded. RESULTS Four participating schools made 4525 EPA-specific readiness determinations (2296 determinations in 2019 and 2229 determinations in 2020) for 732 graduating students (349 students in 2019 and 383 students in 2020). 
Across all EPAs, the proportion of determinations of "ready for indirect supervision" increased from 2019 to 2020 (997 determinations [43.4%] vs 1340 determinations [60.1%]; 16.7 percentage point increase; 95% CI, 13.8-19.6 percentage points; P < .001), as did the proportion of determinations for which there were 4 or more WBAs (456 of 2295 determinations with WBA data [19.9%] vs 938 [42.1%]; 22.2 percentage point increase; 95% CI, 19.6-24.8 percentage points; P < .001). The proportion of EPA-specific data sets considered for which an entrustment determination could be made increased from 1731 determinations (75.4%) in 2019 to 2010 determinations (90.2%) in 2020 (14.8 percentage point increase; 95% CI, 12.6-16.9 percentage points; P < .001). On an EPA-specific basis, there were 5 EPAs (EPA 4 [orders], EPA 8 [handovers], EPA 10 [urgent care], EPA 11 [informed consent], and EPA 13 [patient safety]) for which few students were deemed ready for indirect supervision and for which there were few WBAs available per student in either year. For example, for EPA 13, 0 of 125 students were deemed ready in 2019 and 0 of 127 students were deemed ready in 2020, while 0 determinations in either year included 4 or more WBAs. CONCLUSIONS AND RELEVANCE These findings suggest that there was progress in WBA data collected, the extent to which entrustment determinations could be made, and proportions of entrustment determinations reported as ready for indirect supervision. However, important gaps remained, particularly for a subset of Core EPAs.
Affiliation(s)
- David R. Brown
- Division of Family and Community Medicine, Department of Humanities, Health, and Society, Florida International University Herbert Wertheim College of Medicine, Miami
- Jeremy J. Moeller
- Department of Neurology, Yale University School of Medicine, New Haven, Connecticut
- Douglas Grbic
- Medical Education Research, Association of American Medical Colleges, Washington, District of Columbia
- Dorothy A. Andriole
- Medical Education Research, Association of American Medical Colleges, Washington, District of Columbia
- William B. Cutrer
- Department of Pediatrics, Division of Critical Care Medicine at Vanderbilt University School of Medicine, Nashville, Tennessee
- Vivian T. Obeso
- Division of Internal Medicine, Department of Translational Medicine, Florida International University Herbert Wertheim College of Medicine, Miami
- Mark D. Hormann
- Division of Community and General Pediatrics, Department of Pediatrics, McGovern Medical School at UTHealth, Houston, Texas
- Jonathan M. Amiel
- Dean’s Office, Columbia University Vagelos College of Physicians and Surgeons, New York, New York
- Department of Psychiatry, Columbia University Vagelos College of Physicians and Surgeons, New York, New York
17
Warm EJ, Carraccio C, Kelleher M, Kinnear B, Schumacher DJ, Santen S. The education passport: connecting programmatic assessment across learning and practice. Canadian Medical Education Journal 2022; 13:82-91. [PMID: 36091737 PMCID: PMC9441115 DOI: 10.36834/cmej.73871]
Abstract
Competency-based medical education (CBME) shifts us from static assessment of learning to developmental assessment for learning. However, implementation challenges associated with CBME remain a major hurdle, especially after training and into practice. The full benefit of developmental assessment for learning over time requires collaboration, cooperation, and trust among learners, regulators, and the public that transcends each individual phase. The authors introduce the concept of an "Education Passport" that provides evidence of readiness to travel across the boundaries between undergraduate medical education, graduate medical education, and the expanse of practice. The Education Passport uses programmatic assessment, a process of collecting numerous low stakes assessments from multiple sources over time, judging these data using criterion-referencing, and enhancing this with coaching and competency committees to understand, process, and accelerate growth without end. Information in the Passport is housed on a cloud-based server controlled by the student/physician over the course of training and practice. These data are mapped to various educational frameworks such as Entrustable Professional Activities or milestones for ease of longitudinal performance tracking. At each stage of education and practice the student/physician grants Passport access to all entities that can provide data on performance. Database managers use learning analytics to connect and display information over time that are then used by the student/physician, their assigned or chosen coaches, and review committees to maintain or improve performance. Global information is also collected and analyzed to improve the entire system of learning and care. Developing a true continuum that embraces performance and growth will be a long-term adaptive challenge across many organizations and jurisdictions and will require coordination from regulatory and national agencies.
An Education Passport could also serve as an organizing tool and will require research and high-value communication strategies to maximize public trust in the work.
Affiliation(s)
- Eric J Warm
- Department of Internal Medicine, University of Cincinnati College of Medicine, Ohio, USA
- Correspondence to: Eric J. Warm,
- Matthew Kelleher
- Department of Internal Medicine, University of Cincinnati College of Medicine, Ohio, USA
- Benjamin Kinnear
- Department of Pediatrics, University of Cincinnati College of Medicine, Ohio, USA
- Daniel J Schumacher
- Division of Emergency Medicine, Cincinnati Children's Hospital Medical Center and the University of Cincinnati College of Medicine, Ohio, USA
- Sally Santen
- Division of Emergency Medicine, Cincinnati Children's Hospital Medical Center and the University of Cincinnati College of Medicine, Ohio, USA
- Virginia Commonwealth University, Richmond, Virginia, USA
18
Dunne D, Gielissen K, Slade M, Park YS, Green M. WBAs in UME-How Many Are Needed? A Reliability Analysis of 5 AAMC Core EPAs Implemented in the Internal Medicine Clerkship. J Gen Intern Med 2022; 37:2684-2690. [PMID: 34561828 PMCID: PMC9411433 DOI: 10.1007/s11606-021-07151-3]
Abstract
BACKGROUND Reliable assessments of clinical skills are important for undergraduate medical education, trustworthy handoffs to graduate medical programs, and safe, effective patient care. Entrustable professional activities (EPAs) for entering residency have been developed; research is needed to assess reliability of such assessments in authentic clinical workspaces. DESIGN A student-driven mobile assessment platform was developed and used for clinical supervisors to record ad hoc entrustment decisions using the modified Ottawa scale on 5 core EPAs in an 8-week internal medicine (IM) clerkship. After a 12-month period, generalizability (G) theory analysis was performed to estimate the reliability of entrustment scores and determine the proportion of variance attributable to the student and the other facets, including particular EPA, evaluator type (attending versus resident), or case complexity. Decision (D) theory analysis determined the expected reliability based on the number of hypothetical observations. A g-coefficient of 0.7 was used as a generally agreed upon minimum reliability threshold. KEY RESULTS A total of 1368 ratings over the 5 EPAs were completed on 94 students. Variance attributed to person (true variance) was high for all EPAs; EPA-5 had the lowest person variance (9.8% across cases and four blocks). Across cases, reliability ranged from 0.02 to 0.60. Applying this to the Decision study, the estimated number of observations needed to reach a reliability index of 0.7 ranged between 9 and 11 for all EPAs except EPA5 which was sensitive to case complexity. CONCLUSIONS Workplace-based clinical skills in IM clerkship students were assessed and logged using a convenient mobile platform. Our analysis suggests that 9-11 observations are needed for these EPA workplace-based assessments (WBAs) to achieve a reliability index of 0.7. Note writing was very sensitive to case complexity.
Further reliability analyses of core EPAs are needed before US medical schools consider wider adoption into summative entrustment processes and GME handoffs.
Affiliation(s)
- Dana Dunne
- Department of Internal Medicine, Section of Infectious Diseases, Yale School of Medicine, 15 York Street LMP 1074, New Haven, CT, 06511, USA.
- Katherine Gielissen
- Department of Internal Medicine, Section of General Medicine, Yale School of Medicine, New Haven, CT, 06511, USA
- Martin Slade
- Occupational Medicine, Yale School of Medicine, New Haven, CT, 06511, USA
- Michael Green
- Department of Internal Medicine, Section of General Medicine, Yale School of Medicine, New Haven, CT, 06511, USA
19
Reimagining the Clinical Competency Committee to Enhance Education and Prepare for Competency-Based Time-Variable Advancement. J Gen Intern Med 2022; 37:2280-2290. [PMID: 35445932 PMCID: PMC9021365 DOI: 10.1007/s11606-022-07515-3]
Abstract
Assessing residents and clinical fellows is a high-stakes activity. Effective assessment is important throughout training so that identified areas of strength and weakness can guide educational planning to optimize outcomes. Assessment has historically been underemphasized, although medical education oversight organizations have strengthened requirements in recent years. Growing acceptance of competency-based medical education and its logical extension to competency-based time-variable (CB-TV) graduate medical education (GME) further highlights the importance of implementing effective evidence-based approaches to assessment. The Clinical Competency Committee (CCC) has emerged as a key programmatic structure in graduate medical education. In the context of launching a multi-specialty pilot of CB-TV GME in our health system, we have examined several programs' CCC processes and reviewed the relevant literature to propose enhancements to CCCs. We recommend that all CCCs fulfill three core goals, regularly applied to every GME trainee: (1) discern and describe the resident's developmental status to individualize education, (2) determine readiness for unsupervised practice, and (3) foster self-assessment ability. We integrate the literature and observations from GME program CCCs in our institutions to evaluate how current CCC processes support or undermine these goals. Obstacles and key enablers are identified. Finally, we recommend ways to achieve the stated goals, including the following: (1) assess and promote the development of competency in all trainees, not just outliers, through a shared model of assessment and competency-based advancement; (2) strengthen CCC assessment processes to determine trainee readiness for independent practice; and (3) promote trainee reflection and informed self-assessment. The importance of coaching for competency, robust workplace-based assessments, feedback, and co-production of individualized learning plans are emphasized.
Individual programs and their CCCs must strengthen assessment tools and frameworks to realize the potential of competency-oriented education.
20
Brown DR, Moeller JJ, Grbic D, Biskobing DM, Crowe R, Cutrer WB, Green ML, Obeso VT, Wagner DP, Warren JB, Yingling SL, Andriole DA. Entrustment Decision Making in the Core Entrustable Professional Activities: Results of a Multi-Institutional Study. Academic Medicine 2022; 97:536-543. [PMID: 34261864 DOI: 10.1097/acm.0000000000004242]
Abstract
PURPOSE In 2014, the Association of American Medical Colleges defined 13 Core Entrustable Professional Activities (EPAs) that all graduating students should be ready to do with indirect supervision upon entering residency and commissioned a 10-school, 5-year pilot to test implementing the Core EPAs framework. In 2019, pilot schools convened trained entrustment groups (TEGs) to review assessment data and render theoretical summative entrustment decisions for class of 2019 graduates. Results were examined to determine the extent to which entrustment decisions could be made and the nature of these decisions. METHOD For each EPA considered (4-13 per student), TEGs recorded an entrustment determination (ready, progressing but not yet ready, evidence against student progressing, could not make a decision); confidence in that determination (none, low, moderate, high); and the number of workplace-based assessments (WBAs) considered (0->15) per determination. These individual student-level data were de-identified and merged into a multischool database; chi-square analysis tested the significance of associations between variables. RESULTS The 2,415 EPA-specific determinations (for 349 students by 4 participating schools) resulted in a decision of ready (n = 997/2,415; 41.3%), progressing but not yet ready (n = 558/2,415; 23.1%), or evidence against student progression (n = 175/2,415; 7.2%). No decision could be made for the remaining 28.4% (685/2,415), generally for lack of data. Entrustment determinations' distribution varied across EPAs (chi-square P < .001) and, for 10/13 EPAs, WBA availability was associated with making (vs not making) entrustment decisions (each chi-square P < .05). CONCLUSIONS TEGs were able to make many decisions about readiness for indirect supervision; yet less than half of determinations resulted in a decision of readiness to perform this EPA with indirect supervision. 
More work is needed at the 10 schools to enable authentic summative entrustment in the Core EPAs framework.
Affiliation(s)
- David R Brown
- D.R. Brown is professor, chief, Division of Family and Community Medicine, and interim chair, Department of Humanities, Health, and Society, Florida International University Herbert Wertheim College of Medicine, Miami, Florida; ORCID: http://orcid.org/0000-0002-5361-6664
- Jeremy J Moeller
- J.J. Moeller is associate professor and residency program director, Department of Neurology, Yale University School of Medicine, New Haven, Connecticut; ORCID: https://orcid.org/0000-0002-6135-5572
- Douglas Grbic
- D. Grbic is lead research analyst, Medical Education Research, Association of American Medical Colleges, Washington, DC
- Diane M Biskobing
- D.M. Biskobing is professor of medicine and associate dean of medical education, Virginia Commonwealth University School of Medicine, Richmond, Virginia
- Ruth Crowe
- R. Crowe is director of integrated clinical skills, director of practice of medicine, Office of Medical Education, and associate professor of medicine, New York University Grossman School of Medicine, New York, New York
- William B Cutrer
- W.B. Cutrer is associate dean for undergraduate medical education and associate professor of pediatrics (critical care medicine), Vanderbilt University School of Medicine, Nashville, Tennessee; ORCID: https://orcid.org/0000-0003-1538-9779
- Michael L Green
- M.L. Green is professor of medicine and director of student assessment, Teaching and Learning Center, Yale University School of Medicine, New Haven, Connecticut
- Vivian T Obeso
- V.T. Obeso is associate dean for curriculum and medical education and associate professor, Division of Internal Medicine, Department of Translational Medicine, Florida International University Herbert Wertheim College of Medicine, Miami, Florida
- Dianne P Wagner
- D.P. Wagner is associate dean for undergraduate medical education and professor of medicine, Michigan State University College of Human Medicine, East Lansing, Michigan
- Jamie B Warren
- J.B. Warren is associate professor, Division of Neonatology, and clinical vice chair, Department of Pediatrics, Oregon Health & Science University, Portland, Oregon; ORCID: https://orcid.org/0000-0003-4422-1502
- Sandra L Yingling
- S.L. Yingling is associate dean for educational planning and quality improvement, University of Illinois College of Medicine (Chicago, Peoria, Rockford, and Urbana), Chicago, Illinois
- Dorothy A Andriole
- D.A. Andriole is senior director, Medical Education Research, Association of American Medical Colleges, Washington, DC; ORCID: https://orcid.org/0000-0001-8902-1227
21
Warm EJ, Kinnear B, Lance S, Schauer DP, Brenner J. What Behaviors Define a Good Physician? Assessing and Communicating About Noncognitive Skills. ACADEMIC MEDICINE : JOURNAL OF THE ASSOCIATION OF AMERICAN MEDICAL COLLEGES 2022; 97:193-199. [PMID: 34166233 DOI: 10.1097/acm.0000000000004215] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
Once medical students attain a certain level of medical knowledge, success in residency often depends on noncognitive attributes, such as conscientiousness, empathy, and grit. These traits are significantly more difficult to assess than cognitive performance, creating a potential gap in measurement. Despite its promise, competency-based medical education (CBME) has yet to bridge this gap, partly due to a lack of well-defined noncognitive observable behaviors that assessors and educators can use in formative and summative assessment. As a result, typical undergraduate to graduate medical education handovers stress standardized test scores, and program directors trust little of the remaining information they receive, sometimes turning to third-party companies to better describe potential residency candidates. The authors have created a list of noncognitive attributes, with associated definitions and noncognitive skills-called observable practice activities (OPAs)-written for learners across the continuum to help educators collect assessment data that can be turned into valuable information. OPAs are discrete work-based assessment elements collected over time and mapped to larger structures, such as milestones, entrustable professional activities, or competencies, to create learning trajectories for formative and summative decisions. Medical schools and graduate medical education programs could adapt these OPAs or determine ways to create new ones specific to their own contexts. Once OPAs are created, programs will have to find effective ways to assess them, interpret the data, determine consequence validity, and communicate information to learners and institutions. The authors discuss the need for culture change surrounding assessment-even for the adoption of behavior-based tools such as OPAs-including grounding the work in a growth mindset and the broad underpinnings of CBME. 
Ultimately, improving assessment of noncognitive capacity should benefit learners, schools, programs, and most importantly, patients.
Affiliation(s)
- Eric J Warm
- E.J. Warm is professor of medicine and program director, Department of Internal Medicine, University of Cincinnati College of Medicine, Cincinnati, Ohio; ORCID: https://orcid.org/0000-0002-6088-2434
- Benjamin Kinnear
- B. Kinnear is associate professor of medicine and pediatrics and associate program director, Department of Pediatrics, University of Cincinnati College of Medicine, Cincinnati, Ohio; ORCID: https://orcid.org/0000-0003-0052-4130
- Samuel Lance
- S. Lance is associate professor of plastic surgery and craniofacial surgery and program director of plastic surgery, Division of Plastic Surgery, University of California San Diego, San Diego, California; ORCID: https://orcid.org/0000-0002-5186-2677
- Daniel P Schauer
- D.P. Schauer is associate professor of medicine and associate program director, Department of Internal Medicine, University of Cincinnati College of Medicine, Cincinnati, Ohio; ORCID: https://orcid.org/0000-0003-3264-8154
- Judith Brenner
- J. Brenner is associate professor of science education and medicine and associate dean for curricular integration and assessment, Donald and Barbara Zucker School of Medicine at Hofstra/Northwell, Hempstead, New York; ORCID: https://orcid.org/0000-0002-8697-5401
|
22
|
Kelleher M, Kinnear B, Sall DR, Weber DE, DeCoursey B, Nelson J, Klein M, Warm EJ, Schumacher DJ. Warnings in early narrative assessment that might predict performance in residency: signal from an internal medicine residency program. PERSPECTIVES ON MEDICAL EDUCATION 2021; 10:334-340. [PMID: 34476730 PMCID: PMC8633188 DOI: 10.1007/s40037-021-00681-w] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/05/2020] [Revised: 07/08/2021] [Accepted: 07/11/2021] [Indexed: 05/10/2023]
Abstract
INTRODUCTION Narrative assessment data are valuable in understanding struggles in resident performance. However, it remains unknown which themes in narrative data occurring early in training may indicate a higher likelihood of struggles later in training, which would allow programs to intervene sooner. METHODS Using learning analytics, we identified 26 internal medicine residents across three cohorts who were below expected entrustment during training. We compiled all narrative data from the first 6 months of training for these residents, as well as for 13 typically performing residents for comparison. Narrative data were blinded for all 39 residents during the initial coding phases of an inductive thematic analysis. RESULTS Many similarities were identified between the two cohorts. Codes that differed between typical and lower-entrusted residents were grouped into six themes of two types: three explicit/manifest and three implicit/latent. The explicit/manifest themes focused on specific aspects of resident performance, with assessors describing 1) gaps in attention to detail, 2) communication deficits with patients, and 3) difficulty recognizing the "big picture" in patient care. The three implicit/latent themes, focused on how narrative data were written, were: 1) feedback described as a deficiency rather than an opportunity to improve, 2) normative comparisons identifying a resident as being behind their peers, and 3) warning of possible risk to patient care. DISCUSSION Clinical competency committees (CCCs) usually rely on accumulated data and trends. Using these themes while reviewing narrative comments may help CCCs recognize struggling residents earlier and better allocate resources to support their development.
Affiliation(s)
- Matthew Kelleher
- Department of Pediatrics, University of Cincinnati College of Medicine, Cincinnati, OH, USA
- Benjamin Kinnear
- Department of Pediatrics, University of Cincinnati College of Medicine, Cincinnati, OH, USA
- Dana R Sall
- HonorHealth Internal Medicine Residency Program, Scottsdale, AZ, and University of Arizona College of Medicine, Phoenix, AZ, USA
- Danielle E Weber
- Department of Pediatrics, University of Cincinnati College of Medicine, Cincinnati, OH, USA
- Bailey DeCoursey
- Department of Pediatrics, University of Cincinnati College of Medicine, Cincinnati, OH, USA
- Jennifer Nelson
- Department of Pediatrics, University of Cincinnati College of Medicine, Cincinnati, OH, USA
- Melissa Klein
- Department of Pediatrics, University of Cincinnati College of Medicine, Cincinnati, OH, USA
- Eric J Warm
- Department of Internal Medicine, University of Cincinnati College of Medicine, Cincinnati, OH, USA
- Daniel J Schumacher
- Department of Pediatrics, University of Cincinnati College of Medicine, Cincinnati, OH, USA
|
23
|
Weber DE, Kinnear B, Kelleher M, Klein M, Sall D, Schumacher DJ, Zhang N, Warm E, Schauer DP. Effect of resident and assessor gender on entrustment-based observational assessment in an internal medicine residency program. MEDEDPUBLISH 2021. [DOI: 10.12688/mep.17410.1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/20/2022] Open
Abstract
Background: Implicit gender bias leads to differences in assessment. Studies examining gender differences in resident milestone assessment data demonstrate variable results. The purpose of this study was to determine whether observational entrustment scores differ by resident and assessor gender in a program of assessment based on discrete, observable skills. Methods: We analyzed overall entrustment scores and entrustment scores by Accreditation Council for Graduate Medical Education (ACGME) core competency for 238 residents (49% female) from 396 assessors (38% female) in one internal medicine residency program from July 2012 to June 2019. We conducted analyses at 1-12 months, 1-36 months, 1-6 months, 7-12 months, and 31-36 months. We used linear mixed-effect models to assess the role of resident and assessor gender, with resident-specific and assessor-specific random effects to account for repeated measures. Results: Statistically significant interactions existed between resident and assessor gender for overall entrustment at 1-12 months (p < 0.001), 1-36 months (p < 0.001), 1-6 months (p < 0.001), 7-12 months (p = 0.04), and 31-36 months (p < 0.001). However, group differences were not statistically significant. In several instances the interaction between resident and assessor gender was significant for an ACGME core competency, but there were no statistically significant group differences for any competency at any time point. When applicable, subsequent analysis of the main effect of resident or assessor gender independently revealed no statistically significant differences. Conclusions: No significant differences in entrustment scores were found based on resident or assessor gender in our large, robust entrustment-based program of assessment. Determining the reasons for our findings may help identify ways to mitigate gender bias in assessment.
|
24
|
Weiss PG, Schwartz A, Carraccio CL, Herman BE, Turner DA, Aye T, Fussell JJ, Kesselheim J, Mahan JD, McGann KA, Myers A, Stafford DEJ, Chess PR, Curran ML, Dammann CEL, High P, Hsu DC, Pitts S, Sauer C, Srivastava S, Mink RB. Achieving Entrustable Professional Activities During Fellowship. Pediatrics 2021; 148:peds.2021-050196. [PMID: 34667096 DOI: 10.1542/peds.2021-050196] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Accepted: 07/12/2021] [Indexed: 11/24/2022] Open
Abstract
BACKGROUND AND OBJECTIVES Entrustable Professional Activities (EPAs) were developed to assess pediatric fellows. We previously showed that fellowship program directors (FPDs) may graduate fellows who still require supervision. How this compares with their expectations for entrustment of practicing subspecialists is unknown. METHODS We surveyed US FPDs in 14 pediatric subspecialties through the Subspecialty Pediatrics Investigator Network between April and August 2017. For each of 7 common pediatric subspecialty EPAs, we compared the minimum level of supervision that FPDs required for graduation with the level they expected of subspecialists for safe and effective practice using the Friedman rank sum test and paired t test. We compared differences between subspecialties using linear regression. RESULTS We collected data from 660 FPDs (response rate 82%). For all EPAs, FPDs did not require fellows to reach the level of entrustment for graduation that they expected of subspecialists to practice (P < .001). FPDs expected the least amount of supervision for the EPAs consultation and handovers. Mean differences between supervision levels for graduation and practice were smaller for clinical EPAs (consultation, handovers, lead a team) when compared with nonclinical EPAs (quality improvement, management, lead the profession and scholarship; P = .001) and were similar across nearly all subspecialties. CONCLUSIONS Fellowship graduates may need continued development of clinical and nonclinical skills in their early practice period, underscoring a need for continued assessment and mentoring. Graduation readiness must be based on clear requirements, with alignment of FPD expectations and regulatory standards, to ensure quality care for patients.
Affiliation(s)
- Pnina G Weiss
- Department of Pediatrics, School of Medicine, Yale University, New Haven, Connecticut
- Alan Schwartz
- Departments of Medical Education and Pediatrics, College of Medicine, University of Illinois at Chicago, Chicago, Illinois
- Bruce E Herman
- Department of Pediatrics, School of Medicine, University of Utah, Salt Lake City, Utah
- Tandy Aye
- Department of Pediatrics, School of Medicine, Stanford University, Stanford, California
- Jill J Fussell
- Department of Pediatrics, University of Arkansas for Medical Sciences, Little Rock, Arkansas
- Jennifer Kesselheim
- Department of Pediatrics, Dana-Farber/Boston Children's Cancer and Blood Disorders Center and Harvard Medical School, Harvard University, Boston, Massachusetts
- John D Mahan
- Department of Pediatrics, College of Medicine, The Ohio State University, Columbus, Ohio
- Kathleen A McGann
- Department of Pediatrics, School of Medicine, Duke University, Durham, North Carolina
- Angela Myers
- Department of Pediatrics, Children's Mercy Kansas City and School of Medicine, University of Missouri-Kansas City, Kansas City, Missouri
- Diane E J Stafford
- Department of Pediatrics, School of Medicine, Stanford University, Stanford, California
- Megan L Curran
- Department of Pediatrics, University of Colorado, Denver, Colorado
- Pamela High
- Department of Pediatrics, The Warren Alpert Medical School, Brown University, Providence, Rhode Island
- Deborah C Hsu
- Department of Pediatrics, Baylor College of Medicine and Texas Children's Hospital, Houston, Texas
- Sarah Pitts
- Department of Pediatrics, Harvard Medical School, Harvard University, Boston, Massachusetts
- Cary Sauer
- Department of Pediatrics, School of Medicine, Emory University and Children's Healthcare of Atlanta, Atlanta, Georgia
- Richard B Mink
- Department of Pediatrics, David Geffen School of Medicine, University of California, Los Angeles, Los Angeles, California; The Lundquist Institute for Biomedical Innovation, Harbor-University of California, Los Angeles Medical Center, Torrance, California
|
25
|
Heath JK, Wang T, Santhosh L, Denson JL, Holmboe E, Yamazaki K, Clay AS, Carlos WG. Longitudinal Milestone Assessment Extending Through Subspecialty Training: The Relationship Between ACGME Internal Medicine Residency Milestones and Subsequent Pulmonary and Critical Care Fellowship Milestones. ACADEMIC MEDICINE : JOURNAL OF THE ASSOCIATION OF AMERICAN MEDICAL COLLEGES 2021; 96:1603-1608. [PMID: 34010863 DOI: 10.1097/acm.0000000000004165] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
PURPOSE Accreditation Council for Graduate Medical Education (ACGME) milestones were implemented across medical subspecialties in 2015. Although milestones were proposed as a longitudinal assessment tool potentially providing opportunities for early implementation of individualized fellowship learning plans, the association of subspecialty fellowship ratings with prior residency ratings remains unclear. This study aimed to assess the relationship between internal medicine (IM) residency milestones and pulmonary and critical care medicine (PCCM) fellowship milestones. METHOD A multicenter retrospective cohort analysis was conducted for all PCCM trainees in ACGME-accredited PCCM fellowship programs, 2017-2018, who had complete prior IM milestone ratings from 2014 to 2017. Only professionalism and interpersonal and communication skills (ICS) were included based on shared anchors between IM and PCCM milestones. Using a generalized estimating equations model, the association of PCCM milestones ≤ 2.5 during the first fellowship year with corresponding IM subcompetencies was assessed at each time point, nested by program. Statistical significance was determined using logistic regression. RESULTS The study included 354 unique PCCM fellows. For ICS and professionalism subcompetencies, fellows with higher IM ratings were less likely to obtain PCCM ratings ≤ 2.5 during the first fellowship year. Each ICS subcompetency was significantly associated with future lapses in fellowship (ICS01: β = -0.67, P = .003; ICS02: β = -0.70, P = .001; ICS03: β = -0.60, P = .004) at various residency time points. Similar associations were noted for PROF03 (β = -0.57, P = .007). CONCLUSIONS Findings demonstrated an association between IM milestone ratings and low milestone ratings during PCCM fellowship. IM trainees with low ratings in several professionalism and ICS subcompetencies were more likely to be rated ≤ 2.5 during the first PCCM fellowship year. 
This highlights a potential use of longitudinal milestones to target educational gaps at the beginning of PCCM fellowship.
Affiliation(s)
- Janae K Heath
- J.K. Heath is assistant professor, Department of Medicine, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania; ORCID: https://orcid.org/0000-0002-0533-3088
- Tisha Wang
- T. Wang is associate professor, Department of Medicine, David Geffen School of Medicine, University of California, Los Angeles, Los Angeles, California
- Lekshmi Santhosh
- L. Santhosh is assistant professor, Department of Medicine, University of California, San Francisco, San Francisco, California
- Joshua L Denson
- J.L. Denson is assistant professor, Section of Pulmonary, Critical Care, and Environmental Medicine, Tulane University School of Medicine, New Orleans, Louisiana; ORCID: https://orcid.org/0000-0002-8654-7765
- Eric Holmboe
- E. Holmboe is adjunct professor, Department of Medicine, Yale University, New Haven, Connecticut, and Chief Research, Milestone Development, and Evaluation Officer for the Accreditation Council for Graduate Medical Education, Chicago, Illinois
- Kenji Yamazaki
- K. Yamazaki is senior analyst, Accreditation Council for Graduate Medical Education, Chicago, Illinois
- Alison S Clay
- A.S. Clay is assistant professor, Department of Medicine, Duke University, Durham, North Carolina
- W Graham Carlos
- W.G. Carlos is associate professor, Department of Medicine, Indiana University, Indianapolis, Indiana
|
26
|
Iqbal MZ, Könings KD, Al-Eraky MM, van Merriënboer JJG. Entrustable Professional Activities for Small-Group Facilitation: A Validation Study Using Modified Delphi Technique. TEACHING AND LEARNING IN MEDICINE 2021; 33:536-545. [PMID: 33588650 DOI: 10.1080/10401334.2021.1877714] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
BACKGROUND: Entrustable professional activities (EPAs) provide a novel approach to support teachers' structured professionalization and to assess improvement in teaching competence thereafter. Despite their novelty, it is important to assess EPAs as a construct to ensure that they accurately reflect the work of the targeted profession. The co-creation of an EPA framework for training and entrustment of small-group facilitators has been discussed in the literature. Although a rigorous design process was used to develop the framework, its content validity has not yet been established. APPROACH: A modified Delphi technique was used. Three survey rounds were conducted from December 2019 to April 2020. Expert health professions educationalists were recruited using purposive sampling and snowball techniques. In Round 1, a rubric consisting of seven items was used to assess the quality of nine pre-designed EPAs. In Round 2, competencies required to perform the agreed-upon EPAs were selected from 12 competencies provided. In Round 3, consensus was sought on sub-activities recommended for agreed-upon EPAs. Quantitative data were analyzed using multiple statistical analyses, including item-wise and rubric-wise content validity indices, asymmetric confidence intervals, means, standard deviations, and response frequencies. Qualitative data were analyzed using content analysis. FINDINGS: Three of the nine proposed EPAs achieved statistical consensus for retention. These EPAs were: (1) preparing an activity, (2) facilitating a small-group session, and (3) reflecting upon self and the session. Nine of the 12 pre-determined competencies achieved consensus and were then mapped against each agreed-upon EPA based on their relevance. Finally, results indicated consensus on five, six, and four sub-activities for EPA 1, EPA 2, and EPA 3, respectively. CONCLUSIONS: The final framework delineates three EPAs for small-group facilitation and their associated sub-activities.
The full description of each EPA provided in this article includes the title, context, task specification, required competencies, and entrustment resources. Program developers, administrative bodies, and teaching staff may find this EPA framework useful to structure faculty development, to entrust teachers, and to support personal development.
Affiliation(s)
- Muhammad Zafar Iqbal
- Medical Education Department, College of Medicine, Imam Abdulrahman Bin Faisal University, Dammam, Saudi Arabia
- Karen D Könings
- School of Health Professions Education, Maastricht University, Maastricht, The Netherlands
- Mohamed M Al-Eraky
- Vice-President office of Academic Initiatives, Imam Abdulrahman Bin Faisal University, Dammam, Saudi Arabia
|
27
|
Weller JM, Coomber T, Chen Y, Castanelli DJ. Key dimensions of innovations in workplace-based assessment for postgraduate medical education: a scoping review. Br J Anaesth 2021; 127:689-703. [PMID: 34364651 DOI: 10.1016/j.bja.2021.06.038] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/21/2021] [Revised: 05/31/2021] [Accepted: 06/20/2021] [Indexed: 11/28/2022] Open
Abstract
BACKGROUND Specialist training bodies continue to devise innovative methods of gathering information on trainee workplace performance to meet the requirements of competency-based medical education. We reviewed recent innovations in workplace-based assessment (WBA) tools to identify strengths, weaknesses, and trade-offs inherent in their design and use. METHODS In this scoping review, using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines, we systematically searched databases between 2009 and 2019 for WBA tools with novel characteristics not typically seen in traditional WBAs. These included innovations in rating scales, ways of collecting information, technological innovations, ways of triggering WBAs, and approaches to compiling and using information. RESULTS We identified 30 innovative WBA tools whose characteristics could be categorised into seven dimensions: frequency of assessment, granularity (unit of performance assessed), coverage of the curriculum, rating method, initiation of the WBA, information use, and incentives. These dimensions had multiple interdependencies and trade-offs, often balancing generating assessment data with available resources. Philosophical stance on assessment also influenced WBA choice, for example prioritising trainee-centred learning (i.e. initiation of WBA and transparency of assessment data), perceptions of assessment and feedback as burdensome or beneficial, and holistic vs reductionist views on assessment of performance. CONCLUSIONS Our synthesis of the literature on innovative WBAs provides a framework for categorising tool characteristics across seven dimensions, systematically teasing apart the considerations in design and use of workplace assessments. It also draws attention to the trade-offs inherent in tool design and selection, and enables a more deliberate consideration of the tool characteristics most appropriate to the local context.
Affiliation(s)
- Jennifer M Weller
- Centre for Medical and Health Sciences Education, School of Medicine, University of Auckland, Auckland, New Zealand; Department of Anaesthesia, Auckland City Hospital, Auckland, New Zealand
- Ties Coomber
- Centre for Medical and Health Sciences Education, School of Medicine, University of Auckland, Auckland, New Zealand
- Yan Chen
- Centre for Medical and Health Sciences Education, School of Medicine, University of Auckland, Auckland, New Zealand
- Damian J Castanelli
- School of Clinical Sciences at Monash Health, Monash University, Clayton, VIC, Australia
|
28
|
Kinnear B, Kelleher M, May B, Sall D, Schauer DP, Schumacher DJ, Warm EJ. Constructing a Validity Map for a Workplace-Based Assessment System: Cross-Walking Messick and Kane. ACADEMIC MEDICINE : JOURNAL OF THE ASSOCIATION OF AMERICAN MEDICAL COLLEGES 2021; 96:S64-S69. [PMID: 34183604 DOI: 10.1097/acm.0000000000004112] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
PROBLEM Health professions education has shifted to a competency-based paradigm in which many programs rely heavily on workplace-based assessment (WBA) to produce data for summative decisions about learners. However, WBAs are complex and require validity evidence beyond psychometric analysis. Here, the authors describe their use of a rhetorical argumentation process to develop a map of validity evidence for summative decisions in an entrustment-based WBA system. APPROACH To organize evidence, the authors cross-walked 2 contemporary validity frameworks, one that emphasizes sources of evidence (Messick) and another that stresses inferences in an argument (Kane). They constructed a validity map using 4 steps: (1) Asking critical questions about the stated interpretation and use, (2) Seeking validity evidence as a response, (3) Categorizing evidence using both Messick's and Kane's frameworks, and (4) Building a visual representation of the collected and organized evidence. The authors used an iterative approach, adding new critical questions and evidence over time. OUTCOMES The first map draft produced 25 boxes of evidence that included all 5 sources of evidence detailed by Messick and spread across all 4 inferences described by Kane. The rhetorical question-response process allowed for structured critical appraisal of the WBA system, leading to the identification of evidentiary gaps. NEXT STEPS Future map iterations will integrate evidence quality indicators and allow for deeper dives into the evidence. The authors intend to share their map with graduate medical education stakeholders (e.g., accreditors, institutional leaders, learners, patients) to understand if it adds value for evaluating their WBA programs' validity arguments.
Affiliation(s)
- Benjamin Kinnear
- B. Kinnear is associate professor of internal medicine and pediatrics, Department of Pediatrics, University of Cincinnati College of Medicine, Cincinnati, Ohio; ORCID: https://orcid.org/0000-0003-0052-4130
- Matthew Kelleher
- M. Kelleher is assistant professor of internal medicine and pediatrics, Department of Pediatrics, University of Cincinnati College of Medicine, Cincinnati, Ohio
- Brian May
- B. May is assistant professor of internal medicine and pediatrics, Department of Internal Medicine, University of Alabama Birmingham School of Medicine, Birmingham, Alabama
- Dana Sall
- D. Sall is program director, HonorHealth Internal Medicine Residency Program, Scottsdale, Arizona, and assistant professor of internal medicine, University of Arizona College of Medicine, Phoenix, Arizona
- Daniel P Schauer
- D.P. Schauer is associate professor of internal medicine and associate program director, Department of Internal Medicine, University of Cincinnati College of Medicine, Cincinnati, Ohio; ORCID: https://orcid.org/0000-0003-3264-8154
- Daniel J Schumacher
- D.J. Schumacher is associate professor of pediatrics at Cincinnati Children's Hospital Medical Center/University of Cincinnati College of Medicine, Cincinnati, Ohio; ORCID: https://orcid.org/0000-0001-5507-8452
- Eric J Warm
- E.J. Warm is professor of internal medicine and program director, Department of Internal Medicine, University of Cincinnati College of Medicine, Cincinnati, Ohio; ORCID: https://orcid.org/0000-0002-6088-2434
|
29
|
Kinnear B, Warm EJ, Caretta-Weyer H, Holmboe ES, Turner DA, van der Vleuten C, Schumacher DJ. Entrustment Unpacked: Aligning Purposes, Stakes, and Processes to Enhance Learner Assessment. ACADEMIC MEDICINE : JOURNAL OF THE ASSOCIATION OF AMERICAN MEDICAL COLLEGES 2021; 96:S56-S63. [PMID: 34183603 DOI: 10.1097/acm.0000000000004108] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
Educators use entrustment, a common framework in competency-based medical education, in multiple ways, including frontline assessment instruments, learner feedback tools, and group decision making within promotions or competence committees. Within these multiple contexts, entrustment decisions can vary in purpose (i.e., intended use), stakes (i.e., perceived risk or consequences), and process (i.e., how entrustment is rendered). Each of these characteristics can be conceptualized as having 2 distinct poles: (1) purpose has formative and summative, (2) stakes has low and high, and (3) process has ad hoc and structured. For each characteristic, entrustment decisions often do not fall squarely at one pole or the other, but rather lie somewhere along a spectrum. While distinct, these continua can, and sometimes should, influence one another, and can be manipulated to optimally integrate entrustment within a program of assessment. In this article, the authors describe each of these continua and depict how key alignments between them can help optimize value when using entrustment in programmatic assessment within competency-based medical education. As they think through these continua, the authors will begin and end with a case study to demonstrate the practical application as it might occur in the clinical learning environment.
Affiliation(s)
- B. Kinnear is associate professor of internal medicine and pediatrics, Department of Pediatrics, University of Cincinnati College of Medicine, Cincinnati, Ohio; ORCID: https://orcid.org/0000-0003-0052-4130
- E.J. Warm is professor of internal medicine and program director, Department of Internal Medicine, University of Cincinnati College of Medicine, Cincinnati, Ohio; ORCID: https://orcid.org/0000-0002-6088-2434
- H. Caretta-Weyer is assistant professor of emergency medicine, Department of Emergency Medicine, Stanford University School of Medicine, Palo Alto, California; ORCID: https://orcid.org/0000-0002-9783-5797
- E.S. Holmboe is chief, research, milestones development and evaluation officer, Accreditation Council for Graduate Medical Education, Chicago, Illinois; ORCID: https://orcid.org/0000-0003-0108-6021
- D.A. Turner is vice president, Competency-Based Medical Education, American Board of Pediatrics, Chapel Hill, North Carolina
- C. van der Vleuten is professor of education, Department of Educational Development and Research, Faculty of Health, Medicine and Life Sciences, Maastricht University, Maastricht, The Netherlands; ORCID: https://orcid.org/0000-0001-6802-3119
- D.J. Schumacher is associate professor of pediatrics, Cincinnati Children's Hospital Medical Center/University of Cincinnati College of Medicine, Cincinnati, Ohio; ORCID: https://orcid.org/0000-0001-5507-8452
30
Young JQ, Holmboe ES, Frank JR. Competency-Based Assessment in Psychiatric Education: A Systems Approach. Psychiatr Clin North Am 2021; 44:217-235. [PMID: 34049645] [DOI: 10.1016/j.psc.2020.12.005]
Abstract
Medical education programs are failing to meet the health needs of patients and communities. Misalignments exist on multiple levels, including content (what trainees learn), pedagogy (how trainees learn), and culture (why trainees learn). To address these challenges effectively, competency-based assessment (CBA) for psychiatric medical education must simultaneously produce lifelong learners who can self-regulate their own growth and trustworthy processes that determine and accelerate readiness for independent practice. The key to doing so is situating assessment within a carefully designed system of several critical, interacting components: workplace-based assessment, ongoing faculty development, learning analytics, longitudinal coaching, and fit-for-purpose clinical competency committees.
Affiliation(s)
- John Q Young: Department of Psychiatry, Donald and Barbara Zucker School of Medicine at Hofstra/Northwell and the Zucker Hillside Hospital at Northwell Health, Glen Oaks, NY, USA
- Eric S Holmboe: Accreditation Council for Graduate Medical Education, 401 North Michigan Avenue, Chicago, IL 60611, USA
- Jason R Frank: Royal College of Physicians and Surgeons of Canada, 774 Echo Drive, Ottawa, Ontario K1S 5N8, Canada; Department of Emergency Medicine, University of Ottawa, Ottawa, Ontario, Canada
31
Young JQ, Frank JR, Holmboe ES. Advancing Workplace-Based Assessment in Psychiatric Education: Key Design and Implementation Issues. Psychiatr Clin North Am 2021; 44:317-332. [PMID: 34049652] [DOI: 10.1016/j.psc.2021.03.005]
Abstract
With the adoption of competency-based medical education, assessment has shifted from the traditional classroom domains of knows and knows how to the workplace domain of doing. This workplace-based assessment has 2 purposes: assessment of learning (summative feedback) and assessment for learning (formative feedback). What the trainee does becomes the basis for identifying growth edges and determining readiness for advancement and, ultimately, independent practice. High-quality workplace-based assessment programs require thoughtful choices about the framework of assessment, the tools themselves, the platforms used, and the contexts in which the assessments take place, with an emphasis on direct observation.
Affiliation(s)
- John Q Young: Department of Psychiatry, Donald and Barbara Zucker School of Medicine at Hofstra/Northwell and Zucker Hillside Hospital at Northwell Health, 75-59 263rd Street, Kaufman Building, Glen Oaks, NY 11004, USA
- Jason R Frank: Department of Emergency Medicine, University of Ottawa; Royal College of Physicians and Surgeons of Canada, 774 Echo Drive, Ottawa, Ontario K1S 5N8, Canada
- Eric S Holmboe: Accreditation Council for Graduate Medical Education (ACGME), 401 North Michigan Avenue, Chicago, IL 60611, USA
32
See One, Do One, Forget One: Early Skill Decay After Paracentesis Training. J Gen Intern Med 2021; 36:1346-1351. [PMID: 32968968] [PMCID: PMC8131447] [DOI: 10.1007/s11606-020-06242-x]
Abstract
INTRODUCTION Internal medicine residents perform paracentesis, but programs lack standard methods for assessing competence or maintenance of competence and instead rely on the number of procedures completed. This study describes differences in resident competence in paracentesis over time. METHODS From 2016 to 2017, internal medicine residents (n = 118) underwent paracentesis simulation training. Competence was assessed using the Paracentesis Competency Assessment Tool (PCAT), which combines a checklist, global scale, and entrustment score. The PCAT also delineates two categorical cut-point scores: the Minimum Passing Standard (MPS) and the Unsupervised Practice Standard (UPS). Residents were randomized to return to the simulation lab at 3 and 6 months (group A, n = 60) or only at 6 months (group B, n = 58). At each session, faculty raters assessed resident performance. Data were analyzed to compare resident performance at each session with initial training scores, and to compare performance between groups at 6 months. RESULTS After initial training, all residents met the MPS. The number achieving the UPS did not differ between groups: group A = 24 (40%), group B = 20 (34.5%), p = 0.67. When group A was retested at 3 months, performance on each PCAT component significantly declined, as did the proportion of residents meeting the MPS and UPS. At the 6-month test, residents in group A performed significantly better than residents in group B, with 52 (89.7%) and 20 (34.5%) achieving the MPS and UPS, respectively, in group A compared with 25 (46.3%) and 2 (3.7%) in group B (p < .001 for both comparisons). DISCUSSION Skill in paracentesis declines as early as 3 months after training. However, retraining may help interrupt skill decay. Only a small proportion of residents met the UPS 6 months after training.
This suggests using the PCAT to objectively measure competence would reclassify residents from being permitted to perform paracentesis independently to needing further supervision.
33
Kinnear B, Srinivas N, Jerardi K. Striking While the Iron Is Hot: Using the Updated PHM Competencies in Time-Variable Training. J Hosp Med 2021; 16:251-253. [PMID: 33734982] [DOI: 10.12788/jhm.3611]
Affiliation(s)
- Benjamin Kinnear: Department of Pediatrics, Cincinnati Children's Hospital Medical Center, University of Cincinnati College of Medicine, Cincinnati, Ohio
- Nivedita Srinivas: Department of Pediatrics, Lucile Packard Children's Hospital, Stanford University School of Medicine, Stanford, California
- Karen Jerardi: Department of Pediatrics, Cincinnati Children's Hospital Medical Center, University of Cincinnati College of Medicine, Cincinnati, Ohio
34
White K, Qualtieri J, Courville EL, Beck RC, Alobeid B, Czuchlewski DR, Teruya-Feldstein J, Soma LA, Prakash S, Gratzinger D. Entrustable Professional Activities in Hematopathology Fellowship Training: Consensus Design and Proposal. Acad Pathol 2021; 8:2374289521990823. [PMID: 33644302] [PMCID: PMC7894592] [DOI: 10.1177/2374289521990823]
Abstract
Hematopathology fellowship education has grown in complexity as patient-centered treatment plans have come to depend on integration of clinical, morphologic, immunophenotypic, molecular, and cytogenetic variables. This complexity is in competition with the need for timely hematopathology care with stewardship of patient, laboratory, and societal resources. Accreditation Council for Graduate Medical Education Milestones provide a guidance document for hematopathology training, but fellows and their educators are in need of a simple framework that allows assessment and feedback of growth toward independent hematopathology practice. Entrustable professional activities provide one such framework, and herein, we provide proposed Hematopathology Fellowship Entrustable Professional Activities based on review of pertinent guidelines and literature, with multiple rounds of expert and stakeholder input utilizing a modified mini-Delphi approach. Ten core entrustable professional activities deemed essential for graduating hematopathology fellows were developed together with skills and knowledge statements, example scenarios, and corresponding Accreditation Council for Graduate Medical Education Milestones. Application of these entrustable professional activities in program design, fellow evaluation, and decisions regarding level of supervision is discussed with consideration of benefits and barriers to implementation. These entrustable professional activities may be used by hematopathology fellowship directors and faculty to provide fellows with timely constructive feedback, determine entrustment decisions, provide the Clinical Competency Committee with granular data to support Milestone evaluations, and provide insight into areas of potential improvement in fellowship training. Fellows will benefit from a clear roadmap to independent hematopathology practice with concrete and timely feedback.
Affiliation(s)
- Kristie White: Department of Laboratory Medicine, University of California at San Francisco School of Medicine, San Francisco, CA, USA
- Julianne Qualtieri: Department of Pathology and Laboratory Medicine, University of Pennsylvania School of Medicine, Philadelphia, PA, USA
- Elizabeth L. Courville: Department of Pathology, University of Virginia School of Medicine, Charlottesville, VA, USA
- Rose C. Beck: Department of Pathology, University Hospitals of Cleveland/Case Western Reserve University School of Medicine, Cleveland, OH, USA
- Bachir Alobeid: Department of Pathology and Cell Biology, Columbia University, New York, NY, USA
- David R. Czuchlewski: Department of Pathology, University of New Mexico School of Medicine, Albuquerque, NM, USA
- Julie Teruya-Feldstein: Department of Pathology, Molecular, and Cell-Based Medicine, Icahn School of Medicine, Mount Sinai Health System, New York, NY, USA
- Lorinda A. Soma: Department of Laboratory Medicine and Pathology, University of Washington School of Medicine, Seattle, WA, USA
- Sonam Prakash: Department of Laboratory Medicine, University of California at San Francisco School of Medicine, San Francisco, CA, USA
- Dita Gratzinger: Department of Pathology, Stanford University School of Medicine, 300 Pasteur Drive, L235, Stanford, CA 94305, USA
35
Teherani A, Perez S, Muller-Juge V, Lupton K, Hauer KE. A Narrative Study of Equity in Clinical Assessment Through the Antideficit Lens. Acad Med 2020; 95:S121-S130. [PMID: 33229956] [DOI: 10.1097/acm.0000000000003690]
Abstract
PURPOSE Efforts to address inequities in medical education are centered on a dialogue of deficits that highlight negative underrepresented in medicine (UIM) learner experiences and lower performance outcomes. An alternative narrative explores perspectives on achievement and equity in assessment. This study sought to understand UIM learner perceptions of successes and equitable assessment practices. METHOD Using narrative research, investigators selected a purposeful sample of self-identified UIM fourth-year medical students and senior-level residents and conducted semistructured interviews. Questions elicited personal stories of achievement during clinical training, clinical assessment practices that captured achievement, and equity in clinical assessment. Using re-storying and thematic analysis, investigators coded transcripts and synthesized data into themes and representative stories. RESULTS Twenty UIM learners (6 medical students and 14 residents) were interviewed. Learners often thought about equity during clinical training and provided personal definitions of equity in assessment. Learners shared stories that reflected their achievements in patient care, favorable assessment outcomes, and growth throughout clinical training. Sound assessments that captured achievements included frequent observations with real-time feedback on predefined expectations by supportive, longitudinal clinical supervisors. Finally, equitable assessment systems were characterized as sound assessment systems that also avoided comparison to peers, used narrative assessment, assessed patient care and growth, trained supervisors to avoid bias, and acknowledged learner identity. CONCLUSIONS UIM learners characterized equitable and sound assessment systems that captured achievements during clinical training. These findings guide future efforts to create an inclusive, fair, and equitable clinical assessment experience.
Affiliation(s)
- A. Teherani is professor, Department of Medicine, education scientist, Center for Faculty Educators, and director of program evaluation, University of California, San Francisco, School of Medicine, San Francisco, California; ORCID: http://orcid.org/0000-0003-2936-9832
- S. Perez is a medical student, University of California, San Francisco, School of Medicine, San Francisco, California
- V. Muller-Juge is associate specialist, University of California, San Francisco, School of Medicine, San Francisco, California; ORCID: https://orcid.org/0000-0002-2346-8904
- K. Lupton is associate professor, Department of Medicine, University of California, San Francisco, School of Medicine, San Francisco, California
- K.E. Hauer is professor, Department of Medicine, and associate dean for competency assessment and professional standards, University of California, San Francisco, School of Medicine, San Francisco, California; ORCID: https://orcid.org/0000-0002-8812-4045
36
Dagnone JD, Chan MK, Meschino D, Bandiera G, den Rooyen C, Matlow A, McEwen L, Scheele F, St Croix R. Living in a World of Change: Bridging the Gap From Competency-Based Medical Education Theory to Practice in Canada. Acad Med 2020; 95:1643-1646. [PMID: 32079931] [DOI: 10.1097/acm.0000000000003216]
Abstract
Within graduate medical education, many educators are experiencing a climate of significant change. One transformation, competency-based medical education (CBME), is occurring simultaneously across much of the world, and implementation will require navigating numerous tensions and paradoxes. Successful transformation requires many types of power and is most likely to happen when the medical education community of professionals is engaged in designing, experimenting, acting, and sensemaking together. In this complex climate, the craft of change facilitators and community leaders is needed more than ever. National top-down policies and structures, while important, are not sufficient. The operationalization of new advances is best done when local leaders are afforded room to shape their local context. An evidence-based approach to thinking about the transformative change associated with CBME needs to be adopted. In this age of entrustment, 3 priorities are paramount: (1) engage, entrust, and empower professionals with increasing shared ownership of the innovation; (2) better prepare education professionals in leadership and transformational change techniques in the complex system of medical education; and (3) leverage the wider community of practice to maximize local CBME customization. These recommendations, although based largely on the Canadian experience, are intended to inform CBME transformation in any context.
Affiliation(s)
- J.D. Dagnone is associate professor of emergency medicine and competency-based medical education faculty lead, Queen's University, Kingston, Ontario, Canada; ORCID: http://orcid.org/0000-0001-6963-7948
- M.-K. Chan is associate professor and clinician educator of pediatrics and child health, University of Manitoba, Winnipeg, Manitoba, Canada
- D. Meschino is assistant professor, Department of Psychiatry, University of Toronto (Women's College Hospital), Toronto, Ontario, Canada
- G. Bandiera is professor of emergency medicine and associate dean of postgraduate medical education, University of Toronto, Toronto, Ontario, Canada
- C. den Rooyen is an educationalist and change manager, Utrecht, the Netherlands
- A. Matlow is faculty lead, strategic initiatives, postgraduate medical education, University of Toronto, Toronto, Ontario, Canada
- L. McEwen is director of assessment and evaluation for postgraduate medical education, Queen's University, Kingston, Ontario, Canada
- F. Scheele is professor of health systems innovation and education, Athena Institute, VU University and Amsterdam UMC, and a practicing clinician, obstetrics and gynecology, OLVG Hospital, Amsterdam, the Netherlands
- R. St. Croix is change advisor, The Royal College of Physicians and Surgeons of Canada, Ottawa, Ontario, Canada
37
Holmboe ES, Yamazaki K, Hamstra SJ. The Evolution of Assessment: Thinking Longitudinally and Developmentally. Acad Med 2020; 95:S7-S9. [PMID: 32769451] [DOI: 10.1097/acm.0000000000003649]
Abstract
Becoming a physician or other health care professional is a complex and intensely developmental process occurring over a prolonged period of time. The learning path for each medical student, resident, and fellow varies due to different individual learner abilities and curricular designs, clinical contexts, and assessments used by the training program. The slow and uneven evolution to outcomes-based medical education is partly the result of inadequate approaches to programmatic assessment that do not fully address all essential core competencies needed for practice or account for the developmental nature of training. Too many assessments in medical education still focus on single point-in-time performance or function as indirect proxies for actual performance in clinical care for patients and families. Milestones are a modest first step toward providing predictive, longitudinal data on a national scale. Longitudinal Milestones data can facilitate the continuous improvement efforts of programs in assessment. However, Milestone judgments are only as good as the assessment data and group processes that inform them. Programmatic assessment should be longitudinally focused and provide all learners with comprehensive and actionable data to guide their professional development and support creation of meaningful individualized action plans. Efforts are urgently needed to rebalance programmatic assessment away from an overreliance on assessment proxies toward more effectively using developmentally focused work-based assessments, routinely incorporating clinical performance and patient experience data, and partnering with learners through iterative coproduced assessment activities.
Affiliation(s)
- E.S. Holmboe is chief, Research, Milestones Development and Evaluation, Accreditation Council for Graduate Medical Education, Chicago, Illinois
- K. Yamazaki is senior analyst, Research, Milestones Development and Evaluation, Accreditation Council for Graduate Medical Education, Chicago, Illinois
- S.J. Hamstra is vice president for outcomes research, Research, Milestones Development and Evaluation, Accreditation Council for Graduate Medical Education, Chicago, Illinois
38
Validity of entrustment scales within anesthesiology residency training. Can J Anaesth 2020; 68:53-63. [PMID: 33083924] [DOI: 10.1007/s12630-020-01823-0]
Abstract
INTRODUCTION Competency-based medical education requires robust assessment in authentic clinical environments. Within work-based assessment, entrustment scales have emerged as a means of describing a trainee's ability to perform competently. Nevertheless, the psychometric properties of entrustment-based assessment are relatively unknown, particularly in anesthesiology. This study assessed the generalizability and extrapolation evidence for entrustment scales within a program of assessment during anesthesiology training. METHODS Entrustment scores were collected during the first seven blocks of training for three resident cohorts. Entrustment scores were assessed during daily evaluations using a Clinical Case Assessment Tool (CCAT) in the preoperative, intraoperative, and postoperative settings. The reliability of the entrustment scale was estimated using generalizability theory. Spearman's correlations measured the relationship between median entrustment scores and percentile scores on the Anesthesia Knowledge Test (AKT)-1 and AKT-6, mean Objective Structured Clinical Examination (OSCE) scores, and rankings of performance by the Clinical Competence Committee (CCC). RESULTS Analyses were derived from 2,309 CCATs from 35 residents. The reliability or generalizability (G) coefficient of the entrustment scale was 0.73 (95% confidence interval [CI], 0.70 to 0.76), and the internal consistency was 0.86 (95% CI, 0.84 to 0.88). Intraoperative entrustment scores significantly correlated with the AKT-6 (rho = 0.51, P = 0.01), mean OSCE scores (rho = 0.45, P = 0.04), and CCC performance rankings (rho = 0.52, P = 0.006). CONCLUSION As part of an assessment program, entrustment scales used early during anesthesiology training showed evidence of validity. Intraoperative entrustment scores had good reliability and acceptable internal consistency. Interpreting entrustment scores in this setting may constitute a valuable adjunct complementing traditional summative evaluations.
39
Young JQ, McClure M. Fast, Easy, and Good: Assessing Entrustable Professional Activities in Psychiatry Residents With a Mobile App. Acad Med 2020; 95:1546-1549. [PMID: 32271227] [DOI: 10.1097/acm.0000000000003390]
Abstract
PROBLEM Entrustable professional activities (EPAs) can be used to operationalize competency-based medical education. Mobile apps can efficiently capture feedback based on direct observation. To leverage the benefits of both, the authors developed an assessment tool that combines EPAs with mobile technology. APPROACH The authors designed an app to collect EPA data based on direct observation using human-technology interface guidelines. Data collected in the app included: name of resident, the 13 end-of-training EPAs for psychiatry, entrustment ratings, and corrective narrative feedback. The app was implemented in an outpatient continuity clinic for second-year psychiatry residents over a 10-month period between September 2017 and June 2018. Ten faculty-resident dyads piloted the app. To assess the feasibility, utility, and validity of this intervention, the authors examined 3 outcomes: (1) utilization (mean time to complete each assessment; percentage of dyads who completed 10 assessments), (2) quality of the comments (proportion of comments that were behaviorally specific and actionable), and (3) correlation between entrustment level and resident experience (defined as days elapsed since the beginning of the experience). OUTCOMES A total of 99 assessments were completed during the pilot. Mean time to complete an assessment was 76 seconds (standard deviation = 50 seconds, median = 67 seconds). Only 6 of the 10 dyads completed at least 10 assessments. Of all comments, 95% (94) were behaviorally specific and actionable and 91% (90) were corrective. Entrustment scores correlated moderately with resident experience (r = 0.43, P < .001). NEXT STEPS The authors' EPA mobile app was efficient, generated high-quality feedback, and produced entrustment scores that improved as the residents gained experience. Challenges included uneven adoption. Looking forward, the authors plan to examine the enablers and barriers to adoption from an implementation science perspective.
Affiliation(s)
- J.Q. Young is professor, Department of Psychiatry, Donald and Barbara Zucker School of Medicine at Hofstra/Northwell, Hempstead, New York
- M. McClure is chief resident, Department of Psychiatry, Donald and Barbara Zucker School of Medicine at Hofstra/Northwell, Hempstead, New York
40
Rencic J, Schuwirth LWT, Gruppen LD, Durning SJ. A situated cognition model for clinical reasoning performance assessment: a narrative review. Diagnosis (Berl) 2020; 7:227-240. [PMID: 32352400] [DOI: 10.1515/dx-2019-0106]
Abstract
Background Clinical reasoning performance assessment is challenging because it is a complex, multi-dimensional construct. In addition, clinical reasoning performance can be impacted by contextual factors, leading to significant variation in performance. This phenomenon called context specificity has been described by social cognitive theories. Situated cognition theory, one of the social cognitive theories, posits that cognition emerges from the complex interplay of human beings with each other and the environment. It has been used as a valuable conceptual framework to explore context specificity in clinical reasoning and its assessment. We developed a conceptual model of clinical reasoning performance assessment based on situated cognition theory. In this paper, we use situated cognition theory and the conceptual model to explore how this lens alters the interpretation of articles or provides additional insights into the interactions between the assessee, patient, rater, environment, assessment method, and task. Methods We culled 17 articles from a systematic literature search of clinical reasoning performance assessment that explicitly or implicitly demonstrated a situated cognition perspective to provide an "enriched" sample with which to explore how contextual factors impact clinical reasoning performance assessment. Results We found evidence for dyadic, triadic, and quadratic interactions between different contextual factors, some of which led to dramatic changes in the assessment of clinical reasoning performance, even when knowledge requirements were not significantly different. Conclusions The analysis of the selected articles highlighted the value of a situated cognition perspective in understanding variations in clinical reasoning performance assessment. 
Prospective studies that evaluate the impact of modifying various contextual factors, while holding others constant, can provide deeper insights into the mechanisms by which context impacts clinical reasoning performance assessment.
Affiliation(s)
- Joseph Rencic: Department of Medicine, Boston University School of Medicine, 72 East Concord Street, Boston, MA 02118, USA
- Lambert W T Schuwirth: Prideaux Centre for Research in Health Professions Education, Flinders University, Flinders, Australia
- Larry D Gruppen: Department of Medical Education, University of Michigan, Ann Arbor, MI, USA
- Steven J Durning: Department of Medicine, Uniformed Services University of the Health Sciences, Bethesda, MD, USA
41
Young JQ, Sugarman R, Schwartz J, McClure M, O'Sullivan PS. A mobile app to capture EPA assessment data: Utilizing the consolidated framework for implementation research to identify enablers and barriers to engagement. Perspect Med Educ 2020; 9:210-219. [PMID: 32504446] [PMCID: PMC7459074] [DOI: 10.1007/s40037-020-00587-z]
Abstract
INTRODUCTION Mobile apps that utilize the framework of entrustable professional activities (EPAs) to capture and deliver feedback are being implemented. If EPA apps are to be successfully incorporated into programmatic assessment, a better understanding of how they are experienced by the end-users will be necessary. The authors conducted a qualitative study using the Consolidated Framework for Implementation Research (CFIR) to identify enablers and barriers to engagement with an EPA app. METHODS Structured interviews of faculty and residents were conducted with an interview guide based on the CFIR. Transcripts were independently coded by two study authors using directed content analysis. Differences were resolved via consensus. The study team then organized codes into themes relevant to the domains of the CFIR. RESULTS Eight faculty and 10 residents chose to participate in the study. Both faculty and residents found the app easy to use and effective in facilitating feedback immediately after the observed patient encounter. Faculty appreciated how the EPA app forced brief, distilled feedback. Both faculty and residents expressed positive attitudes and perceived the app as aligned with the department's philosophy. Barriers to engagement included faculty not understanding the EPA framework and scale, competing clinical demands, residents preferring more detailed feedback and both faculty and residents noting that the app's feedback should be complemented by a tool that generates more systematic, nuanced, and comprehensive feedback. Residents rarely if ever returned to the feedback after initial receipt. DISCUSSION This study identified key enablers and barriers to engagement with the EPA app. The findings provide guidance for future research and implementation efforts focused on the use of mobile platforms to capture direct observation feedback.
Affiliation(s)
- John Q Young
- Donald and Barbara Zucker School of Medicine at Hofstra/Northwell and the Zucker Hillside Hospital at Northwell Health, Hempstead, NY, USA
- Rebekah Sugarman
- Donald and Barbara Zucker School of Medicine at Hofstra/Northwell and the Zucker Hillside Hospital at Northwell Health, Hempstead, NY, USA
- Jessica Schwartz
- Donald and Barbara Zucker School of Medicine at Hofstra/Northwell and the Zucker Hillside Hospital at Northwell Health, Hempstead, NY, USA
- Matthew McClure
- Donald and Barbara Zucker School of Medicine at Hofstra/Northwell and the Zucker Hillside Hospital at Northwell Health, Hempstead, NY, USA
- Patricia S O'Sullivan
- Department of Medicine, University of California at San Francisco School of Medicine, San Francisco, USA
42
A Fellow Assessment Tool for The Pediatric Gastroenterology Entrustable Professional Activities: A Great Start! J Pediatr Gastroenterol Nutr 2020; 71:4-5. [PMID: 32404752 DOI: 10.1097/mpg.0000000000002765] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 12/10/2022]
43
Ginsburg S, Kogan JR, Gingerich A, Lynch M, Watling CJ. Taken Out of Context: Hazards in the Interpretation of Written Assessment Comments. ACADEMIC MEDICINE : JOURNAL OF THE ASSOCIATION OF AMERICAN MEDICAL COLLEGES 2020; 95:1082-1088. [PMID: 31651432 DOI: 10.1097/acm.0000000000003047] [Citation(s) in RCA: 19] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/25/2023]
Abstract
PURPOSE Written comments are increasingly valued for assessment; however, a culture of politeness and the conflation of assessment with feedback lead to ambiguity. Interpretation requires reading between the lines, which is untenable with large volumes of qualitative data. For computer analytics to help with interpreting comments, the factors influencing interpretation must be understood. METHOD Using constructivist grounded theory, the authors interviewed 17 experienced internal medicine faculty at 4 institutions between March and July, 2017, asking them to interpret and comment on 2 sets of words: those that might be viewed as "red flags" (e.g., good, improving) and those that might be viewed as signaling feedback (e.g., should, try). Analysis focused on how participants ascribed meaning to words. RESULTS Participants struggled to attach meaning to words presented acontextually. Four aspects of context were deemed necessary for interpretation: (1) the writer; (2) the intended and potential audiences; (3) the intended purpose(s) for the comments, including assessment, feedback, and the creation of a permanent record; and (4) the culture, including norms around assessment language. These contextual factors are not always apparent; readers must balance the inevitable need to interpret others' language with the potential hazards of second-guessing intent. CONCLUSIONS Comments are written for a variety of intended purposes and audiences, sometimes simultaneously; this reality creates dilemmas for faculty attempting to interpret these comments, with or without computer assistance. Attention to context is essential to reduce interpretive uncertainty and ensure that written comments can achieve their potential to enhance both assessment and feedback.
Affiliation(s)
- S. Ginsburg is professor of medicine, Department of Medicine, Faculty of Medicine, University of Toronto; scientist, Wilson Centre for Research in Education, University Health Network, University of Toronto, Toronto, Ontario, Canada; and Canada Research Chair in Health Professions Education. ORCID: http://orcid.org/0000-0002-4595-6650
- J.R. Kogan is professor of medicine, Department of Medicine, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania
- A. Gingerich is assistant professor, Northern Medical Program, University of Northern British Columbia, Prince George, British Columbia, Canada. ORCID: http://orcid.org/0000-0001-5765-3975
- M. Lynch is postdoctoral fellow, Dalla Lana School of Public Health, University of Toronto, Toronto, Ontario, Canada
- C.J. Watling is professor, Department of Clinical Neurological Sciences; scientist, Centre for Education Research and Innovation; and associate dean of postgraduate medical education, Schulich School of Medicine and Dentistry, Western University, London, Ontario, Canada. ORCID: http://orcid.org/0000-0001-9686-795X
44
Ten Cate O, Dahdal S, Lambert T, Neubauer F, Pless A, Pohlmann PF, van Rijen H, Gurtner C. Ten caveats of learning analytics in health professions education: A consumer's perspective. MEDICAL TEACHER 2020; 42:673-678. [PMID: 32150499 DOI: 10.1080/0142159x.2020.1733505] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
A group of 22 medical educators from different European countries, gathered in a meeting in Utrecht in July 2019, discussed the topic of learning analytics (LA) in an open conversation and addressed its definition, its purposes, and its potential risks for learners and teachers. LA was seen as a significant advance with important potential to improve education, but the group felt that the potential drawbacks of using LA may yet be under-exposed in the literature. After transcription and interpretation of the discussion's conclusions, a document was drafted and fed back to the group in two rounds to arrive at a series of 10 caveats educators should be aware of when developing and using LA, including too much standardized learning, with undue consequences of over-efficiency and pressure on learners and teachers, and a decrease in the variety of 'valid' learning resources. Learning analytics may misalign with eventual clinical performance and can run the risk of privacy breaches and inescapability of documented failures. These consequences may not happen, but the authors, on behalf of the full group of educators, felt it worthwhile to signal these caveats from a consumer's perspective.
Affiliation(s)
- Olle Ten Cate
- Center for Research and Development of Education, University Medical Center Utrecht, Utrecht, The Netherlands
- Thomas Lambert
- Kepler University Hospital Linz, Johannes Kepler University Linz, Linz, Austria
- Florian Neubauer
- Institute for Medical Education, University of Bern, Bern, Switzerland
- Anina Pless
- Institute of Primary Health Care (BIHAM), University of Bern, Bern, Switzerland
- Harold van Rijen
- Center for Research and Development of Education, University Medical Center Utrecht, Utrecht, The Netherlands
- Corinne Gurtner
- Institute of Animal Pathology, Vetsuisse Faculty Bern, University of Bern, Bern, Switzerland
45
Kelleher M, Kinnear B, Wong SEP, O'Toole J, Warm E. Linking Workplace-Based Assessment to ACGME Milestones: A Comparison of Mapping Strategies in Two Specialties. TEACHING AND LEARNING IN MEDICINE 2020; 32:194-203. [PMID: 31530183 DOI: 10.1080/10401334.2019.1653764] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
Construct: The construct assessed is competency in Pediatrics and Internal Medicine residency training. Background: The Accreditation Council for Graduate Medical Education (ACGME) created milestones to measure learner progression toward competence over time but not as direct assessment tools. Ideal measurement of resident performance includes direct observation and assessment of patient care skills in the workplace. Residency programs have linked these concepts by mapping workplace-based assessments to the milestones of ACGME subcompetencies. Mapping is a subjective process, and little is known about specific techniques or the resulting consequences of mapping program-specific assessment data to larger frameworks of competency. Approach: In this article, the authors compare and contrast the techniques used to link workplace-based assessments called Observable Practice Activities (OPAs) to ACGME subcompetencies in two large academic residency programs from different specialties (Internal Medicine and Pediatrics). Descriptive analysis explored the similarities and differences in the assessment data generated by mapping assessment items to larger frameworks of competency. Results: Each program assessed the core competencies with similar frequencies. The largest discrepancy between the two specialties was the assessment of Medical Knowledge, which Internal Medicine assessed twice as often. Pediatrics also assessed the core competency Systems-based Practice almost twice as often as Internal Medicine. Both programs had several subcompetencies that were assessed more or less often than what appeared to be emphasized by the blueprint of mapping. Despite using independent mapping processes, both programs mapped each OPA to approximately three subcompetencies. Conclusions: Mapping workplace-based assessments to the ACGME subcompetencies allowed each program to see the whole of their curricula in ways that were not possible before and to identify existing curricular and assessment gaps. Although each program used similar assessment tools, the assessment data generated were different. The lessons learned in this work could inform other programs attempting to link their own workplace-based assessment elements to ACGME subcompetencies.
Affiliation(s)
- Matthew Kelleher
- Department of Pediatrics, University of Cincinnati College of Medicine, Cincinnati, Ohio, USA
- Benjamin Kinnear
- Department of Pediatrics, University of Cincinnati College of Medicine, Cincinnati, Ohio, USA
- Sue E Poynter Wong
- Department of Pediatrics, University of Cincinnati College of Medicine, Cincinnati, Ohio, USA
- Jennifer O'Toole
- Department of Pediatrics, University of Cincinnati College of Medicine, Cincinnati, Ohio, USA
- Eric Warm
- Department of Internal Medicine, University of Cincinnati College of Medicine, Cincinnati, Ohio, USA
46
Weber DE, Held JD, Jandarov RA, Kelleher M, Kinnear B, Sall D, O'Toole JK. Development and Establishment of Initial Validity Evidence for a Novel Tool for Assessing Trainee Admission Notes. J Gen Intern Med 2020; 35:1078-1083. [PMID: 31993944 PMCID: PMC7174454 DOI: 10.1007/s11606-020-05669-6] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 09/19/2019] [Accepted: 01/12/2020] [Indexed: 10/25/2022]
Abstract
BACKGROUND Documentation is a key component of practice, yet few curricula have been published to teach trainees proper note construction. Additionally, a gold standard for assessing note quality does not exist, and no documentation assessment tools integrate with established competency-based frameworks. OBJECTIVE To develop and establish initial validity evidence for a novel tool that assesses key components of trainee admission notes and maps to the Accreditation Council for Graduate Medical Education (ACGME) milestone framework. DESIGN Using an iterative, consensus building process we developed the Admission Note Assessment Tool (ANAT). Pilot testing was performed with both the supervising attending and study team raters not involved in care of the patients. The finalized tool was piloted with attendings from other institutions. PARTICIPANTS Local experts participated in tool development and pilot testing. Additional attending physicians participated in pilot testing. MAIN MEASURES Content, response process, and internal structure validity evidence was gathered using Messick's framework. Inter-rater reliability was assessed using percent agreement. KEY RESULTS The final tool consists of 16 checklist items and two global assessment items. Pilot testing demonstrated rater agreement of 72% to 100% for checklist items and 63% to 70% for global assessment items. Note assessment required an average of 12.3 min (SD 3.7). The study generated validity evidence in the domains of content, response process, and internal structure for use of the tool in rating admission notes. CONCLUSIONS The ANAT assesses individual components of a note, incorporates billing criteria, targets note "bloat," allows for narrative feedback, and provides global assessments mapped to the ACGME milestone framework. The ANAT can be used to assess admission notes by any attending and at any time after note completion with minimal rater training. The ANAT allows programs to implement routine note assessment for multiple functions with the use of a single tool.
Affiliation(s)
- Danielle E Weber
- Department of Internal Medicine, University of Cincinnati College of Medicine, University of Cincinnati Medical Center, 231 Albert Sabin Way, ML 0535, Cincinnati, OH, 45267, USA; Department of Pediatrics, Cincinnati Children's Hospital Medical Center, University of Cincinnati College of Medicine, Cincinnati, OH, USA
- Justin D Held
- Department of Internal Medicine, University of Cincinnati College of Medicine, University of Cincinnati Medical Center, 231 Albert Sabin Way, ML 0535, Cincinnati, OH, 45267, USA
- Roman A Jandarov
- Department of Internal Medicine, University of Cincinnati College of Medicine, University of Cincinnati Medical Center, 231 Albert Sabin Way, ML 0535, Cincinnati, OH, 45267, USA
- Matthew Kelleher
- Department of Internal Medicine, University of Cincinnati College of Medicine, University of Cincinnati Medical Center, 231 Albert Sabin Way, ML 0535, Cincinnati, OH, 45267, USA; Department of Pediatrics, Cincinnati Children's Hospital Medical Center, University of Cincinnati College of Medicine, Cincinnati, OH, USA
- Ben Kinnear
- Department of Internal Medicine, University of Cincinnati College of Medicine, University of Cincinnati Medical Center, 231 Albert Sabin Way, ML 0535, Cincinnati, OH, 45267, USA; Department of Pediatrics, Cincinnati Children's Hospital Medical Center, University of Cincinnati College of Medicine, Cincinnati, OH, USA
- Dana Sall
- Department of Internal Medicine, University of Cincinnati College of Medicine, University of Cincinnati Medical Center, 231 Albert Sabin Way, ML 0535, Cincinnati, OH, 45267, USA
- Jennifer K O'Toole
- Department of Internal Medicine, University of Cincinnati College of Medicine, University of Cincinnati Medical Center, 231 Albert Sabin Way, ML 0535, Cincinnati, OH, 45267, USA; Department of Pediatrics, Cincinnati Children's Hospital Medical Center, University of Cincinnati College of Medicine, Cincinnati, OH, USA
47
Larrabee JG. Entrustable Professional Activities: Correlation of Entrustment Assessments of Pediatric Residents With Concurrent Subcompetency Milestones Ratings. J Grad Med Educ 2020; 12:66-73. [PMID: 32089796 PMCID: PMC7012520 DOI: 10.4300/jgme-d-19-00408.1] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 06/07/2019] [Revised: 09/19/2019] [Accepted: 10/23/2019] [Indexed: 01/14/2023] Open
Abstract
BACKGROUND In competency-based medical education, subcompetency milestones represent a theoretical stepwise description for a resident to move from the level of novice to expert. Despite their ubiquitous use in the assessment of residents, they were not designed for that purpose. Because entrustable professional activities (EPAs) require observable behaviors, they could serve as a potential link between clinical observation of residents and competency-based assessment. OBJECTIVE We hypothesized that global faculty-of-resident entrustment ratings would correlate with concurrent subcompetency milestones-based assessments. METHODS This prospective study evaluated the correlation between concurrent entrustment assessments and subcompetency milestones ratings. Pediatric residents were assessed in 4 core rotations (pediatric intensive care unit, neonatal intensive care unit, general inpatient, and continuity clinic) at 3 different residency training programs during the 2014-2015 academic year. Subcompetencies were mapped to rotation-specific EPAs, and shared assessments were utilized across the 3 programs. RESULTS We compared 29,143 pairs of entrustment levels and corresponding subcompetency levels from 630 completed assessments. Pearson correlation coefficients demonstrated statistical significance for all pairs (P < .001). Multivariate linear regression models produced R-squared values that demonstrated strong correlation between mapped EPA levels and corresponding subcompetency milestones ratings (median R² = 0.81; interquartile range 0.73-0.83; P < .001). CONCLUSIONS This study demonstrates a strong association between assessment of EPAs and subcompetency milestones assessment, providing a link between entrustment decisions and assessment of competence. Our data support creating resident assessment tools where multiple subcompetencies can be mapped and assessed by a smaller set of rotation-specific EPAs.
48
Abstract
Milestones specific to orthopaedic surgical training document individual resident progress through skill development in multiple dimensions. Residents increasingly interact with and are assessed by surgeons in both academic and private practice environments. Milestones describe the skills that support competence. One of the primary goals of milestones is to provide continuous data for educational quality improvement of residency programs. They provide a dialogue between surgeons who supervise residents or fellows and the program's Clinical Competency Committee throughout a resident's education. The orthopaedic milestones were developed jointly by the Accreditation Council for Graduate Medical Education and the American Board of Orthopaedic Surgery. The working team was designed with broad representation within the specialty. The milestones were introduced to orthopaedic residencies in 2013. Orthopaedics is a 5-year training program; the first comprehensive longitudinal data set is now available for study. This summary provides historical perspective on the development of the milestones, state of current milestone implementation, attempts to establish validity, challenges with the milestones, and the development of next-generation assessment tools.
49
How Do Thresholds of Principle and Preference Influence Surgeon Assessments of Learner Performance? Ann Surg 2019; 268:385-390. [PMID: 28463897 DOI: 10.1097/sla.0000000000002284] [Citation(s) in RCA: 19] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/21/2023]
Abstract
OBJECTIVE The present study asks whether intraoperative principles are shared among faculty in a single residency program and explores how surgeons' individual thresholds between principles and preferences might influence assessment. BACKGROUND Surgical education continues to face significant challenges in the implementation of intraoperative assessment. Competency-based medical education assumes the possibility of a shared standard of competence, but intersurgeon variation is prevalent and, at times, valued in surgical education. Such procedural variation may pose problems for assessment. METHODS An entire surgical division (n = 11) was recruited to participate in video-guided interviews. Each surgeon assessed intraoperative performance in 8 video clips from a single laparoscopic radical left nephrectomy performed by a senior learner (>PGY5). Interviews were audio recorded, transcribed, and analyzed using the constant comparative method of grounded theory. RESULTS Surgeons' responses revealed 5 shared generic principles: choosing the right plane, knowing what comes next, recognizing normal and abnormal, making safe progress, and handling tools and tissues appropriately. The surgeons, however, disagreed both on whether a particular performance upheld a principle and on how the performance could improve. This variation subsequently shaped their reported assessment of the learner's performance. CONCLUSIONS The findings of the present study provide the first empirical evidence to suggest that surgeons' attitudes toward their own procedural variations may be an important influence on the subjectivity of intraoperative assessment in surgical education. Assessment based on intraoperative entrustment may harness such subjectivity for the purpose of implementing competency-based surgical education.
50
Emke AR. Workplace-Based Assessments Using Pediatric Critical Care Entrustable Professional Activities. J Grad Med Educ 2019; 11:430-438. [PMID: 31440338 PMCID: PMC6699545 DOI: 10.4300/jgme-d-18-01006.1] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 01/23/2019] [Revised: 04/22/2019] [Accepted: 05/29/2019] [Indexed: 12/22/2022] Open
Abstract
BACKGROUND Workplace-based assessment (WBA) is critical to graduating competent physicians. Developing assessment tools that combine the needs of faculty, trainees, and governing bodies is challenging but imperative. Entrustable professional activities (EPAs) are emerging as a clinically oriented framework for trainee assessment. OBJECTIVE We sought to develop an EPA-based WBA tool for pediatric critical care medicine (PCCM) fellows. The goals of the tool were to promote learning through benchmarking and tracking entrustment. METHODS A single PCCM EPA was iteratively subdivided into observable practice activities (OPAs) based on national and local data. Using a mixed-methods approach following van der Vleuten's conceptual model for assessment tool utility and Messick's unified validity framework, we sought validity evidence for acceptability, content, internal structure, relation to other variables, response process, and consequences. RESULTS Evidence was gathered after 1 year of use. Items for assessment were based on the correlation between the number of times each item was assessed and the frequency with which the professional activity occurred. Phi-coefficient reliability was 0.65. Narrative comments demonstrated that all factors influencing trust identified by the current literature were cited when determining the level of entrustment granted. Mean entrustment levels increased significantly between fellow training years (P = .001). Compliance for once- and twice-weekly tool completion was 50% and 100%, respectively. Average time spent completing the assessment was less than 5 minutes. CONCLUSIONS Using an EPA-OPA framework, we demonstrated utility and validity evidence supporting the tool's outcomes. In addition, narrative comments about entrustment decisions provide important insights for the training program to improve individual fellow advancement toward autonomy.