1
Woodworth GE, Goldstein ZT, Ambardekar AP, Arthur ME, Bailey CF, Booth GJ, Carney PA, Chen F, Duncan MJ, Fromer IR, Hallman MR, Hoang T, Isaak R, Klesius LL, Ladlie BL, Mitchell SA, Miller Juve AK, Mitchell JD, McGrath BJ, Shepler JA, Sims CR, Spofford CM, Tanaka PP, Maniker RB. Development and Pilot Testing of a Programmatic System for Competency Assessment in US Anesthesiology Residency Training. Anesth Analg 2024; 138:1081-1093. PMID: 37801598. DOI: 10.1213/ane.0000000000006667.
Abstract
BACKGROUND: In 2018, a set of entrustable professional activities (EPAs) and procedural skills assessments was developed for anesthesiology training, but they did not assess all the Accreditation Council for Graduate Medical Education (ACGME) milestones. The aims of this study were to (1) remap the 2018 EPA and procedural skills assessments to the revised ACGME Anesthesiology Milestones 2.0, (2) develop new assessments that, combined with the original assessments, create a system of assessment addressing all level 1 to 4 milestones, and (3) provide evidence for the validity of the assessments.
METHODS: Using a modified Delphi process, a panel of anesthesiology education experts remapped the original assessments developed in 2018 to the Anesthesiology Milestones 2.0 and developed new assessments to create a system that assessed all level 1 through 4 milestones. Following a 24-month pilot at 7 institutions, the number of EPA and procedural skill assessments and mean scores were computed at the end of the academic year. Milestone achievement and subcompetency data for assessments from a single institution were compared to scores assigned by the institution's clinical competency committee (CCC).
RESULTS: New assessment development, 2 months of testing and feedback, and revisions resulted in 5 new EPAs, 11 nontechnical skills assessments (NTSAs), and 6 objective structured clinical examinations (OSCEs). Combined with the original 20 EPAs and procedural skills assessments, the new system of assessment addresses 99% of level 1 to 4 Anesthesiology Milestones 2.0. During the 24-month pilot, aggregate mean EPA and procedural skill scores significantly increased with year in training. System subcompetency scores correlated significantly with 15 of 23 (65.2%) corresponding CCC scores at a single institution, but 8 correlations (36.4%) were <0.30, illustrating poor correlation.
CONCLUSIONS: A panel of experts developed a set of EPAs, procedural skill assessments, NTSAs, and OSCEs to form a programmatic system of assessment for anesthesiology residency training in the United States. The method used to develop and pilot test the assessments, the progression of assessment scores with time in training, and the correlation of assessment scores with CCC scoring of milestone achievement provide evidence for the validity of the assessments.
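The correlation analysis described in the RESULTS (system subcompetency scores versus CCC milestone scores, with coefficients below 0.30 treated as poor) can be illustrated with a minimal Python sketch. This is not the study's code; the data layout, subcompetency labels, and score values are hypothetical assumptions made for illustration only.

```python
# Minimal sketch (hypothetical data): correlating per-subcompetency milestone scores
# derived from a programmatic assessment system with scores assigned by a clinical
# competency committee (CCC). The 0.30 cut-off mirrors the "poor correlation" threshold
# mentioned in the abstract; all other values are invented.
from statistics import correlation  # Pearson's r, Python 3.10+

# One list of resident scores per subcompetency (1-5 milestone scale), hypothetical values.
system_scores = {
    "PC1": [2.5, 3.0, 3.5, 4.0],
    "MK1": [2.0, 2.5, 3.5, 3.5],
}
ccc_scores = {
    "PC1": [2.5, 3.5, 3.5, 4.0],
    "MK1": [3.0, 2.5, 3.0, 4.0],
}

for subcompetency, sys_vals in system_scores.items():
    r = correlation(sys_vals, ccc_scores[subcompetency])  # Pearson correlation coefficient
    label = "poor" if r < 0.30 else "acceptable"
    print(f"{subcompetency}: r = {r:.2f} ({label})")
```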
Affiliation(s)
- Glenn E Woodworth
- Department of Anesthesiology and Perioperative Medicine, Oregon Health & Science University, Portland, Oregon
- Zachary T Goldstein
- Department of Anesthesiology, Cedars Sinai Medical Center, Los Angeles, California
- Aditee P Ambardekar
- Department of Anesthesiology and Pain Management, University of Texas Southwestern Medical Center, Dallas, Texas
- Mary E Arthur
- Department of Anesthesiology and Perioperative Medicine, Medical College of Georgia at Augusta University, Augusta, Georgia
- Caryl F Bailey
- Department of Anesthesiology and Perioperative Medicine, Medical College of Georgia at Augusta University, Augusta, Georgia
- Gregory J Booth
- Uniformed Services University of the Health Sciences, Department of Anesthesiology and Pain Medicine, Naval Medical Center Portsmouth, Portsmouth, Virginia
- Patricia A Carney
- Division of Hospital Medicine, Department of Family Medicine and Internal Medicine, Oregon Health & Science University, Portland, Oregon
- Fei Chen
- Department of Anesthesiology, University of North Carolina at Chapel Hill School of Medicine, Chapel Hill, North Carolina
- Michael J Duncan
- Department of Anesthesiology, University of Missouri-Kansas City, Kansas City, Missouri
- Ilana R Fromer
- Department of Anesthesiology, University of Minnesota, Minneapolis, Minnesota
- Matthew R Hallman
- Department of Anesthesiology and Pain Medicine, University of Washington, Seattle, Washington
- Thomas Hoang
- Department of Anesthesiology and Perioperative Medicine, Oregon Health & Science University, Portland, Oregon
- Robert Isaak
- Department of Anesthesiology, University of North Carolina, Chapel Hill, North Carolina
- Lisa L Klesius
- Department of Anesthesiology, University of Wisconsin School of Medicine and Public Health, Madison, Wisconsin
- Beth L Ladlie
- Department of Anesthesiology, Mayo Clinic, Rochester, Minnesota
- Amy K Miller Juve
- Department of Anesthesiology and Perioperative Medicine, Oregon Health & Science University, Portland, Oregon
- John D Mitchell
- Department of Anesthesiology, Critical Care, and Perioperative Medicine, Henry Ford Health, Detroit, Michigan
- Brian J McGrath
- Department of Anesthesiology, University of Florida College of Medicine-Jacksonville, Jacksonville, Florida
- John A Shepler
- Department of Anesthesiology, University of Wisconsin School of Medicine and Public Health, Madison, Wisconsin
- Charles R Sims
- Department of Anesthesiology & Perioperative Medicine, Mayo Clinic, Rochester, Minnesota
- Christina M Spofford
- Department of Anesthesiology, Medical College of Wisconsin, Milwaukee, Wisconsin
- Pedro P Tanaka
- Department of Anesthesiology, Stanford University, Stanford, California
- Robert B Maniker
- Department of Anesthesiology, Columbia University, New York, New York
2
Barbagallo C, Osborne K, Dempsey C. Implementation of a programmatic assessment model in radiation oncology medical physics training. J Appl Clin Med Phys 2024; 25:e14354. PMID: 38620004. PMCID: PMC11087179. DOI: 10.1002/acm2.14354.
Abstract
PURPOSE: In 2019, a formal review and update of the current training program for medical physics residents/registrars in Australasia was conducted. The purpose was to ensure the program met current local clinical and technological requirements, to improve standardization of training across Australia and New Zealand, and to generate a dynamic curriculum and programmatic assessment model.
METHODS: A four-phase project was initiated, including a consultant desktop review of the current program and stakeholder consultation. Overarching program outcomes on which to base the training model were developed, with content experts used to update the scientific content. Finally, assessment specialists reviewed a range of assessment models to determine appropriate assessment methods for each learning outcome, creating a model of programmatic assessment.
RESULTS: The first phase identified a need for increased standardized assessment incorporating programmatic assessment. Seven clear program outcome statements were generated and used to guide and underpin the new curriculum framework. The curriculum was expanded from the previous version to include emerging technologies, while removing previous duplication. Finally, a range of proposed assessments for the learning outcomes in the curriculum was incorporated into the programmatic assessment model. These new assessment methods were structured to incorporate rubric scoring to provide meaningful feedback.
CONCLUSIONS: An updated training program for Radiation Oncology Medical Physics registrars/residents was released in Australasia. Scientific content from the previous program was used as a foundation and revised for currency, with the ability to accommodate a dynamic curriculum model. A programmatic model of assessment was created after comprehensive review and consultation. This new model of assessment provides more structured, ongoing assessment throughout the training period. It contains allowances for local bespoke assessment and provides guidance for supervisors through marking templates and rubrics.
Affiliation(s)
- Cathy Barbagallo
- Australasian College of Physical Scientists and Engineers in Medicine (ACPSEM), Sydney, New South Wales, Australia
- Department of Radiation Oncology, Alfred Health, Prahran, Victoria, Australia
- Kristy Osborne
- Australian Council for Educational Research, Education Research, Policy and Development Division, Camberwell, Victoria, Australia
- Claire Dempsey
- Australasian College of Physical Scientists and Engineers in Medicine (ACPSEM), Sydney, New South Wales, Australia
- Department of Radiation Oncology, Calvary Mater Newcastle, Waratah, New South Wales, Australia
- Department of Radiation Oncology, University of Washington, Seattle, Washington, USA
- School of Health Sciences, University of Newcastle, Callaghan, New South Wales, Australia
3
Pearce J, Chiavaroli N, Tavares W. On the use and abuse of metaphors in assessment. Adv Health Sci Educ Theory Pract 2023; 28:1333-1345. PMID: 36729196. DOI: 10.1007/s10459-022-10203-w.
Abstract
This paper is motivated by a desire to advance assessment in the health professions through encouraging the judicious and productive use of metaphors. Through five specific examples (pixels, driving lesson/test, jury deliberations, signal processing, and assessment as a toolbox), we interrogate how metaphors are being used in assessment to consider what value they add to understanding and implementation of assessment practices. By unpacking these metaphors in action, we probe each metaphor's rationale and function, the gains each metaphor makes, and the unintended meanings it may carry. In summarizing common uses of metaphors, we elucidate how there may be both advantages and disadvantages. Metaphors can play important roles in simplifying, complexifying, communicating, translating, encouraging reflection, and convincing. They may be powerfully rhetorical, leading to intended consequences, actions, and other pragmatic outcomes. Although metaphors can be extremely helpful, they do not constitute thorough critique, justified evidence, or argumentation. We argue that although metaphors have utility, they must be carefully considered if they are to serve assessment needs in intended ways. We should pay attention to how metaphors may be misinterpreted, what they ignore or unintentionally signal, and perhaps mitigate this with anticipated corrections or nuanced qualifications. Failure to do so may lead to implementing practices that miss underlying and relevant complexities for assessment science and practice. Using metaphors requires careful attention with respect to their role, contributions, benefits, and limitations. We highlight the value that comes from critiquing metaphors, and demonstrate the care required to ensure their continued utility.
Affiliation(s)
- Jacob Pearce
- Tertiary Education, Australian Council for Educational Research, Camberwell, Australia.
- Neville Chiavaroli
- Tertiary Education, Australian Council for Educational Research, Camberwell, Australia
- Walter Tavares
- Department of Health and Society and Wilson Centre, Temerty Faculty of Medicine, University of Toronto, Toronto, ON, Canada
4
Greenfield J, Qua K, Prayson RA, Bierer SB. "It Changed How I Think"-Impact of Programmatic Assessment Upon Practicing Physicians: A Qualitative Study. Med Sci Educ 2023; 33:963-974. PMID: 37546195. PMCID: PMC10403454. DOI: 10.1007/s40670-023-01829-5.
Abstract
Programmatic assessment is a systematic approach used to document and assess learner performance. It offers learners frequent formative feedback from a variety of contexts and uses both high- and low-stakes assessments to determine student progress. Existing research has explored learner and faculty perceptions of programmatic assessment, reporting a favorable impact on faculty understanding of the importance of assessment stakes and of feedback to learners, while students report being able to establish goals, navigate toward them, and reflect on their performance. The Cleveland Clinic Lerner College of Medicine (CCLCM) of Case Western Reserve University adopted programmatic assessment methods at its inception. With more than 18 years' experience with programmatic assessment and a portfolio-based assessment system, CCLCM is well positioned to explore its graduates' perceptions of their programmatic assessment experiences during and after medical school. In 2020, the investigators interviewed 26 of the 339 physician graduates. Participants were purposefully sampled to represent multiple class cohorts (2009-2019), clinical specialties, and practice locations. The investigators analyzed interview transcripts using thematic analysis informed by the frameworks of self-determination theory and professional identity formation. The authors identified themes and supported each with participant quotes from the interviews. Based on these findings, the investigators compiled a series of recommendations for other institutions that have already incorporated, or plan to incorporate, elements of programmatic assessment into their curricula. The authors concluded by discussing future directions for research and additional avenues of inquiry.
Affiliation(s)
- Jessica Greenfield
- University of Virginia School of Medicine, Room 2008A Pinn Hall, Box 800866, Charlottesville, VA 22908-0366 USA
- Kelli Qua
- Case Western Reserve University School of Medicine, Cleveland, OH USA
- Richard A. Prayson
- Department of Anatomic Pathology, Cleveland Clinic Lerner College of Medicine of Case Western Reserve University, Cleveland Clinic, Cleveland, OH USA
- S. Beth Bierer
- Cleveland Clinic Lerner College of Medicine of Case Western Reserve University, Cleveland, OH USA
5
From Traditional to Programmatic Assessment in Three (Not So) Easy Steps. Educ Sci 2022. DOI: 10.3390/educsci12070487.
Abstract
Programmatic assessment (PA) has strong theoretical and pedagogical underpinnings, but its practical implementation brings a number of challenges, particularly in traditional university settings involving large cohort sizes. This paper presents a detailed case report of an in-progress programmatic assessment implementation involving a decade of assessment innovation occurring in three significant and transformative steps. The starting position and subsequent changes represented in each step are reflected against the framework of established principles and implementation themes of PA. This case report emphasises the importance of ongoing innovation and evaluative research, the advantage of a dedicated team with a cohesive plan, and the fundamental necessity of electronic data collection. It also highlights the challenge of traditional university cultures, the potential advantage of a major pandemic disruption, and the necessity of curriculum renewal to support significant assessment change. Our PA implementation began with a plan to improve the learning potential of individual assessments and, over the subsequent decade, expanded to encompass a cohesive, course-wide assessment program involving meaningful aggregation of assessment data. In our context (large cohort sizes and university-wide assessment policy), regular progress review meetings and progress decisions based on aggregated qualitative and quantitative data (rather than on assessment format) remain local challenges.
6
Pearce J, Tavares W. A philosophical history of programmatic assessment: tracing shifting configurations. Adv Health Sci Educ Theory Pract 2021; 26:1291-1310. PMID: 33893881. DOI: 10.1007/s10459-021-10050-1.
Abstract
Programmatic assessment is now well entrenched in medical education, allowing us to reflect on when it first emerged and how it evolved into the form we know today. Drawing upon the intellectual tradition of historical epistemology, we provide a philosophically oriented historiographical study of programmatic assessment. Our goal is to trace its relatively short historical trajectory by describing shifting configurations in its scene of inquiry, focusing on questions, practices, and philosophical presuppositions. We identify three historical phases: emergence, evolution, and entrenchment. For each, we describe the configurations of the scene; examine underlying philosophical presuppositions driving changes; and detail upshots in assessment practice. We find that programmatic assessment emerged in response to positivist 'turmoil' prior to 2005, driven by utility considerations and implicit pragmatist undertones. Once introduced, it evolved with notions of diversity and learning being underscored, and a constructivist ontology developing at its core. More recently, programmatic assessment has become entrenched as its own sub-discipline. Rich narratives have been emphasised, but philosophical underpinnings have been blurred. We hope to shed new light on current assessment practices in the medical education community by interrogating the history of programmatic assessment from this philosophical vantage point. Making philosophical presuppositions explicit highlights the perspectival nature of aspects of programmatic assessment and suggests reasons for perceived benefits as well as potential tensions, contradictions, and vulnerabilities in the approach today. We conclude by offering some reflections on important points to emerge from our historical study, and suggest 'what next' for programmatic assessment in light of this endeavour.
Affiliation(s)
- J Pearce
- Tertiary Education (Assessment), Australian Council for Educational Research, 19 Prospect Hill Road, Camberwell, VIC, 3124, Australia.
- W Tavares
- The Wilson Centre and Post-MD Education, University Health Network and University of Toronto, Toronto, ON, Canada
8
Pearce J, Reid K, Chiavaroli N, Hyam D. Incorporating aspects of programmatic assessment into examinations: Aggregating rich information to inform decision-making. Med Teach 2021; 43:567-574. PMID: 33556294. DOI: 10.1080/0142159x.2021.1878122.
Abstract
CONTEXT: A programmatic approach to assessment entails gathering and aggregating 'rich information' on candidates to inform progress decisions. However, there is little guidance on how such an approach might be implemented in practice.
OBJECTIVE: We describe an approach to aggregating rich information across assessment formats to inform committee decision-making in a specialist medical college.
METHODS: Each item (n = 272) for every examination was blueprinted to 15 curriculum modules and 7 proficiencies. We developed a six-point holistic rating scale with detailed rubrics outlining expected performance standards for every item. Examiners used this rating scale in making judgements for each item, generating rich performance data for each candidate.
RESULTS: A colour-coded 'mosaic' of patterns of performance across modules and proficiencies was generated, along with frequency distributions of ratings. These data allowed examiners to easily visualise candidate performance and to use them to inform deliberations on borderline candidates. Committee decision-making was facilitated by maintaining the richness of assessment information throughout the process. Moreover, the data facilitated detailed and useful feedback to candidates.
CONCLUSIONS: Our study demonstrates that incorporating aspects of programmatic thinking into high-stakes examinations by using a novel approach to aggregating information is a useful first step in reforming an assessment program.
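The aggregation the abstract describes (item-level ratings on a six-point scale, blueprinted to curriculum modules and proficiencies, rolled up into a colour-coded mosaic plus frequency distributions) can be sketched in a few lines of Python. This is an illustrative sketch under an assumed data layout, not the college's actual system; the module and proficiency names are invented.

```python
# Illustrative sketch (assumed data layout): aggregating six-point item ratings,
# blueprinted to modules and proficiencies, into per-cell mean ratings (the basis of a
# colour-coded "mosaic") plus an overall frequency distribution of ratings.
from collections import Counter, defaultdict

# Hypothetical item-level judgements: (module, proficiency, rating on a 1-6 holistic scale)
item_ratings = [
    ("Module A", "Diagnosis", 5),
    ("Module A", "Management", 3),
    ("Module B", "Diagnosis", 4),
    ("Module B", "Management", 6),
    ("Module A", "Diagnosis", 2),
]

# Group ratings into module/proficiency cells.
cells = defaultdict(list)
for module, proficiency, rating in item_ratings:
    cells[(module, proficiency)].append(rating)

# Mean rating per cell; a committee view could colour-code these values.
mosaic = {cell: sum(ratings) / len(ratings) for cell, ratings in cells.items()}

# Frequency distribution of ratings across all items, to support borderline deliberations.
distribution = Counter(rating for _, _, rating in item_ratings)

for cell, mean in sorted(mosaic.items()):
    print(cell, f"mean = {mean:.1f}")
print("rating frequencies:", dict(sorted(distribution.items())))
```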
Affiliation(s)
- Jacob Pearce
- Department of Tertiary Education (Assessment), Australian Council for Educational Research, Camberwell, Australia
- Katharine Reid
- Department of Medical Education, University of Melbourne, Australia & Educational Monitoring and Research, Australian Council for Educational Research, Camberwell, Australia
- Neville Chiavaroli
- Department of Tertiary Education (Assessment), Australian Council for Educational Research, Camberwell, Australia
- Dylan Hyam
- School of Dentistry, Charles Sturt University, Orange, Australia