1. Torre D, Schuwirth L. Programmatic assessment for learning: A programmatically designed assessment for the purpose of learning: AMEE Guide No. 174. Medical Teacher 2024:1-16. [PMID: 39368061 DOI: 10.1080/0142159x.2024.2409936]
Abstract
Programmatic assessment for learning (PAL) involves programmatically structured collection of assessment data for the purpose of learning. In this guide, we examine and provide recommendations on several aspects: First, we review the evolution that has led to the development of programmatic assessment, providing clarification of some of its terminology. Second, we outline the learning processes that guide the design of PAL, including distributed learning, interleaving, overlearning, and test-enhanced learning. Third, we review the evolving nature of validity and provide insights into validity from a program perspective. Finally, we examine opportunities, challenges, and future directions of assessment in the context of artificial intelligence.
Affiliation(s)
- Dario Torre
- University of Central Florida College of Medicine, Orlando, FL, USA
- Lambert Schuwirth
- College of Medicine and Public Health, Flinders University, Adelaide, Australia
2. Guth TA, Wolfe RM, Martinez O, Subhiyah RG, Henderek JJ, McAllister C, Roussel D. Assessment of Clinical Reasoning in Undergraduate Medical Education: A Pragmatic Approach to Programmatic Assessment. Academic Medicine 2024; 99:912-921. [PMID: 38412485 DOI: 10.1097/acm.0000000000005665]
Abstract
PURPOSE Clinical reasoning, a complex construct integral to the practice of medicine, has been challenging to define, teach, and assess. Programmatic assessment purports to overcome validity limitations of judgments made from individual assessments through proportionality and triangulation processes. This study explored a pragmatic approach to the programmatic assessment of clinical reasoning. METHOD The study analyzed data from 2 student cohorts from the University of Utah School of Medicine (UUSOM) (n = 113 in cohort 1 and 119 in cohort 2) and 1 cohort from the University of Colorado School of Medicine (CUSOM) using assessment data that spanned from 2017 to 2021 (n = 199). The study methods included the following: (1) asking faculty judges to categorize student clinical reasoning skills, (2) selecting institution-specific assessment data conceptually aligned with clinical reasoning, (3) calculating correlations between assessment data and faculty judgments, and (4) developing regression models between assessment data and faculty judgments. RESULTS Faculty judgments of student clinical reasoning skills were converted to a continuous variable of clinical reasoning struggles, with mean (SD) ratings of 2.93 (0.27) for the 232 UUSOM students and 2.96 (0.17) for the 199 CUSOM students. A total of 67 and 32 discrete assessment variables were included from the UUSOM and CUSOM, respectively. Pearson r correlations were moderate to strong between many individual and composite assessment variables and faculty judgments. Regression models demonstrated an overall adjusted R2 (standard error of the estimate) of 0.50 (0.19) for UUSOM cohort 1, 0.28 (0.15) for UUSOM cohort 2, and 0.30 (0.14) for CUSOM. CONCLUSIONS This study represents an early pragmatic exploration of regression analysis as a potential tool for operationalizing the proportionality and triangulation principles of programmatic assessment. 
The study found that programmatic assessment may be a useful framework for longitudinal assessment of complicated constructs, such as clinical reasoning.
3. Laurin S, Castonguay V, Dory V, Cusson L, Côté L. "They were very very nice but just not very good": The interplay between resident-supervisor relationships and assessment in the emergency setting. AEM Education and Training 2024; 8:e10976. [PMID: 38532737 PMCID: PMC10962126 DOI: 10.1002/aet2.10976]
Abstract
Purpose Clinical supervisors hesitate to report learner weaknesses, a widely documented phenomenon referred to as "failure to fail." They also struggle to discuss weaknesses with learners themselves. Their reluctance to report and discuss learner weaknesses threatens the validity of assessment-of-learning decisions and the effectiveness of assessment for learning. Personal and interpersonal factors have been found to act as barriers to reporting learners' difficulties, but the precise role of the resident-supervisor relationship remains underexplored, specifically in the emergency setting. This study aims to better understand if and how factors related to the resident-supervisor relationship are involved in assessment of and for learning in the emergency setting. Methods We conducted a qualitative study, using semistructured interviews of 15 clinical supervisors in emergency medicine departments affiliated with our institution. Transcripts were independently coded by three members of the team using an iterative mixed deductive-inductive thematic analysis approach. The team then synthesized the coding and discussed analysis following guidelines for thematic analysis. Results Participating emergency medicine supervisors valued resident-supervisor relationships built on collaboration and trust and believed that such relationships support learning. They described how these relationships influenced assessment of and for learning and how in turn assessment influenced the relationship. Almost all profiles of resident-supervisor relationships in our study could hinder the disclosing of resident weaknesses, through a variety of mechanisms. To protect residents and themselves from the discomfort of disclosing weaknesses and to avoid deteriorating the resident-supervisor relationship, many downplayed or even masked residents' difficulties. 
Supervisors who described themselves as able to provide negative assessment of and for learning often adopted a more distant or professional stance. Conclusions This study contributes to a growing literature on failure to fail by confirming the critical impact that the resident-supervisor relationship has on the willingness and ability of emergency medicine supervisors to play their part as assessors.
Affiliation(s)
- Suzanne Laurin
- Department of Family Medicine and Emergency Medicine, Université de Montréal, Montréal, Québec, Canada
- Centre for Applied Health Sciences Education, Université de Montréal, Montréal, Québec, Canada
- Véronique Castonguay
- Department of Family Medicine and Emergency Medicine, Université de Montréal, Montréal, Québec, Canada
- Centre for Applied Health Sciences Education, Université de Montréal, Montréal, Québec, Canada
- Valérie Dory
- Department of General Practice, Université de Liège, Liège, Belgium
- Lise Cusson
- Department of Family Medicine and Emergency Medicine, Université de Montréal, Montréal, Québec, Canada
- Luc Côté
- Department of Family Medicine and Emergency Medicine, Université Laval, Québec, Québec, Canada
4. Tan E, Kearney GP, Cleland J, Driessen E, Frambach J. Navigating Confidentiality Dilemmas in Student Support: An Institutional Ethnography Informed Study. Perspectives on Medical Education 2024; 13:182-191. [PMID: 38496364 PMCID: PMC10941695 DOI: 10.5334/pme.1151]
Abstract
Introduction School-level student support programmes provide students with pastoral care and support for academic, wellbeing and other issues, often via a personal tutor (PT). PT work is a balancing act between respecting the confidential information divulged by students and doing what is expected in terms of accountability and duty of care. We aimed to explore how tutors manage this tension, with the aim of advancing understanding of student support programmes. Methods This qualitative study was informed by an Institutional Ethnography approach. We conducted 11 semi-structured interviews with PTs from one medical school in Singapore. We considered how they worked in relation to relevant national and institutional-level policy documents and reporting guidelines. Data collection and analysis were iterative. Results We crafted two composite accounts to illustrate the dilemmas faced by PTs. The first depicts a PT who treats student confidentiality in the same way as doctor-patient confidentiality. The second depicts a PT who adopted a more mentoring approach. Both tutors faced confidentiality challenges, using different strategies to "work around" and balance tensions between accountability and maintaining trust. PTs were torn between school and student expectations. Discussion Fostering trust in the tutor-student relationship is a priority for tutors, but tensions between confidentiality, accountability and governance sometimes make it difficult for tutors to do what they think is best for the student. A more nuanced understanding of the concept of confidentiality may help support PTs and, ultimately, students.
Affiliation(s)
- Emmanuel Tan
- Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore
- Maastricht University, Maastricht, The Netherlands
- Grainne P. Kearney
- Centre for Medical Education, Queen's University Belfast, Belfast, United Kingdom
- Jennifer Cleland
- Medical Education Research and Scholarship Unit, Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore
- Erik Driessen
- Department of Educational Development and Research, School of Health Professions Education (SHE), Faculty of Health, Medicine and Life Sciences, Maastricht University, Maastricht, The Netherlands
- Janneke Frambach
- Department of Educational Development and Research, School of Health Professions Education (SHE), Maastricht University, Maastricht, The Netherlands
5. Ginsburg S, Stroud L, Brydges R, Melvin L, Hatala R. Dual purposes by design: exploring alignment between residents' and academic advisors' documents in a longitudinal program. Advances in Health Sciences Education 2024. [PMID: 38438699 DOI: 10.1007/s10459-024-10318-2]
Abstract
Longitudinal academic advising (AA) and coaching programs are increasingly implemented in competency based medical education (CBME) to help residents reflect and act on the voluminous assessment data they receive. Documents created by residents for purposes of reflection are often used for a second, summative purpose-to help competence committees make decisions-which may be problematic. Using inductive, thematic analysis we analyzed written comments generated by 21 resident-AA dyads in one large internal medicine program who met over a 2 year period to determine what residents write when asked to reflect, how this aligns with what the AAs report, and what changes occur over time (total 109 resident self-reflections and 105 AA reports). Residents commented more on their developing autonomy, progress and improvement than AAs, who commented far more on performance measures. Over time, residents' writing shifted away from intrinsic roles, patient care and improvement towards what AAs focused on, including getting EPAs (entrustable professional activities), studying and exams. For EPAs, the emphasis was on getting sufficient numbers rather than reflecting on what residents were learning. Our findings challenge the practice of dual-purposing documents, by questioning the blurring of formative and summative intent, the structure of forms and their multiple conflicting purposes, and assumptions about the advising relationship over time. Our study suggests a need to re-evaluate how reflective documents are used in CBME programs. Further research should explore whether and how documentation can best be used to support resident growth and development.
Affiliation(s)
- Shiphra Ginsburg
- Department of Medicine, Mount Sinai Hospital, Toronto, ON, Canada
- Wilson Centre for Research in Education, University Health Network, Toronto, ON, Canada
- Lynfa Stroud
- Department of Medicine, Sunnybrook Health Sciences Centre, Toronto, ON, Canada
- Temerty Faculty of Medicine, University of Toronto, Toronto, ON, Canada
- Ryan Brydges
- Wilson Centre for Research in Education, University Health Network, Toronto, ON, Canada
- Temerty Faculty of Medicine, University of Toronto, Toronto, ON, Canada
- Li Ka Shing Knowledge Institute, St. Michael's Hospital, Unity Health Toronto, Toronto, ON, Canada
- Lindsay Melvin
- Temerty Faculty of Medicine, University of Toronto, Toronto, ON, Canada
- Department of Medicine, University Health Network, Toronto, ON, Canada
- Rose Hatala
- Department of Medicine, University of British Columbia, Vancouver, BC, Canada
- Centre for Health Education Scholarship, University of British Columbia, Vancouver, Canada
6. Osborne KC, Barbagallo C, Aldaoud A, Guille J, Campbell A, Anderson M, Pearce J. Reforming Medical Physics and Radiopharmaceutical Science Training Through a Programmatic Approach to Assessment. Journal of Medical Education and Curricular Development 2024; 11:23821205241271539. [PMID: 39246600 PMCID: PMC11378185 DOI: 10.1177/23821205241271539]
Abstract
OBJECTIVES Programmatic assessment approaches can be extended to the design of allied health professions training to enhance the learning of trainees. The Australasian College of Physical Scientists and Engineers in Medicine worked with assessment specialists at the Australian Council for Educational Research and Amplexa Consulting to revise their medical physics and radiopharmaceutical science training programs. A central aim of the revisions was a training program that provides standardized support to registrars throughout the 3 years and timely, constructive feedback on their progression, better supporting them to complete the program within that time frame. METHODS We used the principles of programmatic assessment to revise the assessment methods and progression decisions in the three training programs. RESULTS We revised the 3-year training programs for diagnostic imaging medical physics, radiation oncology medical physics and radiopharmaceutical science in Australia and New Zealand, incorporating clear stages of training and associated progression points. CONCLUSIONS We discuss the advantages and difficulties that have arisen with this implementation. We found 5 key elements necessary for implementing programmatic assessment in these specialized contexts: embracing blurred boundaries between assessment of and for learning, adapting the approach to each specialized context, change management, engaging subject matter experts, and clear communication to registrars/trainees.
Affiliation(s)
- Kristy C Osborne
- Education Research, Policy and Development Division, Australian Council for Educational Research, Camberwell, VIC, Australia
- Cathryn Barbagallo
- National Training Education and Assessment Program, Australasian College of Physical Scientists and Engineers in Medicine, Mascot, NSW, Australia
- Ammar Aldaoud
- Assessment and Psychometric Research Division, Australian Council for Educational Research, Camberwell, VIC, Australia
- Jennifer Guille
- National Training Education and Assessment Program, Australasian College of Physical Scientists and Engineers in Medicine, Mascot, NSW, Australia
- Andrew Campbell
- National Training Education and Assessment Program, Australasian College of Physical Scientists and Engineers in Medicine, Mascot, NSW, Australia
- Jacob Pearce
- Education Research, Policy and Development Division, Australian Council for Educational Research, Camberwell, VIC, Australia
7. Greenfield J, Qua K, Prayson RA, Bierer SB. "It Changed How I Think"-Impact of Programmatic Assessment Upon Practicing Physicians: A Qualitative Study. Medical Science Educator 2023; 33:963-974. [PMID: 37546195 PMCID: PMC10403454 DOI: 10.1007/s40670-023-01829-5]
Abstract
Programmatic assessment is a systematic approach used to document and assess learner performance. It offers learners frequent formative feedback from a variety of contexts and uses both high- and low-stakes assessments to determine student progress. Existing research has explored learner and faculty perceptions of programmatic assessment, reporting favorable impact on faculty understanding of the importance of assessment stakes and feedback to learners, while students report the ability to establish and navigate towards goals and reflect on their performance. The Cleveland Clinic Lerner College of Medicine (CCLCM) of Case Western Reserve University adopted programmatic assessment methods at its inception. With more than 18 years' experience with programmatic assessment and a portfolio-based assessment system, CCLCM is well-positioned to explore its graduates' perceptions of their programmatic assessment experiences during and after medical school. In 2020, the investigators interviewed 26 of the 339 physician graduates. Participants were purposefully sampled to represent multiple class cohorts (2009-2019), clinical specialties, and practice locations. The investigators analyzed interview transcripts using thematic analysis informed by the frameworks of self-determination theory and professional identity formation. The authors identified themes and supported each with participant quotes from the interviews. Based on the findings, the investigators compiled a series of recommendations for other institutions that have already incorporated, or plan to incorporate, elements of programmatic assessment into their curricula. The authors concluded by discussing future directions for research and additional avenues of inquiry.
Affiliation(s)
- Jessica Greenfield
- University of Virginia School of Medicine, Room 2008A Pinn Hall, Box 800866, Charlottesville, VA 22908-0366, USA
- Kelli Qua
- Case Western Reserve University School of Medicine, Cleveland, OH, USA
- Richard A. Prayson
- Department of Anatomic Pathology, Cleveland Clinic Lerner College of Medicine of Case Western Reserve University, Cleveland Clinic, Cleveland, OH, USA
- S. Beth Bierer
- Cleveland Clinic Lerner College of Medicine of Case Western Reserve University, Cleveland, OH, USA
8. Loosveld LM, Driessen EW, Theys M, Van Gerven PWM, Vanassche E. Combining Support and Assessment in Health Professions Education: Mentors' and Mentees' Experiences in a Programmatic Assessment Context. Perspectives on Medical Education 2023; 12:271-281. [PMID: 37426357 PMCID: PMC10327863 DOI: 10.5334/pme.1004]
Abstract
Introduction Mentors in programmatic assessment support mentees with low-stakes feedback, which often also serves as input for high-stakes decision making. That process potentially causes tensions in the mentor-mentee relationship. This study explored how undergraduate mentors and mentees in health professions education experience combining developmental support and assessment, and what this means for their relationship. Methods The authors chose a pragmatic qualitative research approach and conducted semi-structured vignette-based interviews with 24 mentors and 11 mentees that included learners from medicine and the biomedical sciences. Data were analyzed thematically. Results How participants combined developmental support and assessment varied. In some mentor-mentee relationships it worked well, in others it caused tensions. Tensions were also created by unintended consequences of design decisions at the program level. Dimensions impacted by experienced tensions were: relationship quality, dependence, trust, and nature and focus of mentoring conversations. Mentors and mentees mentioned applying various strategies to alleviate tensions: transparency and expectation management, distinguishing between developmental support and assessment, and justifying assessment responsibility. Discussion Combining the responsibility for developmental support and assessment within an individual worked well in some mentor-mentee relationships, but caused tensions in others. On the program level, clear decisions should be made regarding the design of programmatic assessment: what is the program of assessment and how are responsibilities divided between all involved? If tensions arise, mentors and mentees can try to alleviate these, but continuous mutual calibration of expectations between mentors and mentees remains of key importance.
Affiliation(s)
- Lianne M. Loosveld
- Department of Educational Development & Research, School of Health Professions Education, Faculty of Health, Medicine and Life Sciences, Maastricht University, Universiteitssingel 60, 6229 ER Maastricht, the Netherlands
- Erik W. Driessen
- Department of Educational Development & Research, School of Health Professions Education, Faculty of Health, Medicine and Life Sciences, Maastricht University, Universiteitssingel 60, 6229 ER Maastricht, the Netherlands
- Mattias Theys
- Department of Educational Development & Research, School of Health Professions Education, Faculty of Health, Medicine and Life Sciences, Maastricht University, Universiteitssingel 60, 6229 ER Maastricht, the Netherlands
- Pascal W. M. Van Gerven
- Department of Educational Development & Research, School of Health Professions Education, Faculty of Health, Medicine and Life Sciences, Maastricht University, Universiteitssingel 60, 6229 ER Maastricht, the Netherlands
- Eline Vanassche
- Faculty of Psychology and Educational Sciences, KU Leuven Kulak, Etienne Sabbelaan 51, P.O. Box 7654, 8500 Kortrijk, Belgium
9. Pearce J. What do student experiences of programmatic assessment tell us about scoring programmatic assessment data? Medical Education 2022; 56:872-875. [PMID: 35698736 DOI: 10.1111/medu.14852]
Affiliation(s)
- Jacob Pearce
- Australian Council for Educational Research - Tertiary Education (Assessment), Camberwell, Victoria, Australia
10. Concordance of Narrative Comments with Supervision Ratings Provided During Entrustable Professional Activity Assessments. Journal of General Internal Medicine 2022; 37:2200-2207. [PMID: 35710663 PMCID: PMC9296736 DOI: 10.1007/s11606-022-07509-1]
Abstract
BACKGROUND Use of EPA-based entrustment-supervision ratings to determine a learner's readiness to assume patient care responsibilities is expanding. OBJECTIVE In this study, we investigate the correlation between narrative comments and supervision ratings assigned during ad hoc assessments of medical students' performance of EPA tasks. DESIGN Data from assessments completed for students enrolled in the clerkship phase over 2 academic years were used to extract a stratified random sample of 100 narrative comments for review by an expert panel. PARTICIPANTS A review panel, comprising faculty with specific expertise related to their roles within the EPA program, provided a "gold standard" supervision rating using the comments provided by the original assessor. MAIN MEASURES Interrater reliability (IRR) between members of the review panel and correlation coefficients (CC) between expert ratings and supervision ratings from original assessors. KEY RESULTS IRR among members of the expert panel ranged from .536 for comments associated with focused history taking to .833 for complete physical exam. CC (Kendall's coefficient of concordance W) between panel members' assignment of supervision ratings and the ratings provided by the original assessors for history taking, physical examination, and oral presentation comments were .668, .697, and .735, respectively. The supervision ratings of the expert panel had the highest degree of correlation with ratings provided during assessments done by master assessors, faculty trained to assess students across clinical contexts. Correlation between supervision ratings provided with the narrative comments at the time of observation and supervision ratings assigned by the expert panel differed by clinical discipline, perhaps reflecting the value placed on, and the comfort level with, assessment of the task in a given specialty.
CONCLUSIONS To realize the full educational and catalytic effect of EPA assessments, assessors must apply established performance expectations and provide high-quality narrative comments aligned with the criteria.
11. Barman L, McGrath C, Josephsson S, Silén C, Bolander Laksov K. Safeguarding fairness in assessments-How teachers develop joint practices. Medical Education 2022; 56:651-659. [PMID: 35263464 PMCID: PMC9310582 DOI: 10.1111/medu.14789]
Abstract
INTRODUCTION In light of reforms demanding increased transparency of student performance assessments, this study offers an in-depth perspective of how teachers develop their assessment practice. Much is known about factors that influence assessments, and different solutions claim to improve the validity and reliability of assessments of students' clinical competency. However, little is known about how teachers go about improving their assessment practices. This study aims to contribute empirical findings about how teachers' assessment practice may change when shared criteria for assessing students' clinical competency are developed and implemented. METHODS Using a narrative-in-action research approach grounded in narrative theory about human sense-making, one group of nine health professions teachers was studied over a period of 1 year. Drawing upon data from observations, interviews, formal documents and written reflections from these teachers, we performed a narrative analysis to reveal how these teachers made sense of experiences associated with the development and implementation of joint grading criteria for assessing students' clinical performances. RESULTS The findings present a narrative showing how a shared assessment practice took years to develop and was based on the teachers' changed approach to scrutiny. The teachers became highly motivated to use grading criteria to ensure fairness in assessments but, more importantly, to fulfil their moral obligation towards patients. The narrative also demonstrates how these teachers reasoned about dilemmas that arose when they applied standardised assessment criteria. DISCUSSION The narrative analysis shows clearly how teachers' development and application of assessment standards are embedded in local practices. Our findings highlight the importance of teachers' joint discussions on how to interpret criteria applied in formative and summative assessments of students' performances.
In particular, teachers' different approaches to assessing 'pieces of skills' versus making holistic judgements on students' performances, regardless of whether the grading criteria are clear and well-articulated on paper, should be acknowledged. Understanding the journey that these teachers made gives new perspectives as to how faculty can be supported when assessments of professionalism and clinical competency are developed.
Affiliation(s)
- Linda Barman
- Department of Learning in Engineering Sciences, KTH Royal Institute of Technology, Stockholm, Sweden
- Cormac McGrath
- Department of Education, Stockholm University, Stockholm, Sweden
- Staffan Josephsson
- Department of Neurobiology, Care Sciences and Society, Karolinska Institutet, Stockholm, Sweden
- Charlotte Silén
- Department of Learning, Informatics, Management and Ethics, Karolinska Institutet, Stockholm, Sweden
- Klara Bolander Laksov
- Department of Education, Stockholm University, Stockholm, Sweden
- Department of Learning, Informatics, Management and Ethics, Karolinska Institutet, Stockholm, Sweden
12. Swails JL, Gadgil MA, Goodrum H, Gupta R, Rahbar MH, Bernstam EV. Role of faculty characteristics in failing to fail in clinical clerkships. Medical Education 2022; 56:634-640. [PMID: 34983083 DOI: 10.1111/medu.14725]
Abstract
INTRODUCTION In the context of competency-based medical education, poor student performance must be accurately documented to allow learners to improve and to protect the public. However, faculty may be reluctant to provide evaluations that could be perceived as negative, and clerkship directors report that some students pass who should have failed. Student perception of faculty may be considered in faculty promotion, teaching awards, and leadership positions. Faculty of lower academic rank may therefore perceive themselves to be more vulnerable and be less likely to document poor student performance. This study investigated faculty characteristics associated with low performance evaluations (LPEs). METHOD The authors analysed individual faculty evaluations of medical students who completed the third-year clerkships over 15 years, using a generalised mixed regression model to assess the association of evaluator academic rank with the likelihood of an LPE. Other available factors related to experience or academic vulnerability were incorporated, including faculty age, race, ethnicity, and gender. RESULTS The authors identified 50 120 evaluations by 585 faculty on 3447 students between January 2007 and April 2021. Faculty were more likely to give LPEs at the midpoint (4.9%) than at the final (1.6%) evaluation (odds ratio [OR] = 4.004, 95% confidence interval [CI] [3.59, 4.53]; p < 0.001). The likelihood of LPE decreased significantly during the 15-year study period (OR = 0.94 [0.90, 0.97]; p < 0.01). Full professors were significantly more likely to give an LPE than assistant professors (OR = 1.62 [1.08, 2.43]; p = 0.02). Women were more likely to give LPEs than men (OR = 1.88 [1.37, 2.58]; p < 0.01). Other faculty characteristics, including race and experience, were not associated with LPE. CONCLUSIONS The number of LPEs decreased over time, and senior faculty were more likely than assistant professors to document poor medical student performance.
Affiliation(s)
- Jennifer L Swails
- Department of Internal Medicine, McGovern Medical School, University of Texas Health Science Center at Houston, Houston, Texas, USA
- Meghana A Gadgil
- Division of Hospital Medicine, San Francisco General Hospital, San Francisco, California, USA
- Division of Health Policy and Management, School of Public Health, University of California, Berkeley, Berkeley, California, USA
- Heath Goodrum
- School of Biomedical Informatics, University of Texas Health Science Center at Houston, Houston, Texas, USA
- Resmi Gupta
- Division of Clinical and Translational Sciences, Department of Internal Medicine, McGovern Medical School, Houston, Texas, USA
- Mohammad H Rahbar
- Division of Clinical and Translational Sciences, Department of Internal Medicine, McGovern Medical School, Houston, Texas, USA
- Department of Epidemiology, Human Genetics, and Environmental Sciences, School of Public Health, The University of Texas Health Science Center at Houston, Houston, Texas, USA
- Elmer V Bernstam
- Department of Internal Medicine, McGovern Medical School, University of Texas Health Science Center at Houston, Houston, Texas, USA
- School of Biomedical Informatics, University of Texas Health Science Center at Houston, Houston, Texas, USA
13
de Jonge LPJWM, Minkels FNE, Govaerts MJB, Muris JWM, Kramer AWM, van der Vleuten CPM, Timmerman AA. Supervisory dyads' communication and alignment regarding the use of workplace-based observations: a qualitative study in general practice residency. BMC MEDICAL EDUCATION 2022; 22:330. [PMID: 35484573 PMCID: PMC9052511 DOI: 10.1186/s12909-022-03395-7]
Abstract
BACKGROUND In medical residency, performance observations are considered an important strategy to monitor competence development, provide feedback and safeguard patient safety. The aim of this study was to gain insight into whether and how supervisor-resident dyads build a working repertoire regarding the use of observations, and how they discuss and align goals and approaches to observation in particular. METHODS We used a qualitative, social constructivist approach to explore whether and how supervisory dyads work towards alignment of goals and preferred approaches to performance observations. We conducted semi-structured interviews with supervisor-resident dyads, performing a template analysis of the data thus obtained. RESULTS The supervisory dyads did not frequently communicate about the use of observations, except at the start of training or when triggered by internal or external factors. Their working repertoire regarding the use of observations seemed to be primarily driven by patient safety goals and institutional assessment requirements rather than by providing developmental feedback. Although intended as formative, the institutional test was perceived as summative by supervisors and residents, and led to teaching to the test rather than educating for purposes of competence development. CONCLUSIONS To unlock the full educational potential of performance observations, and to foster the development of an educational alliance, it is essential that supervisory dyads and the training institute communicate clearly about these observations and the role of assessment practices of and for learning, in order to align their goals and respective approaches.
Affiliation(s)
- Laury P J W M de Jonge
- Department of General Practice, Maastricht University, P.O. Box 616, 6200 MD Maastricht, The Netherlands
- Floor N E Minkels
- Department of General Practice, Maastricht University, P.O. Box 616, 6200 MD Maastricht, The Netherlands
- Marjan J B Govaerts
- Department of Educational Research and Development, Maastricht University, Maastricht, The Netherlands
- Jean W M Muris
- Department of General Practice, Maastricht University, P.O. Box 616, 6200 MD Maastricht, The Netherlands
- Anneke W M Kramer
- Department of Family Medicine, Leiden University, Leiden, The Netherlands
- Cees P M van der Vleuten
- Department of Educational Research and Development, Maastricht University, Maastricht, The Netherlands
- Angelique A Timmerman
- Department of General Practice, Maastricht University, P.O. Box 616, 6200 MD Maastricht, The Netherlands
14
Castanelli DJ, Weller JM, Molloy E, Bearman M. Trust, power and learning in workplace-based assessment: The trainee perspective. MEDICAL EDUCATION 2022; 56:280-291. [PMID: 34433230 PMCID: PMC9292503 DOI: 10.1111/medu.14631]
Abstract
For trainees to participate meaningfully in workplace-based assessment (WBA), they must have trust in their assessor. However, the trainee's dependent position complicates such trust. Understanding how power and trust influence WBAs may help us make them more effective learning opportunities. We conducted semi-structured interviews with 17 postgraduate anaesthesia trainees across Australia and New Zealand. Sensitised by notions of power, we used constructivist grounded theory methodology to examine trainees' experiences with trusting their supervisors in WBAs. In our trainee accounts, we found that supervisors held significant power to mediate access to learning opportunities and influence trainee progress in training. All episodes where supervisors could observe trainees, from simply working together to formal WBAs, were seen to generate assessment information with potential consequences. In response, trainees actively acquiesced to a deferential role, which helped them access desirable expertise and minimise the risk of reputational harm. Trainees granted trust based on how they anticipated a supervisor would use the power inherent in their role. Trainees learned to ration exposure of their authentic practice to supervisors in proportion to their trust in them. Trainees were more trusting and open to learning when supervisors used their power for the trainee's benefit, and avoided WBAs with supervisors they perceived as less trustworthy. If assessment for learning is to flourish, then the trainee-supervisor power dynamic must evolve. Enhancing supervisor behaviour through reflection and professional development to better reward trainee trust would invite more trainee participation in assessment for learning. Modifying the assessment system design to nudge the power balance towards the trainee may also help. Modifications could include designated formative and summative assessments or empowering trainees to select which assessments count towards progress decisions. Attending to power and trust in WBA may stimulate progress towards the previously aspirational goal of assessment for learning in the workplace.
Affiliation(s)
- Damian J. Castanelli
- School of Clinical Sciences at Monash Health, Monash University, Clayton, Victoria, Australia
- Department of Anaesthesia and Perioperative Medicine, Monash Health, Clayton, Victoria, Australia
- Centre for Research and Assessment in Digital Learning (CRADLE), Deakin University, Geelong, Victoria, Australia
- Jennifer M. Weller
- Centre for Medical and Health Sciences Education, School of Medicine, University of Auckland, Auckland, New Zealand
- Department of Anaesthesia, Auckland, New Zealand
- Elizabeth Molloy
- Department of Medical Education, Melbourne Medical School, University of Melbourne, Melbourne, Victoria, Australia
- Margaret Bearman
- Centre for Research and Assessment in Digital Learning (CRADLE), Deakin University, Geelong, Victoria, Australia
15
Torre D, Rice NE, Ryan A, Bok H, Dawson LJ, Bierer B, Wilkinson TJ, Tait GR, Laughlin T, Veerapen K, Heeneman S, Freeman A, van der Vleuten C. Ottawa 2020 consensus statements for programmatic assessment - 2. Implementation and practice. MEDICAL TEACHER 2021; 43:1149-1160. [PMID: 34330202 DOI: 10.1080/0142159x.2021.1956681]
Abstract
INTRODUCTION Programmatic assessment is a longitudinal, developmental approach that fosters and harnesses the learning function of assessment. Yet the implementation, a critical step to translate theory into practice, can be challenging. As part of the Ottawa 2020 consensus statement on programmatic assessment, we sought to provide descriptions of the implementation of the 12 principles of programmatic assessment and to gain insight into enablers and barriers across different institutions and contexts. METHODS After the 2020 Ottawa conference, we surveyed 15 Health Profession Education programmes from six different countries about the implementation of the 12 principles of programmatic assessment. Survey responses were analysed using a deductive thematic analysis. RESULTS AND DISCUSSION A wide range of implementations was reported, although the principles remained, for the most part, faithful to the original enunciation and rationale. Enablers included strong leadership support, ongoing faculty development, providing students with clear expectations about assessment, simultaneous curriculum renewal and organisational commitment to change. Most barriers were related to the need for a paradigm shift in the culture of assessment. Descriptions of implementations in relation to the theoretical principles, across multiple educational contexts, coupled with explanations of enablers and barriers, provided new insights and a clearer understanding of the strategic and operational considerations in the implementation of programmatic assessment. Future research is needed to further explore how contextual and cultural factors affect implementation.
Affiliation(s)
- Dario Torre
- Department of Medicine, Uniformed Services University of Health Sciences, Bethesda, MD, USA
- Neil E Rice
- College of Medicine and Health, University of Exeter Medical School, Exeter, UK
- Anna Ryan
- Department of Medical Education, Melbourne Medical School, University of Melbourne, Melbourne, Australia
- Harold Bok
- Department of Population Health Sciences, Faculty of Veterinary Medicine, Utrecht University, Utrecht, The Netherlands
- Luke J Dawson
- School of Dentistry, University of Liverpool, Liverpool, UK
- Beth Bierer
- Cleveland Clinic Lerner College of Medicine of Case Western Reserve University, Cleveland, OH, USA
- Tim J Wilkinson
- Education Unit, University of Otago, Christchurch, New Zealand
- Glendon R Tait
- MD Program, Department of Psychiatry, and The Wilson Centre, University of Toronto, Toronto, Canada
- Tom Laughlin
- Department of Family Medicine, Dalhousie University, Halifax, Canada
- Kiran Veerapen
- Faculty of Medicine, University of British Columbia, Vancouver, Canada
- Sylvia Heeneman
- Department of Educational Development and Research, School of Health Profession Education, Maastricht University, Maastricht, The Netherlands
- Adrian Freeman
- College of Medicine and Health, University of Exeter Medical School, Exeter, UK
- Cees van der Vleuten
- Department of Educational Development and Research, School of Health Profession Education, Maastricht University, Maastricht, The Netherlands
16
Heeneman S, de Jong LH, Dawson LJ, Wilkinson TJ, Ryan A, Tait GR, Rice N, Torre D, Freeman A, van der Vleuten CPM. Ottawa 2020 consensus statement for programmatic assessment - 1. Agreement on the principles. MEDICAL TEACHER 2021; 43:1139-1148. [PMID: 34344274 DOI: 10.1080/0142159x.2021.1957088]
Abstract
INTRODUCTION In the Ottawa 2018 Consensus framework for good assessment, a set of criteria was presented for systems of assessment. Currently, programmatic assessment is being established in an increasing number of programmes. In this Ottawa 2020 consensus statement for programmatic assessment, insights from practice and research are used to define the principles of programmatic assessment. METHODS For fifteen programmes in health professions education affiliated with members of an expert group (n = 20), an inventory was completed of the perceived components, rationale, and importance of a programmatic assessment design. Input from attendees of a programmatic assessment workshop and symposium at the 2020 Ottawa conference was included. The outcome is discussed in concurrence with current theory and research. RESULTS AND DISCUSSION Twelve principles are presented that are considered important and recognisable facets of programmatic assessment. Overall, these principles were used in curriculum and assessment design, albeit with a range of approaches and rigour, suggesting that programmatic assessment is an achievable education and assessment model, embedded both in practice and research. Sharing knowledge of how programmatic assessment is being operationalised may support educators charting their own implementation journey in their respective programmes.
Affiliation(s)
- Sylvia Heeneman
- Department of Pathology, School of Health Profession Education, Maastricht University, Maastricht, The Netherlands
- Lubberta H de Jong
- Department of Population Health Sciences, Faculty of Veterinary Medicine, Utrecht University, Utrecht, The Netherlands
- Luke J Dawson
- School of Dentistry, University of Liverpool, Liverpool, UK
- Tim J Wilkinson
- Education Unit, University of Otago, Christchurch, New Zealand
- Anna Ryan
- Department of Medical Education, Melbourne Medical School, University of Melbourne, Melbourne, Australia
- Glendon R Tait
- MD Program, Department of Psychiatry, and The Wilson Centre, University of Toronto, Toronto, Canada
- Neil Rice
- College of Medicine and Health, University of Exeter Medical School, Exeter, UK
- Dario Torre
- Department of Medicine, Uniformed Services University of Health Sciences, Bethesda, MD, USA
- Adrian Freeman
- College of Medicine and Health, University of Exeter Medical School, Exeter, UK
- Cees P M van der Vleuten
- Department of Educational Development and Research, School of Health Profession Education, Maastricht University, Maastricht, The Netherlands
17
Pearce J, Tavares W. A philosophical history of programmatic assessment: tracing shifting configurations. ADVANCES IN HEALTH SCIENCES EDUCATION : THEORY AND PRACTICE 2021; 26:1291-1310. [PMID: 33893881 DOI: 10.1007/s10459-021-10050-1]
Abstract
Programmatic assessment is now well entrenched in medical education, allowing us to reflect on when it first emerged and how it evolved into the form we know today. Drawing upon the intellectual tradition of historical epistemology, we provide a philosophically-oriented historiographical study of programmatic assessment. Our goal is to trace its relatively short historical trajectory by describing shifting configurations in its scene of inquiry, focusing on questions, practices, and philosophical presuppositions. We identify three historical phases: emergence, evolution and entrenchment. For each, we describe the configurations of the scene; examine underlying philosophical presuppositions driving changes; and detail upshots in assessment practice. We find that programmatic assessment emerged in response to positivist 'turmoil' prior to 2005, driven by utility considerations and implicit pragmatist undertones. Once introduced, it evolved with notions of diversity and learning being underscored, and a constructivist ontology developing at its core. More recently, programmatic assessment has become entrenched as its own sub-discipline. Rich narratives have been emphasised, but philosophical underpinnings have been blurred. We hope to shed new light on current assessment practices in the medical education community by interrogating the history of programmatic assessment from this philosophical vantage point. Making philosophical presuppositions explicit highlights the perspectival nature of aspects of programmatic assessment, and suggests reasons for perceived benefits as well as potential tensions, contradictions and vulnerabilities in the approach today. We conclude by offering some reflections on important points to emerge from our historical study, and suggesting 'what next' for programmatic assessment in light of this endeavour.
Affiliation(s)
- J Pearce
- Tertiary Education (Assessment), Australian Council for Educational Research, 19 Prospect Hill Road, Camberwell, VIC 3124, Australia
- W Tavares
- The Wilson Centre and Post-MD Education, University Health Network and University of Toronto, Toronto, ON, Canada
18
Ginsburg S, Watling CJ, Schumacher DJ, Gingerich A, Hatala R. Numbers Encapsulate, Words Elaborate: Toward the Best Use of Comments for Assessment and Feedback on Entrustment Ratings. ACADEMIC MEDICINE : JOURNAL OF THE ASSOCIATION OF AMERICAN MEDICAL COLLEGES 2021; 96:S81-S86. [PMID: 34183607 DOI: 10.1097/acm.0000000000004089]
Abstract
The adoption of entrustment ratings in medical education is based on a seemingly simple premise: to align workplace-based supervision with resident assessment. Yet it has been difficult to operationalize this concept. Entrustment rating forms combine numeric scales with comments and are embedded in a programmatic assessment framework, which encourages the collection of a large quantity of data. The implicit assumption that more is better has led to an untamable volume of data that competency committees must grapple with. In this article, the authors explore the roles of numbers and words on entrustment rating forms, examining the intended and optimal use(s) of each, with particular attention to the words. They also unpack the problematic issue of dual-purposing words for both assessment and feedback. Words have enormous potential to elaborate, to contextualize, and to instruct; to realize this potential, educators must be crystal clear about their use. The authors set forth a number of possible ways to reconcile these tensions by more explicitly aligning words to purpose. For example, educators could focus written comments solely on assessment; create assessment encounters distinct from feedback encounters; or use different words collected from the same encounter to serve distinct feedback and assessment purposes. Finally, the authors address the tyranny of documentation created by programmatic assessment and urge caution in yielding to the temptation to reduce words to numbers to make them manageable. Instead, they encourage educators to preserve some educational encounters purely for feedback, and to consider that not all words need to become data.
Affiliation(s)
- Shiphra Ginsburg
- S. Ginsburg is professor of medicine, Department of Medicine, Sinai Health System and Faculty of Medicine, University of Toronto, scientist, Wilson Centre for Research in Education, University of Toronto, Toronto, Ontario, Canada, and Canada Research Chair in Health Professions Education; ORCID: http://orcid.org/0000-0002-4595-6650
- Christopher J Watling
- C.J. Watling is professor and director, Centre for Education Research and Innovation, Schulich School of Medicine & Dentistry, Western University, London, Ontario, Canada; ORCID: https://orcid.org/0000-0001-9686-795X
- Daniel J Schumacher
- D.J. Schumacher is associate professor of pediatrics, Cincinnati Children's Hospital Medical Center and University of Cincinnati College of Medicine, Cincinnati, Ohio; ORCID: https://orcid.org/0000-0001-5507-8452
- Andrea Gingerich
- A. Gingerich is assistant professor, Northern Medical Program, University of Northern British Columbia, Prince George, British Columbia, Canada; ORCID: https://orcid.org/0000-0001-5765-3975
- Rose Hatala
- R. Hatala is professor, Department of Medicine, and director, Clinical Educator Fellowship, Center for Health Education Scholarship, University of British Columbia, Vancouver, British Columbia, Canada; ORCID: https://orcid.org/0000-0003-0521-2590
19
Kinnear B, Warm EJ, Caretta-Weyer H, Holmboe ES, Turner DA, van der Vleuten C, Schumacher DJ. Entrustment Unpacked: Aligning Purposes, Stakes, and Processes to Enhance Learner Assessment. ACADEMIC MEDICINE : JOURNAL OF THE ASSOCIATION OF AMERICAN MEDICAL COLLEGES 2021; 96:S56-S63. [PMID: 34183603 DOI: 10.1097/acm.0000000000004108]
Abstract
Educators use entrustment, a common framework in competency-based medical education, in multiple ways, including frontline assessment instruments, learner feedback tools, and group decision making within promotions or competence committees. Within these multiple contexts, entrustment decisions can vary in purpose (i.e., intended use), stakes (i.e., perceived risk or consequences), and process (i.e., how entrustment is rendered). Each of these characteristics can be conceptualized as having 2 distinct poles: (1) purpose, formative and summative; (2) stakes, low and high; and (3) process, ad hoc and structured. For each characteristic, entrustment decisions often do not fall squarely at one pole or the other, but rather lie somewhere along a spectrum. While distinct, these continua can, and sometimes should, influence one another, and can be manipulated to optimally integrate entrustment within a program of assessment. In this article, the authors describe each of these continua and depict how key alignments between them can help optimize value when using entrustment in programmatic assessment within competency-based medical education. As they think through these continua, the authors begin and end with a case study to demonstrate the practical application as it might occur in the clinical learning environment.
Affiliation(s)
- Benjamin Kinnear
- B. Kinnear is associate professor of internal medicine and pediatrics, Department of Pediatrics, University of Cincinnati College of Medicine, Cincinnati, Ohio; ORCID: https://orcid.org/0000-0003-0052-4130
- Eric J Warm
- E.J. Warm is professor of internal medicine and program director, Department of Internal Medicine, University of Cincinnati College of Medicine, Cincinnati, Ohio; ORCID: https://orcid.org/0000-0002-6088-2434
- Holly Caretta-Weyer
- H. Caretta-Weyer is assistant professor of emergency medicine, Department of Emergency Medicine, Stanford University School of Medicine, Palo Alto, California; ORCID: https://orcid.org/0000-0002-9783-5797
- Eric S Holmboe
- E.S. Holmboe is chief, research, milestones development and evaluation officer, Accreditation Council for Graduate Medical Education, Chicago, Illinois; ORCID: https://orcid.org/0000-0003-0108-6021
- David A Turner
- D.A. Turner is vice president, Competency-Based Medical Education, American Board of Pediatrics, Chapel Hill, North Carolina
- Cees van der Vleuten
- C. van der Vleuten is professor of education, Department of Educational Development and Research, Faculty of Health, Medicine and Life Sciences, Maastricht University, Maastricht, The Netherlands; ORCID: https://orcid.org/0000-0001-6802-3119
- Daniel J Schumacher
- D.J. Schumacher is associate professor of pediatrics, Cincinnati Children's Hospital Medical Center/University of Cincinnati College of Medicine, Cincinnati, Ohio; ORCID: https://orcid.org/0000-0001-5507-8452
20
Jamieson J, Hay M, Gibson S, Palermo C. Implementing programmatic assessment transforms supervisor attitudes: An explanatory sequential mixed methods study. MEDICAL TEACHER 2021; 43:709-717. [PMID: 33705668 DOI: 10.1080/0142159x.2021.1893678]
Abstract
INTRODUCTION Programmatic assessment (PA) is an increasingly popular approach to competency-based assessment (CBA), yet evaluation evidence is limited. This study aimed to identify and explore supervisor attitudes before and after implementing a novel PA, using a sequential explanatory mixed methods design. In phase one, a survey was used to identify supervisor perspectives on work-based placements, PA and CBA. Survey results were then applied to develop focus group questions to further explore supervisor attitudes. RESULTS PA was found to improve supervisor-student relationships by removing high-stakes assessment decisions and creating greater capacity for feedback and teaching, leading to a productive learning environment. Assessment was perceived as an important role, and supervisors wanted to feel valued and heard within PA. Trust was conceptualised as a triad between supervisor, student and university, and enabled supervisors to engage with PA, which was important for success. Supervisor learning of PA was experiential and often supported by students, highlighting the need for hands-on training. CONCLUSION Participants reported a high level of agreement with PA and CBA principles, which may have made them amenable to educational change. Further research is needed to explore the experience of all stakeholders and to understand how worldviews and culture influence assessment initiatives.
Affiliation(s)
- Janica Jamieson
- Department of Nutrition and Dietetics, Monash University, Melbourne, Australia
- School of Medical and Health Sciences, Edith Cowan University, Perth, Australia
- Margaret Hay
- Department of Nutrition and Dietetics, Monash University, Melbourne, Australia
- Simone Gibson
- Department of Nutrition and Dietetics, Monash University, Melbourne, Australia
- Claire Palermo
- Department of Nutrition and Dietetics, Monash University, Melbourne, Australia
21
Pinilla S, Kyrou A, Klöppel S, Strik W, Nissen C, Huwendiek S. Workplace-based assessments of entrustable professional activities in a psychiatry core clerkship: an observational study. BMC MEDICAL EDUCATION 2021; 21:223. [PMID: 33882926 PMCID: PMC8059233 DOI: 10.1186/s12909-021-02637-4]
Abstract
BACKGROUND Entrustable professional activities (EPAs) in competency-based, undergraduate medical education (UME) have led to new formative workplace-based assessments (WBA) using entrustment-supervision scales in clerkships. We conducted an observational, prospective cohort study to explore the usefulness of a WBA designed to assess core EPAs in a psychiatry clerkship. METHODS We analyzed changes in self-entrustment ratings of students and the supervisors' ratings per EPA. Timing and frequencies of learner-initiated WBAs based on a prospective entrustment-supervision scale and resultant narrative feedback were analyzed quantitatively and qualitatively. Predictors for indirect supervision levels were explored via regression analysis, and narrative feedback was coded using thematic content analysis. Students evaluated the WBA after each clerkship rotation. RESULTS EPA 1 ("Take a patient's history"), EPA 2 ("Assess physical & mental status") and EPA 8 ("Document & present a clinical encounter") were most frequently used for learner-initiated WBAs throughout the clerkship rotations in a sample of 83 students. Clinical residents signed off on the majority of the WBAs (71%). EPAs 1, 2, and 8 showed the largest increases in self-entrustment and received most of the indirect supervision level ratings. We found a moderate, positive correlation between self-entrusted supervision levels at the end of the clerkship and the number of documented entrustment-supervision ratings per EPA (p < 0.0001). The number of entrustment ratings explained 6.5% of the variance in the supervisors' ratings for EPA 1. Narrative feedback was documented for 79% (n = 214) of the WBAs. Most narratives addressed the Medical Expert role (77%, n = 208) and used reinforcement (59%, n = 161) as a feedback strategy. Students perceived the feedback as beneficial. CONCLUSIONS Using formative WBAs with an entrustment-supervision scale and prompts for written feedback facilitated targeted, high-quality feedback and effectively supported students' development toward self-entrusted, indirect supervision levels.
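As a note on the effect size reported in this abstract: for a single predictor, "variance explained" is the square of the correlation coefficient, so 6.5% corresponds to a correlation of roughly 0.25, consistent with the "moderate, positive" description. A quick illustrative check (arithmetic only, assuming a simple one-predictor model where R² = r²):

```python
import math

# For one predictor, R^2 equals the squared Pearson correlation,
# so r = sqrt(variance explained).
variance_explained = 0.065  # the 6.5% reported for EPA 1
r = math.sqrt(variance_explained)
print(round(r, 2))  # 0.25
```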
Affiliation(s)
- Severin Pinilla
- University Hospital of Old Age Psychiatry and Psychotherapy, University of Bern, Bern, Switzerland
- Department for Assessment and Evaluation, Institute for Medical Education, University of Bern, Bern, Switzerland
- Alexandra Kyrou
- University Hospital of Psychiatry and Psychotherapy, University of Bern, Bern, Switzerland
- Stefan Klöppel
- University Hospital of Old Age Psychiatry and Psychotherapy, University of Bern, Bern, Switzerland
- Werner Strik
- University Hospital of Psychiatry and Psychotherapy, University of Bern, Bern, Switzerland
- Christoph Nissen
- University Hospital of Psychiatry and Psychotherapy, University of Bern, Bern, Switzerland
- Sören Huwendiek
- Department for Assessment and Evaluation, Institute for Medical Education, University of Bern, Bern, Switzerland
22
Scheepers RA. Supporting the balance between well-being and performance in medical education. MEDICAL EDUCATION 2020; 54:499-501. [PMID: 32259314 PMCID: PMC7318574 DOI: 10.1111/medu.14173]
Abstract
The author reflects on how clinician teachers can support the balance between well-being and performance for trainees, as stress-related threats to performance can hamper the quality of patient care.
Affiliation(s)
- Renée A. Scheepers
- Research Group Socio-Medical Sciences, Erasmus School of Health Policy and Management, Erasmus University Rotterdam, Rotterdam, The Netherlands