1. Drumm BT, Bree R, Griffin CS, O'Leary N. Diversifying laboratory assessment modes broadens engagement with practical competencies in life science students. Advances in Physiology Education 2024;48:527-546. PMID: 38721652. DOI: 10.1152/advan.00257.2023.
Abstract
Laboratory practicals in life science subjects are traditionally assessed by written reports that reflect disciplinary norms for documenting experimental activities. However, the exclusive application of this assessment has the potential to engage only a narrow range of competencies. In this study, we explored how multiple modes of laboratory assessment might affect student perceptions of learned skills in a life science module. We hypothesized that while a mixture of assessments may not impact student summative performance, it might positively influence student perceptions of different skills that varied assessments allowed them to practice. This was informed by universal design for learning and teaching for understanding frameworks. In our study, in a third-year Bioscience program, written reports were complemented with group presentations and online quizzes via Moodle. Anonymous surveys evaluated whether this expanded portfolio of assessments promoted awareness of, and engagement with, a broader range of practical competencies. Aspects that influenced student preferences in assessment mode included time limitations, time investment, ability to practice new skills, links with lecture material, and experience of assessment anxiety. In particular, presentations were highlighted as promoting collaboration and communication and the quiz as an effective means of diversifying assessment schedules. A key takeaway from students was that while reports were important, an overreliance on them was detrimental. 
This study suggests that undergraduate life science students can benefit significantly from a holistic assessment strategy that complements reports with performance-based approaches that incorporate broader competencies and allow for greater student engagement and expression in undergraduate modules.
Affiliation(s)
- Bernard T Drumm: Department of Life and Health Science, Dundalk Institute of Technology, Dundalk, Louth, Ireland
- Ronan Bree: Department of Life and Health Science, Dundalk Institute of Technology, Dundalk, Louth, Ireland
- Caoimhin S Griffin: Department of Life and Health Science, Dundalk Institute of Technology, Dundalk, Louth, Ireland
- Niall O'Leary: School of Microbiology and Environmental Research Institute, University College Cork, Cork, Ireland
2. Ruczynski LI, Schouwenberg BJ, Custers E, Fluit CR, van de Pol MH. The influence of a digital clinical reasoning test on medical student learning behavior during clinical clerkships. Advances in Health Sciences Education: Theory and Practice 2024;29:935-947. PMID: 37851160. PMCID: PMC11208212. DOI: 10.1007/s10459-023-10288-x.
Abstract
Recently, a new digital clinical reasoning test (DCRT) was developed to evaluate students' clinical-reasoning skills. Although an assessment tool may be soundly constructed, it may still prove inadequate in practice by failing to function as intended. Therefore, more insight is needed into the effects of the DCRT in practice. Individual semi-structured interviews and template analysis were used to collect and process qualitative data. The template, based on the interview guide, contained six themes: (1) DCRT itself, (2) test debriefing, (3) reflection, (4) practice/workplace, (5) DCRT versus practice and (6) 'other'. Thirteen students were interviewed. The DCRT encourages students to engage more in formal education, self-study and workplace learning during their clerkships, particularly for those who received insufficient results. Although the faculty emphasizes the different purposes of the DCRT (assessment of/as/for learning), most students perceive the DCRT as an assessment of learning. This affects their motivation and the role they assign to it in their learning process. Although students appreciate the debriefing and reflection report for improvement, they struggle to fill the identified knowledge gaps due to the timing of receiving their results. Some students are supported by the DCRT in exhibiting lifelong learning behavior. This study has identified several ways in which the DCRT influences students' learning practices in a way that can benefit their clinical-reasoning skills. Additionally, it stresses the importance of ensuring the alignment of theoretical principles with real-world practice, both in the development and utilization of assessment tools and their content. Further research is needed to investigate the long-term impact of the DCRT on young physicians' working practice.
Affiliation(s)
- Larissa IA Ruczynski: Research on Learning and Education, Radboudumc Health Academy, Radboud University Medical Center, Gerard van Swietenlaan 2 (route 51), 6525 GB, Nijmegen, Netherlands
- Bas JJW Schouwenberg: Department of Pharmacology and Toxicology, Radboud University Medical Center Nijmegen, Nijmegen, the Netherlands; Department of Internal Medicine, Radboud University Medical Center Nijmegen, Nijmegen, the Netherlands
- Eugène Custers: Department of Online Learning and Instruction, Faculty of Educational Sciences, Open Universiteit, Heerlen, Netherlands
- Cornelia RMG Fluit: Research on Learning and Education, Radboudumc Health Academy, Radboud University Medical Center, Nijmegen, Netherlands
- Marjolein HJ van de Pol: Department of Primary and Community Care, Radboud University Medical Center Nijmegen, Nijmegen, Netherlands
3. Rienits H. The other side of the mark sheet: lessons learnt when medical students assess peers in formative clinical examinations. Front Med (Lausanne) 2024;11:1395466. PMID: 38903805. PMCID: PMC11187237. DOI: 10.3389/fmed.2024.1395466.
Abstract
This study aimed to investigate the experience of medical students assessing their cohort peers in formative clinical assessment. The exercise was designed to provide students with a formative experience prior to their summative assessment, and to determine what students could learn by being on the "other side of the mark sheet." Students were grateful for the experience, learning both from the assessment practice and from the individual written feedback provided immediately afterwards. They also described how much they learnt from seeing the assessment from the assessor's viewpoint, with many students commenting that they learnt more from being the "assessor" than from being the "student" in the process. When asked how they felt about being assessed by their peers, some students described the experience as more intimidating and stressful than assessment by clinicians. Notably, the findings also suggest that students' current learning context affects their attitudes to their peers as assessors. It is possible that the competitive cultural milieu of the teaching hospital environment has a negative effect on medical student collegiality and peer support.
Affiliation(s)
- Helen Rienits: Graduate School of Medicine, Faculty of Science, Medicine and Health, University of Wollongong, Wollongong, NSW, Australia
4. Lim YS, Willey JM. Evaluation and refinement of Self-Directed Learning Readiness Scale for medical students. Korean Journal of Medical Education 2024;36:175-188. PMID: 38835310. DOI: 10.3946/kjme.2024.294.
Abstract
PURPOSE This study evaluated the underlying subdomain structure of the Self-Directed Learning Readiness Scale (SDLRS) for medical students and refined the instrument to measure the subdomains, providing evidence for construct validity. Developing self-directed learners is a well-recognized goal amongst medical educators. The SDLRS has been used frequently; however, a lack of construct validity evidence makes it difficult to interpret results. METHODS To identify the valid subdomains of the SDLRS, items were calibrated with the graded response model (GRM) and the results were used to construct a 30-item short form. Short-form validity was evaluated by examining the correspondence between the total scores from the short form and the original instrument for individual students. RESULTS A five-subdomain model explained the SDLRS item response data reasonably well. The subdomains were: (1) initiative and independence in learning, (2) self-concept as an effective learner, (3) openness to learning opportunity, (4) love of learning, and (5) acceptance for one's own learning. The unidimensional GRM for each subdomain fit the data better than multidimensional models. The total scores from the refined short form and the original form were correlated at 0.98 and the mean difference was 1.33, providing validity evidence. Nearly 91% of 179 respondents were accurately classified within the low, average, and high readiness groups. CONCLUSION Sufficient evidence was obtained for the validity and reliability of the refined 30-item short form targeting five subdomains to measure medical students' readiness to engage in self-directed learning.
Affiliation(s)
- Youn Seon Lim: Quantitative and Mixed Methods Research Methodologies, Educational Studies, University of Cincinnati, Cincinnati, OH, USA
- Joanne M Willey: Department of Science Education, Donald and Barbara Zucker School of Medicine at Hofstra/Northwell, Hempstead, NY, USA
5. Cavaleiro I, de Carvalho Filho MA. Harnessing student feedback to transform teachers: role of emotions and relationships. Medical Education 2024;58:750-760. PMID: 37984443. DOI: 10.1111/medu.15264.
Abstract
INTRODUCTION Feedback is crucial to promote learning and improve performance. However, we lack a nuanced understanding of how medical teachers reflect on and internalise (or not) student feedback (SF). This study aims to fill this gap by exploring how teachers make sense of SF to improve their performance and nurture their personal and professional development. METHODS In this cross-sectional qualitative study based on a constructivist paradigm, 14 medical teachers individually drew a Rich Picture (RP) of a feedback experience in which they received informal or formal feedback from students, resulting in a personal or professional change. After the drawing, we interviewed the participants to deepen our understanding of teachers' experiences. We analysed the drawings and interview transcripts using an iterative process of thematic analysis. RESULTS SF that culminated in personal or professional change is a highly emotional experience for teachers, often with long-lasting consequences. It may threaten or reassure their self-concept and professional identity, generating feedback avoidance or feedback-seeking behaviour. SF is particularly powerful in transforming teaching practices when teachers feel connected to students through an honest and constructive relationship. Remarkably, some teachers intentionally build relationships with certain (selected) students to get 'qualified' feedback. SF acceptance also increases when teachers are open to receiving feedback and there is an institutional culture that values feedback. Finally, medical teachers believe that formal (planned) feedback is relevant to improve the curriculum, while informal (spontaneous) feedback is important for promoting teachers' personal and professional development. DISCUSSION SF has the potential to become a transformative learning experience for teachers. 
The student-teacher relationship and teachers' emotional reactions affect the way teachers make sense of and internalise SF and enact behavioural change. Understanding the complexity surrounding SF is vital for supporting teachers in seizing opportunities for growth and in nurturing a meaningful relationship with the act of teaching.
Affiliation(s)
- Inês Cavaleiro: School of Medicine, University of Minho, Braga, Portugal
- Marco Antonio de Carvalho Filho: Wenckebach Institute (WIOO) - Lifelong Learning, Education, and Assessment Research Network (LEARN), University Medical Center Groningen, Groningen, The Netherlands
6. Torre D, Daniel M, Ratcliffe T, Durning SJ, Holmboe E, Schuwirth L. Programmatic Assessment of Clinical Reasoning: New Opportunities to Meet an Ongoing Challenge. Teaching and Learning in Medicine 2024:1-9. PMID: 38794865. DOI: 10.1080/10401334.2024.2333921.
Abstract
Issue: Clinical reasoning is essential to physicians' competence, yet assessment of clinical reasoning remains a significant challenge. Clinical reasoning is a complex, evolving, non-linear, context-driven, and content-specific construct which arguably cannot be assessed at one point in time or with a single method. This has posed challenges for educators for many decades, despite significant development of individual assessment methods. Evidence: Programmatic assessment is a systematic assessment approach that is gaining momentum across health professions education. Programmatic assessment, and in particular assessment for learning, is well suited to address the challenges of clinical reasoning assessment. Several key principles of programmatic assessment are particularly well aligned with developing a system to assess clinical reasoning: longitudinality, triangulation, use of a mix of assessment methods, proportionality, implementation of intermediate evaluations/reviews with faculty coaches, use of assessment for feedback, and an increase in learners' agency. Repeated exposure and measurement are critical to developing a clinical reasoning assessment narrative, so the assessment approach should optimally be longitudinal, providing multiple opportunities for growth and development. Triangulation provides a lens to assess the multidimensionality and contextuality of clinical reasoning and of its different, yet related, components, using a mix of assessment methods. Proportionality ensures that the richness of the information on which conclusions are drawn is commensurate with the stakes of the decision. Coaching facilitates the development of a feedback culture and allows growth to be assessed over time, while enhancing learners' agency.
Implications: A programmatic assessment model of clinical reasoning that is developmentally oriented, optimizes learning through feedback and coaching, uses multiple assessment methods, and provides opportunity for meaningful triangulation of data can help address some of the challenges of clinical reasoning assessment.
Affiliation(s)
- Dario Torre: Department of Medical Education, University of Central Florida, Orlando, FL, USA
- Michelle Daniel: Department of Emergency Medicine, University of California, San Diego, CA, USA
- Temple Ratcliffe: Department of Medicine, The Joe R and Teresa Lozano Long School of Medicine at University of Texas Health, Texas, USA
- Steven J Durning: Center for Health Profession Education, Uniformed Services University Center for Neuroscience and Regenerative Medicine, Bethesda, Maryland, USA
- Eric Holmboe: Milestones Development and Evaluation, Accreditation Council for Graduate Medical Education, Chicago, IL, USA
7. Nichol H, Turnnidge J, Dalgarno N, Trier J. Navigating the paradox: exploring resident experiences of vulnerability. Medical Education 2024. PMID: 38757457. DOI: 10.1111/medu.15426.
Abstract
INTRODUCTION Learning and growth in postgraduate medical education (PGME) often require vulnerability, defined as a state of openness to uncertainty, risk, and emotional exposure. However, vulnerability can threaten a resident's credibility and professional identity. Despite this tension, studies examining vulnerability in PGME are limited. As such, this study aims to explore residents' experiences of vulnerability, including the factors that influence vulnerability in PGME. METHODS Using a constructivist grounded theory approach, individual semi-structured interviews were conducted with 15 residents from 10 different specialities. Interview transcripts were coded and analysed iteratively. Themes were identified and relationships among themes were examined to develop a theory describing vulnerability in PGME. RESULTS Residents characterised vulnerability as a paradox represented by two overarching themes. 'Experiencing the tensions of vulnerability' explores the polarities between being a fallible, authentic learner and an infallible, competent professional. 'Navigating the vulnerability paradox' outlines the factors influencing the experience of vulnerability and its associated outcomes at the intrapersonal, interpersonal, and systems levels. Residents described needing to have the bandwidth to face the risks and emotional labour of vulnerability. Opportunities to build connections with social agents, including clinical teachers and peers, facilitated vulnerability. The sociocultural context shaped both the experience and outcomes of vulnerability as residents faced the symbolic mask of professionalism. CONCLUSION Residents experience vulnerability as a paradox shaped by intrapersonal, interpersonal, and systems level factors. These findings capture the nuance and complexity of vulnerability in PGME and offer insight into creating supportive learning environments that leverage the benefits of vulnerability while acknowledging its risks. 
There is a need to translate this understanding into systems-based change to create supportive PGME environments, which value and celebrate vulnerability.
Affiliation(s)
- Heather Nichol: Department of Physical Medicine and Rehabilitation, Queen's University, Ontario, Canada
- Jennifer Turnnidge: Office of Professional Development and Educational Scholarship, Queen's University, Ontario, Canada
- Nancy Dalgarno: Office of Professional Development and Educational Scholarship, Queen's University, Ontario, Canada; Providence Care Hospital, Ontario, Canada
- Jessica Trier: Department of Physical Medicine and Rehabilitation, Queen's University, Ontario, Canada; Providence Care Hospital, Ontario, Canada
8. Andreou V, Peters S, Eggermont J, Schoenmakers B. Co-designing Entrustable Professional Activities in General Practitioner's training: a participatory research study. BMC Medical Education 2024;24:549. PMID: 38760773. PMCID: PMC11100052. DOI: 10.1186/s12909-024-05530-y.
Abstract
BACKGROUND In medical education, Entrustable Professional Activities (EPAs) have been gaining momentum for the last decade. To be successfully implemented, such novel educational interventions must accommodate competing needs: those of curriculum designers and those of users in practice. METHODS We employed a participatory research design, engaging diverse stakeholders in designing an EPA framework. This iterative approach allowed for continuous refinement, shaping a comprehensive blueprint comprising 60 EPAs. Our approach involved two iterative cycles. In the first cycle, we utilized a modified Delphi methodology with clinical competence committee (CCC) members, asking them whether each EPA should be included. In the second cycle, we used semi-structured interviews with General Practitioner (GP) trainers and trainees to explore their perceptions of the framework and refine it accordingly. RESULTS During the first cycle, 14 CCC members agreed that all 60 EPAs should be included in the framework. Regarding the formulation of each EPA, 20 comments were given and 16 adaptations were made to enhance clarity. In the second cycle, the semi-structured interviews with trainers and trainees echoed the same findings, emphasizing the need for the EPA framework to improve workplace-based assessment and its relevance to real-world clinical scenarios. However, trainees and trainers expressed concerns about implementation challenges, such as the large number of EPAs to be assessed and the perception of EPAs as potentially high stakes. CONCLUSION Accommodating competing stakeholders' needs during the design process can significantly enhance EPA implementation. Recognizing users as experts in their own experiences empowers them, enabling a priori identification of implementation barriers and potential pitfalls.
By embracing a collaborative approach, wherein diverse stakeholders contribute their unique viewpoints, we can create effective educational interventions that address complex assessment challenges.
Affiliation(s)
- Vasiliki Andreou: Academic Centre for General Practice, Department of Public Health and Primary Care, KU Leuven, Leuven, Belgium; Department of Public Health and Primary Care, KU Leuven, Box 7001, Kapucijnenvoer 7, Leuven, 3000, Belgium
- Sanne Peters: Academic Centre for General Practice, Department of Public Health and Primary Care, KU Leuven, Leuven, Belgium; School of Health Sciences, Faculty of Medicine, Dentistry and Health Sciences, The University of Melbourne, Melbourne, Australia
- Jan Eggermont: Department of Cellular and Molecular Medicine, KU Leuven, Leuven, Belgium
- Birgitte Schoenmakers: Academic Centre for General Practice, Department of Public Health and Primary Care, KU Leuven, Leuven, Belgium
9. Kalun P, Braund H, McGuire N, McEwen L, Mann S, Trier J, Schultz K, Curtis R, McGuire A, Pereira I, Dagnone D. Was it all worth it? A graduating resident perspective on CBME. Medical Teacher 2024:1-9. PMID: 38742827. DOI: 10.1080/0142159x.2024.2339408.
Abstract
BACKGROUND Our institution simultaneously transitioned all postgraduate specialty training programs to competency-based medical education (CBME) curricula. We explored experiences of CBME-trained residents graduating from five-year programs to inform the continued evolution of CBME in Canada. METHODS We utilized qualitative description to explore residents' experiences and inform continued CBME improvement. Data were collected from fifteen residents from various specialties through focus groups, interviews, and written responses. The data were analyzed inductively, using conventional content analysis. RESULTS We identified five overarching themes. Three themes provided insight into residents' experiences with CBME, describing discrepancies between the intentions of CBME and how it was enacted, challenges with implementation, and variation in residents' experiences. Two themes - adaptations and recommendations - could inform meaningful refinements for CBME going forward. CONCLUSIONS Residents graduating from CBME training programs offered a balanced perspective, including criticism and recognition of the potential value of CBME when implemented as intended. Their experiences provide a better understanding of residents' needs within CBME curricula, including greater balance and flexibility within programs of assessment and curricula. Many challenges that residents faced with CBME could be alleviated by greater accountability at program, institutional, and national levels. We conclude with actionable recommendations for addressing residents' needs in CBME.
Affiliation(s)
- Portia Kalun: Queen's Health Sciences, Queen's University, Kingston, Canada
- Heather Braund: Queen's Health Sciences, Queen's University, Kingston, Canada
- Natalie McGuire: Queen's Health Sciences, Queen's University, Kingston, Canada
- Laura McEwen: Queen's Health Sciences, Queen's University, Kingston, Canada
- Steve Mann: Queen's Health Sciences, Queen's University, Kingston, Canada
- Jessica Trier: Queen's Health Sciences, Queen's University, Kingston, Canada; Department of Physical Medicine and Rehabilitation, Queen's University, Kingston, Canada; Providence Care Hospital, Kingston, Canada
- Karen Schultz: Queen's Health Sciences, Queen's University, Kingston, Canada
- Rachel Curtis: Queen's Health Sciences, Queen's University, Kingston, Canada; Department of Ophthalmology, Queen's University, Kingston, Canada
- Andrew McGuire: Queen's Health Sciences, Queen's University, Kingston, Canada
- Ian Pereira: Queen's Health Sciences, Queen's University, Kingston, Canada
- Damon Dagnone: Queen's Health Sciences, Queen's University, Kingston, Canada; Department of Emergency Medicine, Queen's University, Kingston, Canada
10. Woodworth GE, Goldstein ZT, Ambardekar AP, Arthur ME, Bailey CF, Booth GJ, Carney PA, Chen F, Duncan MJ, Fromer IR, Hallman MR, Hoang T, Isaak R, Klesius LL, Ladlie BL, Mitchell SA, Miller Juve AK, Mitchell JD, McGrath BJ, Shepler JA, Sims CR, Spofford CM, Tanaka PP, Maniker RB. Development and Pilot Testing of a Programmatic System for Competency Assessment in US Anesthesiology Residency Training. Anesth Analg 2024;138:1081-1093. PMID: 37801598. DOI: 10.1213/ane.0000000000006667.
Abstract
BACKGROUND In 2018, a set of entrustable professional activities (EPAs) and procedural skills assessments were developed for anesthesiology training, but they did not assess all the Accreditation Council for Graduate Medical Education (ACGME) milestones. The aims of this study were to (1) remap the 2018 EPA and procedural skills assessments to the revised ACGME Anesthesiology Milestones 2.0, (2) develop new assessments that, combined with the original assessments, create a system of assessment addressing all level 1 to 4 milestones, and (3) provide evidence for the validity of the assessments. METHODS Using a modified Delphi process, a panel of anesthesiology education experts remapped the original assessments developed in 2018 to the Anesthesiology Milestones 2.0 and developed new assessments to create a system that assessed all level 1 through 4 milestones. Following a 24-month pilot at 7 institutions, the number of EPA and procedural skill assessments and mean scores were computed at the end of the academic year. Milestone achievement and subcompetency data for assessments from a single institution were compared to scores assigned by the institution's clinical competency committee (CCC). RESULTS New assessment development, 2 months of testing and feedback, and revisions resulted in 5 new EPAs, 11 nontechnical skills assessments (NTSAs), and 6 objective structured clinical examinations (OSCEs). Combined with the original 20 EPAs and procedural skills assessments, the new system of assessment addresses 99% of level 1 to 4 Anesthesiology Milestones 2.0. During the 24-month pilot, aggregate mean EPA and procedural skill scores significantly increased with year in training. System subcompetency scores correlated significantly with 15 of 23 (65.2%) corresponding CCC scores at a single institution, but 8 of the correlations (36.4%) were <0.30, illustrating poor correlation.
CONCLUSIONS A panel of experts developed a set of EPAs, procedural skill assessments, NTSAs, and OSCEs to form a programmatic system of assessment for anesthesiology residency training in the United States. The method used to develop and pilot test the assessments, the progression of assessment scores with time in training, and the correlation of assessment scores with CCC scoring of milestone achievement provide evidence for the validity of the assessments.
Affiliation(s)
- Glenn E Woodworth: Department of Anesthesiology and Perioperative Medicine, Oregon Health & Science University, Portland, Oregon
- Zachary T Goldstein: Department of Anesthesiology, Cedars Sinai Medical Center, Los Angeles, California
- Aditee P Ambardekar: Department of Anesthesiology and Pain Management, University of Texas Southwestern Medical Center, Dallas, Texas
- Mary E Arthur: Department of Anesthesiology and Perioperative Medicine, Medical College of Georgia at Augusta University, Augusta, Georgia
- Caryl F Bailey: Department of Anesthesiology and Perioperative Medicine, Medical College of Georgia at Augusta University, Augusta, Georgia
- Gregory J Booth: Uniformed Services University of the Health Sciences, Department of Anesthesiology and Pain Medicine, Naval Medical Center Portsmouth, Portsmouth, Virginia
- Patricia A Carney: Division of Hospital Medicine, Department of Family Medicine and Internal Medicine, Oregon Health & Science University, Portland, Oregon
- Fei Chen: Department of Anesthesiology, University of North Carolina at Chapel Hill School of Medicine, Chapel Hill, North Carolina
- Michael J Duncan: Department of Anesthesiology, University of Missouri-Kansas City, Kansas City, Missouri
- Ilana R Fromer: Department of Anesthesiology, University of Minnesota, Minneapolis, Minnesota
- Matthew R Hallman: Department of Anesthesiology and Pain Medicine, University of Washington, Seattle, Washington
- Thomas Hoang: Department of Anesthesiology and Perioperative Medicine, Oregon Health & Science University, Portland, Oregon
- Robert Isaak: Department of Anesthesiology, University of North Carolina, Chapel Hill, North Carolina
- Lisa L Klesius: Department of Anesthesiology, University of Wisconsin School of Medicine and Public Health, Madison, Wisconsin
- Beth L Ladlie: Department of Anesthesiology, Mayo Clinic, Rochester, Minnesota
- Amy K Miller Juve: Department of Anesthesiology and Perioperative Medicine, Oregon Health & Science University, Portland, Oregon
- John D Mitchell: Department of Anesthesiology, Critical Care, and Perioperative Medicine, Henry Ford Health, Detroit, Michigan
- Brian J McGrath: Department of Anesthesiology, University of Florida College of Medicine-Jacksonville, Jacksonville, Florida
- John A Shepler: Department of Anesthesiology, University of Wisconsin School of Medicine and Public Health, Madison, Wisconsin
- Charles R Sims: Department of Anesthesiology & Perioperative Medicine, Mayo Clinic, Rochester, Minnesota
- Christina M Spofford: Department of Anesthesiology, Medical College of Wisconsin, Milwaukee, Wisconsin
- Pedro P Tanaka: Department of Anesthesiology, Stanford University, Stanford, California
- Robert B Maniker: Department of Anesthesiology, Columbia University, New York, New York
Collapse
|
11
|
Braund H, Dalgarno N, O’Dell R, Taylor DR. Making assessment a team sport: a qualitative study of facilitated group feedback in internal medicine residency. CANADIAN MEDICAL EDUCATION JOURNAL 2024; 15:14-26. [PMID: 38827914 PMCID: PMC11139793 DOI: 10.36834/cmej.75250] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Indexed: 06/05/2024]
Abstract
Purpose Competency-based medical education relies on feedback from workplace-based assessment (WBA) to direct learning. Unfortunately, WBAs often lack rich narrative feedback and show bias towards Medical Expert aspects of care. Building on research examining interactive assessment approaches, the Queen's University Internal Medicine residency program introduced a facilitated, team-based assessment initiative ("Feedback Fridays") in July 2017, aimed at improving holistic assessment of resident performance on the inpatient medicine teaching units. In this study, we aim to explore how Feedback Fridays contributed to formative assessment of Internal Medicine residents within our current model of competency-based training. Method A total of 53 residents participated in facilitated, biweekly group assessment sessions during the 2017 and 2018 academic years. Each session was a 30-minute facilitated assessment discussion with one inpatient team, which included medical students, residents, and their supervising attending. Feedback from the discussion was collected, summarized, and documented in narrative form in electronic WBA forms by the program's assessment officer for the residents. For research purposes, verbatim transcripts of feedback sessions were analyzed thematically. Results The researchers identified four major themes for feedback: communication, intra- and inter-personal awareness, leadership and teamwork, and learning opportunities. Although feedback related to a broad range of activities, it showed strong emphasis on competencies within the intrinsic CanMEDS roles. Additionally, a clear formative focus in the feedback was another important finding. Conclusions The introduction of facilitated team-based assessment in the Queen's Internal Medicine program filled an important gap in WBA by providing learners with detailed feedback across all CanMEDS roles and by providing constructive recommendations for identified areas for improvement.
Affiliation(s)
- Heather Braund, Office of Professional Development and Educational Scholarship, Ontario, Canada; Faculty of Education, Queen’s University, Ontario, Canada
- Nancy Dalgarno, Office of Professional Development and Educational Scholarship, Ontario, Canada; Department of Biomedical and Molecular Sciences, Faculty of Health Sciences, Queen’s University, Ontario, Canada
- Rachel O’Dell, Department of Internal Medicine, Faculty of Health Sciences, Queen’s University, Ontario, Canada
- David R Taylor, Academy for Teachers and Educators, Department of Medicine, Queen’s University, Ontario, Canada

12
Becker M, Shields RK, Sass KJ. Psychometric Analysis of an Integrated Clinical Education Tool for Physical Therapists. JOURNAL, PHYSICAL THERAPY EDUCATION 2024:00001416-990000000-00108. [PMID: 38684094 DOI: 10.1097/jte.0000000000000341] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/28/2023] [Accepted: 01/02/2024] [Indexed: 05/02/2024]
Abstract
INTRODUCTION Integrated clinical education (ICE) courses require opportunities for practice, assessment of performance, and specific feedback. The purposes of this study were to 1) analyze the internal consistency of a tool for evaluating students during ICE courses, 2) examine the responsiveness of the tool between midterm and final assessments, and 3) develop a model to predict the final score from midterm assessments and explore relationships among the 6 domains. REVIEW OF LITERATURE Several clinical education assessment tools have been developed for terminal clinical experiences, but few have focused on the needs of learners during the ICE. SUBJECTS Eighty-five student assessments were collected from 2 consecutive cohorts of physical therapist students in a first full-time ICE course. METHODS The tool contained 29 items within 6 domains. Items were rated on a 5-point scale from dependent to indirect supervision. Cronbach's alpha was used to analyze the internal consistency of the tool, whereas responsiveness was examined with paired t-test and Cohen's d. A best subsets regression model was used to determine the best combination of midterm variables that predicted the final total scores. Coefficients of determination (R2) were calculated to explore the relationships among domains. RESULTS The tool was found to have high internal consistency at midterm and final assessment (α = 0.97 and 0.98, respectively). Mean scores increased over time for each domain score and for the total score (P < .001; d = 1.5). Scores in 3 midterm domains predicted more than 57% of the variance in the final total score. DISCUSSION AND CONCLUSION Results support the use of this tool to measure student performance and growth in a first full-time ICE course. Targeted measurement of students' abilities in ICE courses assists with differentiating formative and summative learning needed to achieve academic success.
Affiliation(s)
- Marcie Becker, clinical assistant professor/codirector of clinical education, Department of Physical Therapy and Rehabilitation Science, University of Iowa
- Richard K Shields, chair/department executive officer, Department of Physical Therapy and Rehabilitation Science, University of Iowa, 1-252 Medical Education Building, Iowa City, IA (address all correspondence to Richard K. Shields)
- Kelly J Sass, clinical assistant professor/codirector of clinical education, Department of Physical Therapy and Rehabilitation Science, University of Iowa

13
Fuentes-Cimma J, Sluijsmans D, Riquelme A, Villagran I, Isbej L, Olivares-Labbe MT, Heeneman S. Designing feedback processes in the workplace-based learning of undergraduate health professions education: a scoping review. BMC MEDICAL EDUCATION 2024; 24:440. [PMID: 38654360 PMCID: PMC11036781 DOI: 10.1186/s12909-024-05439-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/25/2023] [Accepted: 04/17/2024] [Indexed: 04/25/2024]
Abstract
BACKGROUND Feedback processes are crucial for learning, guiding improvement, and enhancing performance. In workplace-based learning settings, diverse teaching and assessment activities are advocated to be designed and implemented, generating feedback that students use, with proper guidance, to close the gap between current and desired performance levels. Since productive feedback processes rely on observed information regarding a student's performance, it is imperative to establish structured feedback activities within undergraduate workplace-based learning settings. However, these settings are characterized by their unpredictable nature, which can either promote learning or present challenges in offering structured learning opportunities for students. This scoping review maps literature on how feedback processes are organised in undergraduate clinical workplace-based learning settings, providing insight into the design and use of feedback. METHODS A scoping review was conducted. Studies were identified from seven databases and ten relevant journals in medical education. The screening process was performed independently in duplicate with the support of the StArt program. Data were organized in a data chart and analyzed using thematic analysis. The feedback loop with a sociocultural perspective was used as a theoretical framework. RESULTS The search yielded 4,877 papers, and 61 were included in the review. Two themes were identified in the qualitative analysis: (1) The organization of the feedback processes in workplace-based learning settings, and (2) Sociocultural factors influencing the organization of feedback processes. The literature describes multiple teaching and assessment activities that generate feedback information. Most papers described experiences and perceptions of diverse teaching and assessment feedback activities. Few studies described how feedback processes improve performance. 
Sociocultural factors such as establishing a feedback culture, enabling stable and trustworthy relationships, and enhancing student feedback agency are crucial for productive feedback processes. CONCLUSIONS This review identified concrete ideas regarding how feedback could be organized within the clinical workplace to promote feedback processes. The feedback encounter should be organized to allow follow-up of the feedback, i.e., working on required learning and performance goals on the next occasion. Educational programs should design feedback processes by appropriately planning subsequent tasks and activities. More insight is needed into designing a full-loop feedback process, with specific attention to effective feedforward practices.
Affiliation(s)
- Javiera Fuentes-Cimma, Department of Health Sciences, Faculty of Medicine, Pontificia Universidad Católica de Chile, Avenida Vicuña Mackenna 4860, Macul, Santiago, Chile; School of Health Professions Education, Maastricht University, Maastricht, Netherlands
- Arnoldo Riquelme, Centre for Medical and Health Profession Education, Department of Gastroenterology, Faculty of Medicine, Pontificia Universidad Católica de Chile, Santiago, Chile
- Ignacio Villagran, Department of Health Sciences, Faculty of Medicine, Pontificia Universidad Católica de Chile, Avenida Vicuña Mackenna 4860, Macul, Santiago, Chile
- Lorena Isbej, School of Health Professions Education, Maastricht University, Maastricht, Netherlands; School of Dentistry, Faculty of Medicine, Pontificia Universidad Católica de Chile, Santiago, Chile
- Sylvia Heeneman, Department of Pathology, Faculty of Health, Medicine and Health Sciences, Maastricht University, Maastricht, Netherlands

14
van Wijk EV, van Blankenstein FM, Donkers J, Janse RJ, Bustraan J, Adelmeijer LGM, Dubois EA, Dekker FW, Langers AMJ. Does 'summative' count? The influence of the awarding of study credits on feedback use and test-taking motivation in medical progress testing. ADVANCES IN HEALTH SCIENCES EDUCATION : THEORY AND PRACTICE 2024:10.1007/s10459-024-10324-4. [PMID: 38502460 DOI: 10.1007/s10459-024-10324-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/11/2023] [Accepted: 03/03/2024] [Indexed: 03/21/2024]
Abstract
Despite the increasing implementation of formative assessment in medical education, its effect on learning behaviour remains questionable. This effect may depend on how students value formative and summative assessments differently. Informed by Expectancy Value Theory, we compared test preparation, feedback use, and test-taking motivation of medical students who either took a purely formative progress test (formative PT-group) or a progress test that yielded study credits (summative PT-group). In a mixed-methods study design, we triangulated quantitative questionnaire data (n = 264), logging data of an online PT feedback system (n = 618), and qualitative interview data (n = 21) to compare feedback use and test-taking motivation between the formative PT-group (n = 316) and the summative PT-group (n = 302). Self-reported and actual feedback consultation was higher in the summative PT-group. Test preparation and active feedback use were relatively low and similar in both groups. Both quantitative and qualitative results showed that the motivation to prepare and consult feedback relates to how students value the assessment. In the interview data, a link could be made with goal orientation theory, as performance-oriented students perceived the formative PT as not important due to the lack of study credits. This led to low test-taking effort and feedback consultation after the formative PT. In contrast, learning-oriented students valued the formative PT and used it for self-study or self-assessment to gain feedback. Our results indicate that most students are less motivated to put effort into the test and use feedback when there are no direct consequences. A supportive assessment environment that emphasizes recognition of the value of formative testing is required to motivate students to use feedback for learning.
Affiliation(s)
- Elise V van Wijk, Center for Innovation in Medical Education, Leiden University Medical Center, Leiden, The Netherlands
- Floris M van Blankenstein, Center for Innovation in Medical Education, Leiden University Medical Center, Leiden, The Netherlands
- Jeroen Donkers, School of Health Professions Education, Faculty of Health, Medicine and Life Sciences, Maastricht University, Maastricht, The Netherlands
- Roemer J Janse, Department of Clinical Epidemiology, Leiden University Medical Center, Leiden, The Netherlands
- Jacqueline Bustraan, Center for Innovation in Medical Education, Leiden University Medical Center, Leiden, The Netherlands
- Liesbeth G M Adelmeijer, Center for Innovation in Medical Education, Leiden University Medical Center, Leiden, The Netherlands
- Eline A Dubois, Center for Innovation in Medical Education, Leiden University Medical Center, Leiden, The Netherlands
- Friedo W Dekker, Department of Clinical Epidemiology, Leiden University Medical Center, Leiden, The Netherlands
- Alexandra M J Langers, Department of Gastroenterology and Hepatology, Leiden University Medical Center, Albinusdreef 2, 2333 ZA, Leiden, The Netherlands

15
van Wijk EV, van Blankenstein FM, Janse RJ, Dubois EA, Langers AMJ. Understanding students' feedback use in medical progress testing: A qualitative interview study. MEDICAL EDUCATION 2024. [PMID: 38462812 DOI: 10.1111/medu.15378] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/16/2023] [Revised: 02/08/2024] [Accepted: 02/20/2024] [Indexed: 03/12/2024]
Abstract
BACKGROUND Active engagement with feedback is crucial for feedback to be effective and improve students' learning and achievement. Medical students are provided feedback on their development in the progress test (PT), which has been implemented in various medical curricula, although its format, integration and feedback differ across institutions. Existing research on engagement with feedback in the context of PT is not sufficient to make a definitive judgement on what works and which barriers exist. Therefore, we conducted an interview study to explore students' feedback use in medical progress testing. METHODS All Dutch medical students participate in a national, curriculum-independent PT four times a year. This mandatory test, composed of multiple-choice questions, provides students with written feedback on their scores. Furthermore, an answer key is available to review their answers. Semi-structured interviews were conducted with 21 preclinical and clinical medical students who participated in the PT. Template analysis was performed on the qualitative data using a priori themes based on previous research on feedback use. RESULTS Template analysis revealed that students faced challenges in crucial internal psychological processes that impact feedback use, including 'awareness', 'cognizance', 'agency' and 'volition'. Factors such as stakes, available time, feedback timing and feedback presentation contributed to these difficulties, ultimately hindering feedback use. Notably, feedback engagement was higher during clinical rotations, and students were interested in the feedback when seeking insights into their performance level and career perspectives. CONCLUSION Our study enhanced the understanding of students' feedback utilisation in medical progress testing by identifying key processes and factors that impact feedback use. 
By recognising and addressing barriers in feedback use, we can improve both student and teacher feedback literacy, thereby transforming the PT into a more valuable learning tool.
Affiliation(s)
- Elise V van Wijk, Center for Innovation in Medical Education, Leiden University Medical Center, Leiden, The Netherlands
- Floris M van Blankenstein, Center for Innovation in Medical Education, Leiden University Medical Center, Leiden, The Netherlands
- Roemer J Janse, Department of Clinical Epidemiology, Leiden University Medical Center, Leiden, The Netherlands
- Eline A Dubois, Center for Innovation in Medical Education, Leiden University Medical Center, Leiden, The Netherlands
- Alexandra M J Langers, Center for Innovation in Medical Education, Leiden University Medical Center, Leiden, The Netherlands; Department of Gastroenterology and Hepatology, Leiden University Medical Center, Leiden, The Netherlands

16
Ginsburg S, Stroud L, Brydges R, Melvin L, Hatala R. Dual purposes by design: exploring alignment between residents' and academic advisors' documents in a longitudinal program. ADVANCES IN HEALTH SCIENCES EDUCATION : THEORY AND PRACTICE 2024:10.1007/s10459-024-10318-2. [PMID: 38438699 DOI: 10.1007/s10459-024-10318-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/27/2023] [Accepted: 02/04/2024] [Indexed: 03/06/2024]
Abstract
Longitudinal academic advising (AA) and coaching programs are increasingly implemented in competency based medical education (CBME) to help residents reflect and act on the voluminous assessment data they receive. Documents created by residents for purposes of reflection are often used for a second, summative purpose-to help competence committees make decisions-which may be problematic. Using inductive thematic analysis, we analyzed written comments generated by 21 resident-AA dyads in one large internal medicine program who met over a 2 year period to determine what residents write when asked to reflect, how this aligns with what the AAs report, and what changes occur over time (total 109 resident self-reflections and 105 AA reports). Residents commented more on their developing autonomy, progress and improvement than AAs, who commented far more on performance measures. Over time, residents' writing shifted away from intrinsic roles, patient care and improvement towards what AAs focused on, including getting EPAs (entrustable professional activities), studying and exams. For EPAs, the emphasis was on getting sufficient numbers rather than reflecting on what residents were learning. Our findings challenge the practice of dual-purposing documents, by questioning the blurring of formative and summative intent, the structure of forms and their multiple conflicting purposes, and assumptions about the advising relationship over time. Our study suggests a need to re-evaluate how reflective documents are used in CBME programs. Further research should explore whether and how documentation can best be used to support resident growth and development.
Affiliation(s)
- Shiphra Ginsburg, Department of Medicine, Mount Sinai Hospital, Toronto, ON, Canada; Wilson Centre for Research in Education, University Health Network, Toronto, ON, Canada
- Lynfa Stroud, Department of Medicine, Sunnybrook Health Sciences Centre, Toronto, ON, Canada; Temerty Faculty of Medicine, University of Toronto, Toronto, ON, Canada
- Ryan Brydges, Wilson Centre for Research in Education, University Health Network, Toronto, ON, Canada; Temerty Faculty of Medicine, University of Toronto, Toronto, ON, Canada; Li Ka Shing Knowledge Institute, St. Michael's Hospital, Unity Health Toronto, Toronto, ON, Canada
- Lindsay Melvin, Temerty Faculty of Medicine, University of Toronto, Toronto, ON, Canada; Department of Medicine, University Health Network, Toronto, ON, Canada
- Rose Hatala, Department of Medicine, University of British Columbia, Vancouver, BC, Canada; Centre for Health Education Scholarship, University of British Columbia, Vancouver, Canada

17
Sahi N, Humphrey-Murto S, Brennan EE, O'Brien M, Hall AK. Current use of simulation for EPA assessment in emergency medicine. CAN J EMERG MED 2024; 26:179-187. [PMID: 38374281 DOI: 10.1007/s43678-024-00649-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/04/2023] [Accepted: 01/12/2024] [Indexed: 02/21/2024]
Abstract
OBJECTIVE Approximately five years ago, the Royal College emergency medicine programs in Canada implemented a competency-based paradigm and introduced Entrustable Professional Activities (EPAs), units of professional activity used to assess trainees. Many competency-based medical education (CBME) curricula involve assessing for entrustment through observations of EPAs. While EPAs are frequently assessed in clinical settings, simulation is also used. This study aimed to characterize the use of simulation for EPA assessment. METHODS A study interview guide was jointly developed by all study authors and followed best practices for survey development. National interviews were conducted with program directors or assistant program directors from all the Royal College emergency medicine programs across Canada. Interviews were conducted over Microsoft Teams and were recorded and transcribed using the Microsoft Teams transcription service. Sample transcripts were analyzed for theme development. Themes were then reviewed by co-authors to ensure they were representative of the participants' views. RESULTS A 64.7% response rate was achieved. Simulation has been widely adopted by EM training programs. All interviewees supported the use of simulation for EPA assessment for many reasons; however, program directors acknowledged limitations, and thematic analysis revealed certain themes and tensions around using simulation for EPA assessment.
Thematic analysis revealed six major themes: widespread support for the use of simulation for EPA assessment, concerns regarding the potential for EPA assessment to become a "tick-box" exercise, logistical barriers limiting the use of simulation for EPA assessment, varied perceptions about the authenticity of using simulation for EPA assessment, the potential for simulation for EPA assessment to compromise learner psychological safety, and suggestions for optimizing the use of simulation for EPA assessment. CONCLUSIONS Our findings offer insight for other programs and specialties on how simulation for EPA assessment can best be utilized. Programs should use these findings when considering using simulation for EPA assessment.
Affiliation(s)
- Nidhi Sahi, Department of Innovation in Medical Education (DIME), University of Ottawa, Ottawa, ON, Canada
- Susan Humphrey-Murto, Department of Medicine, University of Ottawa, Ottawa, ON, Canada; Tier 2 Research Chair in Medical Education and Fellowship Director, Medical Education Research, University of Ottawa, Ottawa, ON, Canada
- Erin E Brennan, Department of Emergency Medicine, Queen's University, Kingston, ON, Canada
- Michael O'Brien, Emergency Medicine, The Ottawa Hospital, Ottawa, ON, Canada; Department of Innovation in Medical Education, University of Ottawa, Ottawa, ON, Canada
- Andrew K Hall, Department of Emergency Medicine, University of Ottawa, Ottawa, ON, Canada; Royal College of Physicians and Surgeons of Canada, Ottawa, ON, Canada

18
Valestrand EA, Kvernenes M, Kinsella EA, Hunskaar S, Schei E. Transforming self-experienced vulnerability into professional strength: a dialogical narrative analysis of medical students' reflective writing. ADVANCES IN HEALTH SCIENCES EDUCATION : THEORY AND PRACTICE 2024:10.1007/s10459-024-10317-3. [PMID: 38401015 DOI: 10.1007/s10459-024-10317-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/20/2023] [Accepted: 02/04/2024] [Indexed: 02/26/2024]
Abstract
Medical students' efforts to learn person-centered thinking and behavior can fall short due to the dissonance between person-centered clinical ideals and the prevailing epistemological stereotypes of medicine, where physicians' life events, relations, and emotions seem irrelevant to their professional competence. This paper explores how reflecting on personal life experiences and considering their relevance for one's future professional practice can inform first-year medical students' initial explorations of professional identities. In this narrative inquiry, we undertook a dialogical narrative analysis of 68 essays in which first-year medical students reflected on how personal experiences from before medical school may influence them as future doctors. Students wrote the texts at the end of a 6-month course involving 20 patient encounters, introduction to person-centered theory, peer group discussions, and reflective writing. The analysis targeted medical students' processes of interweaving and delineating personal and professional identities. The analysis yielded four categories: (1) medical students told their stories of illness, suffering, and relational struggles in an interplay with context that provided them with new perspectives on their own experiences; students then formed identities with a person-centered orientation to medical work by (2) recognizing and identifying with patients' vulnerability, (3) experiencing the healing function of sharing stories, and (4) transforming personal experiences into professional strength. Innovative approaches to medical education that encourage and support medical students to revisit, reflect on, and reinterpret their emotionally charged life experiences have the potential to shape professional identities in ways that support person-centered orientations to medical work.
Affiliation(s)
- Eivind Alexander Valestrand, Center for Medical Education, Faculty of Medicine, University of Bergen, Bergen, Norway; Department of Global Public Health and Primary Care, Faculty of Medicine, University of Bergen, Bergen, Norway
- Monika Kvernenes, Center for Medical Education, Faculty of Medicine, University of Bergen, Bergen, Norway
- Elizabeth Anne Kinsella, Institute of Health Sciences Education, Faculty of Medicine and Health Sciences, McGill University, Montreal, Canada
- Steinar Hunskaar, Department of Global Public Health and Primary Care, Faculty of Medicine, University of Bergen, Bergen, Norway
- Edvin Schei, Department of Global Public Health and Primary Care, Faculty of Medicine, University of Bergen, Bergen, Norway

19
Richardson D, Landreville JM, Trier J, Cheung WJ, Bhanji F, Hall AK, Frank JR, Oswald A. Coaching in Competence by Design: A New Model of Coaching in the Moment and Coaching Over Time to Support Large Scale Implementation. PERSPECTIVES ON MEDICAL EDUCATION 2024; 13:33-43. [PMID: 38343553 PMCID: PMC10854464 DOI: 10.5334/pme.959] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 03/08/2023] [Accepted: 11/23/2023] [Indexed: 02/15/2024]
Abstract
Coaching is an increasingly popular means to provide individualized, learner-centered, developmental guidance to trainees in competency-based medical education (CBME) curricula. Aligned with CBME's core components, coaching can assist in leveraging the full potential of this educational approach. With its focus on growth and improvement, coaching helps trainees develop clinical acumen and self-regulated learning skills. Developing a shared mental model for coaching in the medical education context is crucial to facilitate integration and subsequent evaluation of success. This paper describes the Royal College of Physicians and Surgeons of Canada's coaching model, one that is theory-based, evidence-informed, principle-driven, and iteratively developed by a multidisciplinary team. The coaching model was specifically designed to be fit for purpose for the postgraduate medical education (PGME) context and implemented as part of Competence by Design (CBD), a new competency-based PGME program. This coaching model differentiates two coaching roles, which reflect different contexts in which postgraduate trainees learn and develop skills. Both roles are supported by the RX-OCR process: developing Relationship/Rapport, setting eXpectations, Observing, a Coaching conversation, and Recording/Reflecting. The CBD Coaching Model and its associated RX-OCR faculty development tool support the implementation of coaching in CBME. Coaching in the moment and coaching over time offer important mechanisms by which CBD brings value to trainees. For sustained change to occur and for learners and coaches to experience the model's intended benefits, ongoing professional development efforts are needed. Early post-implementation reflections and lessons learned are provided.
Affiliation(s)
- Denyse Richardson
- Department of Physical Medicine and Rehabilitation, Queen’s University, Kingston, ON, Canada
- Royal College of Physicians and Surgeons of Canada, Ottawa, ON, Canada
- Jessica Trier
- Department of Physical Medicine and Rehabilitation, Queen’s University, Kingston, ON, Canada
- Warren J. Cheung
- Royal College of Physicians and Surgeons of Canada, Ottawa, ON, Canada
- Department of Emergency Medicine, University of Ottawa, Ottawa, ON, Canada
- Farhan Bhanji
- Royal College of Physicians and Surgeons of Canada, Ottawa, ON, Canada
- Education, Faculty of Medicine and Health Sciences, McGill University, Montreal, QC, Canada
- Andrew K. Hall
- Royal College of Physicians and Surgeons of Canada, Ottawa, ON, Canada
- Department of Emergency Medicine, University of Ottawa, Ottawa, ON, Canada
- Jason R. Frank
- University of Ottawa Faculty of Medicine, Ottawa, ON, Canada
- Anna Oswald
- Royal College of Physicians and Surgeons of Canada, Ottawa, ON, Canada
- Division of Rheumatology, Department of Medicine, Faculty of Medicine and Dentistry, University of Alberta, Edmonton, AB, Canada
- Competency Based Medical Education, University of Alberta, Edmonton, AB, Canada
20
Oswald A, Dubois D, Snell L, Anderson R, Karpinski J, Hall AK, Frank JR, Cheung WJ. Implementing Competence Committees on a National Scale: Design and Lessons Learned. PERSPECTIVES ON MEDICAL EDUCATION 2024; 13:56-67. [PMID: 38343555 PMCID: PMC10854462 DOI: 10.5334/pme.961] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/08/2023] [Accepted: 07/03/2023] [Indexed: 02/15/2024]
Abstract
Competence committees (CCs) are a recent innovation to improve assessment decision-making in health professions education. CCs enable a group of trained, dedicated educators to review a portfolio of observations about a learner's progress toward competence and make systematic assessment decisions. CCs are aligned with competency based medical education (CBME) and programmatic assessment. While there is an emerging literature on CCs, little has been published on their system-wide implementation. National-scale implementation of CCs is complex, owing to the culture change that underlies this shift in assessment paradigm and the logistics and skills needed to enable it. We present the Royal College of Physicians and Surgeons of Canada's experience implementing a national CC model, the challenges the Royal College faced, and some strategies to address them. With large scale CC implementation, managing the tension between standardization and flexibility is a fundamental issue that needs to be anticipated and addressed, with careful consideration of individual program needs, resources, and engagement of invested groups. If implementation is to take place in a wide variety of contexts, an approach that uses multiple engagement and communication strategies to allow for local adaptations is needed. Large-scale implementation of CCs, like any transformative initiative, does not occur at a single point but is an evolutionary process requiring both upfront resources and ongoing support. As such, it is important to consider embedding a plan for program evaluation at the outset. We hope these shared lessons will be of value to other educators who are considering a large-scale CBME CC implementation.
Affiliation(s)
- Anna Oswald
- Division of Rheumatology, Department of Medicine, Faculty of Medicine and Dentistry, University of Alberta, Edmonton, AB, Canada
- Competency Based Medical Education, University of Alberta, Edmonton, AB, Canada
- Royal College of Physicians and Surgeons of Canada, Ottawa, ON, Canada
- 8-130 Clinical Sciences Building, 11350-83 Avenue, Edmonton, AB, Canada
- Daniel Dubois
- Royal College of Physicians and Surgeons of Canada, Ottawa, ON, Canada
- Department of Anesthesiology and Pain Medicine, University of Ottawa, Ottawa, ON, Canada
- Linda Snell
- Royal College of Physicians and Surgeons of Canada, Ottawa, ON, Canada
- Institute of Health Sciences Education and Department of Medicine, McGill University, Montreal, QC, Canada
- Robert Anderson
- Royal College of Physicians and Surgeons of Canada, Ottawa, ON, Canada
- Northern Ontario School of Medicine University, Sudbury, ON, Canada
- Jolanta Karpinski
- Royal College of Physicians and Surgeons of Canada, Ottawa, ON, Canada
- Department of Medicine, University of Ottawa, Ottawa, ON, Canada
- Andrew K. Hall
- Royal College of Physicians and Surgeons of Canada, Ottawa, ON, Canada
- Department of Emergency Medicine, University of Ottawa, Ottawa, ON, Canada
- Jason R. Frank
- Centre for Innovation in Medical Education, Faculty of Medicine, University of Ottawa, Ottawa, ON, Canada
- Warren J. Cheung
- Department of Emergency Medicine, University of Ottawa, Ottawa, ON, Canada
- Royal College of Physicians and Surgeons of Canada, 1053 Carling Avenue, Rm F660, Ottawa, ON, Canada
21
McLeod K, Woodward-Kron R, Rashid P, Archer J, Nestel D. "I'm on an island": A qualitative study of underperforming surgical trainee perspectives on remediation. Am J Surg 2024:S0002-9610(24)00035-7. [PMID: 38350749 DOI: 10.1016/j.amjsurg.2024.01.033] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/08/2023] [Revised: 12/14/2023] [Accepted: 01/28/2024] [Indexed: 02/15/2024]
Abstract
BACKGROUND There is a significant gap in the literature regarding trainees' perceptions of remediation. This study aims to explore surgical trainees' experiences and perspectives of remediation. METHODS This qualitative study used semi-structured interviews with 11 doctors who had experienced formal remediation as surgical trainees. Reflexive thematic analysis was used for data analysis. RESULTS Trainees perceived remediation as a harrowing and isolating experience with long-lasting emotional effects. There was a perceived lack of clarity in explanations of underperformance, and remediation goals were seen as subjective. Remediation was viewed as a 'performance' and a tick-box exercise with superficial plans, complicated by challenging trainee/supervisor dynamics. CONCLUSIONS These findings show that trainees need better emotional support during remediation and that remediation plans must be improved to address deficits. Integrating the perspectives and experiences of surgical trainees who have undergone remediation should help improve remediation outcomes and patient care.
Affiliation(s)
- Kathryn McLeod
- Department of Urological Surgery, Barwon Health, University Hospital, Geelong, Australia; School of Medicine, Deakin University, Geelong, Australia; Department of Surgery (Austin), University of Melbourne, Heidelberg, Australia
- Robyn Woodward-Kron
- Department of Medical Education, The University of Melbourne, Melbourne, Australia
- Prem Rashid
- Department of Urology, Port Macquarie Base Hospital, Rural Clinical School, The University of New South Wales, Port Macquarie, Australia
- Julian Archer
- School of Medicine and Dentistry, Griffith University, Gold Coast, Australia
- Debra Nestel
- Department of Surgery (Austin), University of Melbourne, Heidelberg, Australia
22
Goldenberg MG. Surgical Artificial Intelligence in Urology: Educational Applications. Urol Clin North Am 2024; 51:105-115. [PMID: 37945096 DOI: 10.1016/j.ucl.2023.06.003] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/12/2023]
Abstract
Surgical education has seen immense change recently. Increased demand for iterative evaluation of trainees from medical school to independent practice has led to the generation of an overwhelming amount of data related to an individual's competency. Artificial intelligence has been proposed as a solution to automate and standardize the ability of stakeholders to assess the technical and nontechnical abilities of a surgical trainee. In both the simulation and clinical environments, evidence supports the use of machine learning algorithms to both evaluate trainee skill and provide real-time and automated feedback, enabling a shortened learning curve for many key procedural skills and ensuring patient safety.
Affiliation(s)
- Mitchell G Goldenberg
- Catherine & Joseph Aresty Department of Urology, USC Institute of Urology, University of Southern California, 1441 Eastlake Avenue, Suite 7416, Los Angeles, CA 90033, USA
23
Klein R, Snyder ED, Koch J, Volerman A, Alba-Nguyen S, Julian KA, Thompson V, Ufere NN, Burnett-Bowie SAM, Kumar A, White BAA, Park YS, Palamara K. Analysis of narrative assessments of internal medicine resident performance: are there differences associated with gender or race and ethnicity? BMC MEDICAL EDUCATION 2024; 24:72. [PMID: 38233807 PMCID: PMC10795394 DOI: 10.1186/s12909-023-04970-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/05/2023] [Accepted: 12/13/2023] [Indexed: 01/19/2024]
Abstract
BACKGROUND Equitable assessment is critical in competency-based medical education. This study explores differences in key characteristics of qualitative assessments (i.e., narrative comments or assessment feedback) of internal medicine postgraduate resident performance associated with gender and race and ethnicity. METHODS Analysis of narrative comments included in faculty assessments of resident performance from six internal medicine residency programs was conducted. Content analysis was used to assess two key characteristics of comments - valence (overall positive or negative orientation) and specificity (detailed nature and actionability of comment) - via a blinded, multi-analyst approach. Differences in comment valence and specificity with gender and race and ethnicity were assessed using multilevel regression, controlling for multiple covariates including quantitative competency ratings. RESULTS Data included 3,383 evaluations with narrative comments by 597 faculty of 698 residents, including 45% of comments about women residents and 13.2% about residents who identified with race and ethnicities underrepresented in medicine. Most comments were moderately specific and positive. Comments about women residents were more positive (estimate 0.06, p = 0.045) but less specific (estimate -0.07, p = 0.002) compared to men. Women residents were more likely to receive non-specific, weakly specific, or no comments (adjusted OR 1.29, p = 0.012) and less likely to receive highly specific comments (adjusted OR 0.71, p = 0.003) or comments with specific examples of things done well or areas for growth (adjusted OR 0.74, p = 0.003) than men. Gendered differences in comment specificity and valence were most notable early in training. Comment specificity and valence did not differ with resident race and ethnicity (specificity: estimate 0.03, p = 0.32; valence: estimate -0.05, p = 0.26) or faculty gender (specificity: estimate 0.06, p = 0.15; valence: estimate 0.02, p = 0.54).
CONCLUSION There were significant differences in the specificity and valence of qualitative assessments associated with resident gender, with women receiving more positive but less specific and actionable comments. This suggests a lost opportunity for well-rounded assessment feedback, to the disadvantage of women.
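The adjusted odds ratios reported above come from the authors' multilevel regression; as a purely illustrative sketch (the counts below are invented and are not the study's data), an unadjusted odds ratio and its Woolf (log-based) 95% confidence interval can be computed from a simple 2×2 table:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Unadjusted odds ratio with a Woolf 95% confidence interval.

    2x2 table: a = group 1 with outcome, b = group 1 without,
               c = group 2 with outcome, d = group 2 without.
    """
    or_ = (a / b) / (c / d)                      # odds in group 1 / odds in group 2
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)        # standard error of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Invented example: 20/100 residents in one group vs 10/100 in the other
# received a given comment type.
or_, lo, hi = odds_ratio_ci(20, 80, 10, 90)
print(f"OR = {or_:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```

An adjusted OR from multilevel regression additionally conditions on covariates and clustering (e.g., comments nested within faculty), which this unadjusted sketch deliberately omits.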
Affiliation(s)
- Robin Klein
- Department of Medicine, Division of General Internal Medicine, Emory University School of Medicine, 80 Jesse Hill Jr Dr SE, Atlanta, GA, 30303, USA
- Erin D Snyder
- Department of Medicine, Division of General Internal Medicine, University of Alabama Birmingham School of Medicine, Birmingham, AL, USA
- Jennifer Koch
- Department of Medicine, University of Louisville, Louisville, KY, USA
- Anna Volerman
- Departments of Medicine and Pediatrics, University of Chicago, Chicago, IL, USA
- Sarah Alba-Nguyen
- Department of Medicine, Division of Hospital Medicine, University of California, San Francisco, CA, USA
- Katherine A Julian
- Department of Medicine, Division of General Internal Medicine, University of California, San Francisco, CA, USA
- Vanessa Thompson
- Department of Medicine, Division of General Internal Medicine, University of California, San Francisco, CA, USA
- Nneka N Ufere
- Department of Medicine, Division of Gastroenterology, Massachusetts General Hospital, Boston, MA, USA
- Anshul Kumar
- Massachusetts General Hospital Institute of Health Professions, Boston, MA, USA
- Bobbie Ann A White
- Massachusetts General Hospital Institute of Health Professions, Boston, MA, USA
- Yoon Soo Park
- Department of Medical Education, University of Illinois Chicago, Chicago, IL, USA
- Kerri Palamara
- Department of Medicine, Massachusetts General Hospital, Boston, MA, USA
24
Braund H, Patel V, Dalgarno N, Mann S. Exploring residents' perceptions of competency-based medical education across Canada: A national survey study. MEDEDPUBLISH 2024; 14:2. [PMID: 38487752 PMCID: PMC10933567 DOI: 10.12688/mep.19247.1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 01/20/2023] [Indexed: 03/17/2024] Open
Abstract
Background: As competency-based medical education (CBME) is implemented across Canada, little is known about residents' perceptions of this model. This study examined how Canadian residents understand CBME and their lived experiences with implementation. Methods: We administered a survey in 2018 with Likert-type and open-ended questions to 375 residents across Canada, of whom 270 were from traditional programs ("pre-CBME") and 105 were in a CBME program. We used the Mann-Whitney test to examine differences across samples, and analyzed qualitative data thematically. Results: Three themes were identified across both groups: program outcome concerns, changes, and emotional responses. In relation to program concerns, both groups were concerned about the administrative burden, challenges with the assessment process, and feedback quality. Only pre-CBME residents were concerned about faculty engagement and buy-in. In terms of changes, both groups discussed a more formalized assessment process with mixed reactions. Residents in the pre-CBME sample reported greater concerns about faculty time constraints, assessment completion, and quality of learning experiences, whilst those in CBME programs reported being more proactive in their learning and engaging in greater self-reflection. Residents' narrative responses conveyed strong emotions, including greater stress and frustration in the CBME environment. Conclusion: Findings demonstrate that residents have mixed feelings and experiences regarding CBME. Their positive experiences align with the aim of developing more self-directed learners. However, the concerns suggest the need to address specific shortcomings to increase buy-in, while the emotional responses associated with CBME may require a cultural shift within residency programs to guard against burnout.
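The between-group comparison described above uses the Mann-Whitney test, a rank-based alternative to the t test suited to Likert-type data. As an illustrative sketch only (the responses below are invented; the study would have used standard statistical software), the U statistic can be computed by assigning midranks to tied values:

```python
def mann_whitney_u(x, y):
    """Mann-Whitney U statistic for two samples, with midranks for ties."""
    allv = x + y
    order = sorted(range(len(allv)), key=lambda i: allv[i])
    ranks = [0.0] * len(allv)
    i = 0
    while i < len(order):
        j = i
        # extend j over a run of tied values
        while j + 1 < len(order) and allv[order[j + 1]] == allv[order[i]]:
            j += 1
        midrank = (i + j) / 2 + 1  # 1-based midrank shared by the tie group
        for k in range(i, j + 1):
            ranks[order[k]] = midrank
        i = j + 1
    r1 = sum(ranks[:len(x)])               # rank sum of the first sample
    u1 = r1 - len(x) * (len(x) + 1) / 2
    return min(u1, len(x) * len(y) - u1)   # report the smaller U

# Invented 5-point Likert responses from two resident cohorts.
pre_cbme = [4, 3, 5, 2, 4, 3]
cbme = [2, 3, 2, 4, 1, 2]
print("U =", mann_whitney_u(pre_cbme, cbme))
```

In practice the p-value would then come from the U distribution (or a normal approximation with a tie correction), which libraries such as SciPy handle directly.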
Affiliation(s)
- Heather Braund
- Professional Development & Educational Scholarship, Queen's University, Kingston, Ontario, K7L 1B9, Canada
- Vivesh Patel
- Faculty of Health Sciences, Queen's University, Kingston, Ontario, K7L 2Y1, Canada
- Nancy Dalgarno
- Professional Development & Educational Scholarship, Queen's University, Kingston, Ontario, K7L 1B9, Canada
- Steve Mann
- Department of Surgery, Queen's University, Kingston, Ontario, K7L 2V7, Canada
25
Klein R, Snyder ED, Koch J, Volerman A, Alba-Nguyen S, Julian KA, Thompson V, Ufere NN, Burnett-Bowie SAM, Kumar A, White BAA, Park YS, Palamara K. Exploring gender and thematic differences in qualitative assessments of internal medicine resident performance. BMC MEDICAL EDUCATION 2023; 23:932. [PMID: 38066551 PMCID: PMC10709833 DOI: 10.1186/s12909-023-04917-7] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/27/2023] [Accepted: 11/30/2023] [Indexed: 12/18/2023]
Abstract
INTRODUCTION Evidence suggests gender disparities in medical education assessment, including differences in ratings of competency and narrative comments provided in resident performance assessments. This study explores how gender manifests within the content of qualitative assessments (i.e., narrative comments or performance feedback) of resident performance. METHODS Qualitative content analysis was used to explore gender-based differences in narrative comments included in faculty assessments of resident performance during inpatient medicine rotations at six Internal Medicine residency programs, 2016-2017. A blinded, multi-analyst approach was employed to identify themes across comments. Patterns in themes with resident gender and post-graduate year (PGY) were explored, focusing on PGY2 and PGY3 when residents are serving in the team leader role. RESULTS Data included 3,383 evaluations with narrative comments of 385 men (55.2%) and 313 women residents (44.8%). There were thematic differences in narrative comments received by men and women residents, and how these themes manifested within comments changed with training time. Compared to men, comments about women had a persistent relationship-orientation and emphasized confidence across training, including as interns and in PGY2 and PGY3 when serving as team leader. The relationship-orientation was characterized not only by the residents' communal attributes but also by their interpersonal and communication skills, including efforts supporting others and establishing the tone for the team. Comments about women residents often highlighted confidence, including recommendations around behaviors that convey confidence in decision-making and team leadership. DISCUSSION There were gender-based thematic differences in qualitative assessments. Comments about women resident team leaders highlight relationship building skills and urge confidence and actions that convey confidence as team leader.
Persistent attention to communal skills suggests gendered expectations for women resident team leaders and a lost opportunity for well-rounded feedback to the disadvantage of women residents. These findings may inform interventions to promote equitable assessment, such as providing feedback across the competencies.
Affiliation(s)
- Robin Klein
- Emory University School of Medicine, Atlanta, GA, USA
- Erin D Snyder
- Department of Medicine, Division of General Internal Medicine, University of Alabama Birmingham School of Medicine, Birmingham, AL, USA
- Jennifer Koch
- Department of Medicine, University of Louisville, Louisville, KY, USA
- Anna Volerman
- Departments of Medicine and Pediatrics, University of Chicago, Chicago, IL, USA
- Sarah Alba-Nguyen
- Department of Medicine, Division of Hospital Medicine, University of California, San Francisco, San Francisco, CA, USA
- Katherine A Julian
- Department of Medicine, Division of General Internal Medicine, University of California, San Francisco, San Francisco, CA, USA
- Vanessa Thompson
- Department of Medicine, Division of General Internal Medicine, University of California, San Francisco, San Francisco, CA, USA
- Nneka N Ufere
- Department of Medicine, Division of Gastroenterology, Massachusetts General Hospital, Boston, MA, USA
- Anshul Kumar
- Massachusetts General Hospital Institute of Health Professions, Boston, MA, USA
- Yoon Soo Park
- Department of Medical Education, University of Illinois, Chicago, IL, USA
- Kerri Palamara
- Department of Medicine, Massachusetts General Hospital, Boston, MA, USA
26
Appelhaus S, Werner S, Grosse P, Kämmer JE. Feedback, fairness, and validity: effects of disclosing and reusing multiple-choice questions in medical schools. MEDICAL EDUCATION ONLINE 2023; 28:2143298. [PMID: 36350605 PMCID: PMC9662023 DOI: 10.1080/10872981.2022.2143298] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 08/26/2022] [Revised: 10/30/2022] [Accepted: 10/31/2022] [Indexed: 06/16/2023]
Abstract
BACKGROUND Disclosure of items used in multiple-choice-question (MCQ) exams may decrease student anxiety and improve transparency, feedback, and test-enhanced learning but potentially compromises the reliability and fairness of exams if items are eventually reused. Evidence regarding whether disclosure and reuse of test items change item psychometrics is scarce and inconclusive. METHODS We retrospectively analysed difficulty and discrimination coefficients of 10,148 MCQ items used between fall 2017 and fall 2019 in a large European medical school in which items were disclosed from fall 2017 onwards. We categorised items as 'new'; 'reused, not disclosed'; or 'reused, disclosed'. For reused items, we calculated the difference from their first ever use, that is, when they were new. Differences between categories and terms were analysed with one-way analyses of variance and independent-samples t tests. RESULTS The proportion of reused, disclosed items grew from 0% to 48.4%; mean difficulty coefficients increased from 0.70 to 0.76; that is, items became easier, P < .001, ηp² = 0.011. On average, reused, disclosed items were significantly easier (M = 0.83) than reused, not disclosed items (M = 0.71) and entirely new items (M = 0.66), P < .001, ηp² = 0.087. Mean discrimination coefficients increased from 0.21 to 0.23; that is, items became slightly more discriminating, P = .002, ηp² = 0.002. CONCLUSIONS Disclosing test items provides the opportunity to enhance feedback and transparency in MCQ exams but potentially at the expense of item reliability. Discrimination was positively affected. Our study may help weigh advantages and disadvantages of using previously disclosed items.
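The difficulty and discrimination coefficients analysed above are standard classical test theory statistics: difficulty as the proportion of examinees answering an item correctly, and discrimination as an item-total correlation. A minimal sketch with invented 0/1-scored responses (not the study's data, and the study's exact discrimination formula may differ; here it is the corrected item-total point-biserial correlation):

```python
from statistics import mean, pstdev

def item_difficulty(responses):
    """Classical difficulty coefficient: proportion answering correctly (0/1 scored)."""
    return mean(responses)

def item_discrimination(item, totals):
    """Corrected item-total point-biserial correlation: correlate the item
    score with the rest-of-test score (total minus the item itself)."""
    rest = [t - i for i, t in zip(item, totals)]
    mi, mr = mean(item), mean(rest)
    cov = mean((i - mi) * (r - mr) for i, r in zip(item, rest))
    return cov / (pstdev(item) * pstdev(rest))

# Invented 0/1 response matrix: rows = examinees, columns = items.
matrix = [
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [1, 1, 1, 1],
    [0, 0, 0, 1],
    [1, 1, 0, 0],
]
totals = [sum(row) for row in matrix]
for j in range(4):
    item = [row[j] for row in matrix]
    print(f"item {j}: difficulty={item_difficulty(item):.2f}, "
          f"discrimination={item_discrimination(item, totals):.2f}")
```

On this reading, a difficulty coefficient rising from 0.70 to 0.76 means items got easier (a larger share answered correctly), matching the interpretation in the abstract.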
Affiliation(s)
- Stefan Appelhaus
- Institute of Medical Sociology and Rehabilitation Science, Charité—Universitätsmedizin Berlin, corporate member of Freie Universität Berlin, Humboldt-Universität zu Berlin, and Berlin Institute of Health, Berlin, Germany
- Department of Radiology and Nuclear Medicine, Universitätsmedizin Mannheim, Heidelberg University, Mannheim, Germany
- Susanne Werner
- Assessment Unit, Charité—Universitätsmedizin Berlin, corporate member of Freie Universität Berlin, Humboldt-Universität zu Berlin, and Berlin Institute of Health, Berlin, Germany
- Pascal Grosse
- Dean of Students Office and Department of Neurology, Charité—Universitätsmedizin Berlin, corporate member of Freie Universität Berlin, Humboldt-Universität zu Berlin, and Berlin Institute of Health, Berlin, Germany
- Juliane E. Kämmer
- Department of Emergency Medicine, University of Bern, Bern, Switzerland
27
Tavares W, Kinnear B, Schumacher DJ, Forte M. "Rater training" re-imagined for work-based assessment in medical education. ADVANCES IN HEALTH SCIENCES EDUCATION : THEORY AND PRACTICE 2023; 28:1697-1709. [PMID: 37140661 DOI: 10.1007/s10459-023-10237-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/13/2023] [Accepted: 04/30/2023] [Indexed: 05/05/2023]
Abstract
In this perspective, the authors critically examine "rater training" as it has been conceptualized and used in medical education. By "rater training," they mean the educational events intended to improve rater performance and contributions during assessment events. Historically, rater training programs have focused on modifying faculty behaviours to achieve psychometric ideals (e.g., reliability, inter-rater reliability, accuracy). The authors argue these ideals may now be poorly aligned with contemporary research informing work-based assessment, introducing a compatibility threat, with no clear direction on how to proceed. To address this issue, the authors provide a brief historical review of "rater training" and provide an analysis of the literature examining the effectiveness of rater training programs. They focus mainly on what has served to define effectiveness or improvements. They then draw on philosophical and conceptual shifts in assessment to demonstrate why the function, effectiveness aims, and structure of rater training requires reimagining. These include shifting competencies for assessors, viewing assessment as a complex cognitive task enacted in a social context, evolving views on biases, and reprioritizing which validity evidence should be most sought in medical education. The authors aim to advance the discussion on rater training by challenging implicit incompatibility issues and stimulating ways to overcome them. They propose that "rater training" (a moniker they suggest be reserved for strong psychometric aims) be augmented with "assessor readiness" programs that link to contemporary assessment science and enact the principle of compatibility between that science and ways of engaging with advances in real-world faculty-learner contexts.
Affiliation(s)
- Walter Tavares
- Department of Health and Society, Wilson Centre, Temerty Faculty of Medicine, University of Toronto, Toronto, ON, Canada
- Benjamin Kinnear
- Department of Pediatrics, Cincinnati Children's Hospital Medical Center, University of Cincinnati College of Medicine, Cincinnati, OH, USA
- Daniel J Schumacher
- Department of Pediatrics, Cincinnati Children's Hospital Medical Center, University of Cincinnati College of Medicine, Cincinnati, OH, USA
- Milena Forte
- Department of Family and Community Medicine, Temerty Faculty of Medicine, Mount Sinai Hospital, University of Toronto, Toronto, ON, Canada
28
Somerville SG, Harrison NM, Lewis SA. Twelve tips for the pre-brief to promote psychological safety in simulation-based education. MEDICAL TEACHER 2023; 45:1349-1356. [PMID: 37210674 DOI: 10.1080/0142159x.2023.2214305] [Citation(s) in RCA: 5] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/22/2023]
Abstract
It is recognised that simulation-based education can be stressful, and this can impact negatively on learning. A fundamental aspect of facilitating simulation is creating a safe educational environment. Edmondson's seminal work on creating psychological safety among interpersonal teams has been embraced by the healthcare simulation community. Psychological safety is an underpinning philosophy for creating simulation experiences in which learners can develop within a stimulating and challenging yet supportive social atmosphere. Through careful design and thoughtful delivery, the introductory phase of simulation, the pre-briefing, can effectively prepare learners for simulation, reduce learner anxiety, and promote psychological safety, to enhance learning experiences. These twelve tips provide guidance for conducting a pre-brief and promoting a psychologically safe environment for simulation-based education.
Affiliation(s)
- Neil Malcolm Harrison
- Clinical Skills Centre, Dundee Institute for Healthcare Simulation, School of Medicine, University of Dundee, Dundee, Scotland
- Steven Anthony Lewis
- Clinical Skills Centre, Dundee Institute for Healthcare Simulation, School of Medicine, University of Dundee, Dundee, Scotland
29
Lecordier M, Tissot C, Bonnardot L, Hitier M. Surgical training strategies for physicians practicing in an isolated environment: an example from Antarctica. International survey of 13 countries with active winter stations. Int J Circumpolar Health 2023; 82:2236761. [PMID: 37499127 PMCID: PMC10375923 DOI: 10.1080/22423982.2023.2236761] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/01/2023] [Revised: 07/10/2023] [Accepted: 07/11/2023] [Indexed: 07/29/2023] Open
Abstract
For 60 years, human presence in Antarctica has required particularly demanding medical skills. Nevertheless, the preparation of physicians working in this extreme environment remains unknown and deserves clarification. This study aimed to summarise data on the surgical training given to physicians by different countries. In April 2020, we conducted a questionnaire-based study of 14 countries wintering in Antarctica. Responses were descriptively analysed. Regarding the profiles of physicians recruited by the wintering countries, 30% to 55% were non-surgeon doctors compared with 45% to 70% for surgeons, depending on the year. Of the 13 countries answering the questionnaire, nine organised practical surgical training and six used theoretical material. All countries reported practical training for dental surgery, while only five countries provided training in four other surgical specialities (orthopaedic, digestive, thoracic, and ear, nose, and throat). All 13 countries reported using a telemedicine system. These results revealed heterogeneous training strategies among the recruited physicians, reflecting the difficulties of practice on this extreme continent. Future work may assess the effectiveness of each strategy. A better understanding of surgical epidemiology and a detailed referencing of the equipment available at the bases would help better define the contours of surgical care in Antarctica.
Affiliation(s)
- Cécile Tissot
- Faculty of Medicine and Health Sciences, UBO, Brest, France
- Laurent Bonnardot
- Department of Medical Ethics and Legal Medicine, Paris Descartes University, Paris, EA, France
- Martin Hitier
- Department of Otolaryngology Head & Neck Surgery, Normandie Univ, Caen, France
- Department of Anatomy, Inserm, Caen, France
30
Scarff CE, Bearman M, Chiavaroli N, Trumble S. Assessor discomfort and failure to fail in clinical performance assessments. BMC MEDICAL EDUCATION 2023; 23:901. [PMID: 38012637 PMCID: PMC10680261 DOI: 10.1186/s12909-023-04688-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/31/2023] [Accepted: 09/14/2023] [Indexed: 11/29/2023]
Abstract
BACKGROUND Assessment of trainee performance in the workplace is critical to ensuring high standards of clinical care. However, some supervisors find the task to be challenging, and may feel unable to deliver their true judgement on a trainee's performance. They may 'keep MUM' (that is, keep mum about undesirable messages) and fail to fail an underperforming trainee. In this study, we explore the effect of discomfort on assessors. METHODS Using a survey method, supervisors of trainees in the Australasian College of Dermatologists were asked to self-report experiences of discomfort in various aspects of trainee workplace assessment and for their engagement in MUM behaviours including failure to fail. RESULTS Sixty-one responses were received from 135 eligible assessors. 12.5% of assessors self-reported they had failed to fail a trainee and 18% admitted they had grade inflated a trainee's score on a clinical performance assessment in the previous 12-month period. Assessors who reported higher levels of discomfort in the clinical performance assessment context were significantly more likely to report previously failing to fail a trainee. The study did not reveal significant associations with assessor demographics and self-reports of discomfort or MUM behaviours. CONCLUSIONS This study reveals the impact of assessor discomfort on the accuracy of assessment information and feedback to trainees, including as a contributing factor to the failure to fail phenomenon. Addressing assessor experience of discomfort offers one opportunity to impact on the complex and multifactorial issue that failure to fail represents.
Affiliation(s)
- Catherine E Scarff
- Department of Medical Education, Melbourne Medical School, University of Melbourne, Room N722, Level 7 North Medical Building Grattan Street, Melbourne, VIC, Australia.
- Margaret Bearman
- Centre for Research in Assessment and Digital Learning (CRADLE), Deakin University, Melbourne, VIC, Australia
- Neville Chiavaroli
- Department of Medical Education, Melbourne Medical School, University of Melbourne, Room N722, Level 7 North Medical Building Grattan Street, Melbourne, VIC, Australia
- Australian Council for Educational Research, Camberwell, Australia
- Stephen Trumble
- Department of Medical Education, Melbourne Medical School, University of Melbourne, Room N722, Level 7 North Medical Building Grattan Street, Melbourne, VIC, Australia

31
McGuire N, Acai A, Sonnadara RR. The McMaster Narrative Comment Rating Tool: Development and Initial Validity Evidence. TEACHING AND LEARNING IN MEDICINE 2023:1-13. [PMID: 37964518] [DOI: 10.1080/10401334.2023.2276799] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 09/03/2022] [Accepted: 10/05/2023] [Indexed: 11/16/2023]
Abstract
CONSTRUCT The McMaster Narrative Comment Rating Tool aims to capture critical features reflecting the quality of written narrative comments provided in the medical education context: valence/tone of language, degree of correction versus reinforcement, specificity, actionability, and overall usefulness. BACKGROUND Despite their role in competency-based medical education, not all narrative comments contribute meaningfully to the development of learners' competence. To develop solutions to mitigate this problem, robust measures of narrative comment quality are needed. While some tools exist, most were created in specialty-specific contexts, have focused on one or two features of feedback, or have focused on faculty perceptions of feedback, excluding learners from the validation process. In this study, we aimed to develop a detailed, broadly applicable narrative comment quality assessment tool that drew upon features of high-quality assessment and feedback and could be used by a variety of raters to inform future research, including applications related to automated analysis of narrative comment quality. APPROACH In Phase 1, we used the literature to identify five critical features of feedback. We then developed rating scales for each of the features and collected 670 competency-based assessments completed by first-year surgical residents in the first six weeks of training. Residents were from nine different programs at a Canadian institution. In Phase 2, we randomly selected 50 assessments with written feedback from the dataset. Two education researchers used the scale to independently score the written comments and refine the rating tool. In Phase 3, 10 raters, including two medical education researchers, two medical students, two residents, two clinical faculty members, and two laypersons from the community, used the tool to independently and blindly rate written comments from another 50 randomly selected assessments from the dataset.
We compared scores between and across rater pairs to assess reliability. FINDINGS Single and average measures intraclass correlation (ICC) scores ranged from moderate to excellent (ICCs = .51-.83 and .91-.98, respectively) across all categories and rater pairs. All tool domains were significantly correlated (all p < .05), apart from valence, which was only significantly correlated with degree of correction versus reinforcement. CONCLUSION Our findings suggest that the McMaster Narrative Comment Rating Tool can be used reliably by multiple raters, across a variety of rater types, and in different surgical contexts. As such, it has the potential to support faculty development initiatives on assessment and feedback, and may be used as a tool to conduct research on different assessment strategies, including automated analysis of narrative comments.
Affiliation(s)
- Natalie McGuire
- Office of Professional Development and Educational Scholarship, Queen's University, Kingston, Ontario, Canada
- Anita Acai
- Department of Psychiatry and Behavioural Neurosciences and McMaster Education Research, Innovation and Theory (MERIT) Program, McMaster University, and St. Joseph's Education Research Centre (SERC), St. Joseph's Healthcare Hamilton, Hamilton, Canada
- Ranil R Sonnadara
- Office of Education Science, Department of Surgery, McMaster University, Hamilton, Ontario, Canada

32
Sukhera J, Ölveczky D, Colbert-Getz J, Fernandez A, Ho MJ, Ryan MS, Young ME. Digging Deeper, Zooming Out: Reimagining Legacies in Medical Education. ACADEMIC MEDICINE : JOURNAL OF THE ASSOCIATION OF AMERICAN MEDICAL COLLEGES 2023; 98:S6-S9. [PMID: 37983391] [DOI: 10.1097/acm.0000000000005372] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 08/02/2023]
Abstract
Although the wide-scale disruption precipitated by the COVID-19 pandemic has somewhat subsided, there are many questions about the implications of such disruptions for the road ahead. This year's Research in Medical Education (RIME) supplement may provide a window of insight. Now, more than ever, researchers are poised to question long-held assumptions while reimagining long-established legacies. Themes regarding the boundaries of professional identity, approaches to difficult conversations, challenges of power and hierarchy, intricacies of selection processes, and complexities of learning climates appear to be the most salient and critical to understand. In this commentary, the authors use the relationship between legacies and assumptions as a framework to gain a deeper understanding about the past, present, and future of RIME.
Affiliation(s)
- Javeed Sukhera
- J. Sukhera is chair/chief of psychiatry, Hartford Hospital and the Institute of Living, and associate clinical professor of psychiatry, Yale School of Medicine, Hartford, Connecticut; ORCID: https://orcid.org/0000-0001-8146-4947
- Daniele Ölveczky
- D. Ölveczky is assistant professor of medicine and codirector, Health Equity and Anti-Racism Theme, Harvard Medical School, and physician director, Office of Diversity, Equity and Inclusion, Beth Israel Deaconess Medical Center, Boston, Massachusetts; ORCID: https://orcid.org/0000-0001-8972-4483
- Jorie Colbert-Getz
- J. Colbert-Getz is assistant dean of education quality improvement and associate professor, Department of Internal Medicine, Spencer Fox Eccles School of Medicine at the University of Utah, Salt Lake City, Utah; ORCID: https://orcid.org/0000-0001-7419-7588
- Andres Fernandez
- A. Fernandez is assistant professor, Department of Neurology, Thomas Jefferson University, Philadelphia, Pennsylvania, and a PhD student, School of Health Professions Education, Maastricht University, Maastricht, the Netherlands; ORCID: https://orcid.org/0000-0001-5389-6232
- Ming-Jung Ho
- M.-J. Ho is professor of family medicine, associate director, Center for Innovation and Leadership in Education, and director of education research, MedStar Health, Georgetown University, Washington, DC; ORCID: https://orcid.org/0000-0003-1415-8282
- Michael S Ryan
- M.S. Ryan is associate dean for assessment, evaluation, research and scholarly innovation, and professor, Department of Pediatrics, University of Virginia School of Medicine, Charlottesville, Virginia, and a PhD student, School of Health Professions Education, Maastricht University, Maastricht, the Netherlands; ORCID: https://orcid.org/0000-0003-3266-9289
- Meredith E Young
- M.E. Young is associate professor, Institute of Health Sciences Education, McGill University, Montreal, Quebec, Canada; ORCID: https://orcid.org/0000-0002-2036-2119

33
Norcini J. On Purpose: The Case for Alignment in Assessment. ACADEMIC MEDICINE : JOURNAL OF THE ASSOCIATION OF AMERICAN MEDICAL COLLEGES 2023; 98:1240-1242. [PMID: 37556812] [DOI: 10.1097/acm.0000000000005430] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 08/11/2023]
Abstract
In this issue, Ryan and colleagues underscore the need for criterion-based assessments in the context of competency-based curricula in undergraduate medical education (UME). They also point out that the same scores are often interpreted from a norm-referenced perspective to support the admissions process for residency training. This problem is not unique to UME because in graduate medical education (GME), the same assessments are often used for both decision making and providing feedback. Unfortunately, an assessment with 2 purposes is neither optimal nor efficient for either purpose and may be accompanied by significant side effects. One approach to addressing these challenges is to develop a system of assessment that addresses both purposes but where each component is focused on a single purpose. This leads to alignment and transparency from purpose to test content and from test content to score interpretation and/or feedback. It ensures that the test material is optimized for the task, that individual assessments are constructed to enhance the validity of their scores, and that undesirable side effects are limited.
Affiliation(s)
- John Norcini
- J. Norcini is research professor, Department of Psychiatry, SUNY Upstate Medical University, Syracuse, New York; ORCID: https://orcid.org/0000-0002-8464-4115

34
Liao KC, Ajjawi R, Peng CH, Jenq CC, Monrouxe LV. Striving to thrive or striving to survive: Professional identity constructions of medical trainees in clinical assessment activities. MEDICAL EDUCATION 2023; 57:1102-1116. [PMID: 37394612] [DOI: 10.1111/medu.15152] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 02/15/2023] [Revised: 05/15/2023] [Accepted: 05/30/2023] [Indexed: 07/04/2023]
Abstract
CONTEXT Assessment plays a key role in competence development and the shaping of future professionals. Despite its presumed positive impacts on learning, unintended consequences of assessment have drawn increasing attention in the literature. Considering professional identities and how these can be dynamically constructed through social interactions, as in assessment contexts, our study sought to understand how assessment influences the construction of professional identities in medical trainees. METHODS Within social constructionism, we adopted a discursive, narrative approach to investigate the different positions trainees narrate for themselves and their assessors in clinical assessment contexts and the impact of these positions on their constructed identities. We purposively recruited 28 medical trainees (23 students and five postgraduate trainees), who took part in entry, follow-up and exit interviews and submitted longitudinal audio/written diaries across nine months of their training programs. Thematic framework and positioning analyses (focusing on how characters are linguistically positioned in narratives) were applied using an interdisciplinary teamwork approach. RESULTS We identified two key narrative plotlines, striving to thrive and striving to survive, across trainees' assessment narratives from 60 interviews and 133 diaries. Elements of growth, development, and improvement were identified as trainees narrated striving to thrive in assessment. Narratives of neglect, oppression and perfunctory engagement emerged as trainees narrated striving to survive assessment. Nine main character tropes adopted by trainees, along with six key assessor character tropes, were identified. Bringing these together, we present our analysis of two exemplary narratives with elaboration of their wider social implications.
CONCLUSION Adopting a discursive approach enabled us to better understand not only what identities are constructed by trainees in assessment contexts but also how they are constructed in relation to broader medical education discourses. The findings are informative for educators to reflect on, rectify and reconstruct assessment practices for better facilitating trainee identity construction.
Affiliation(s)
- Kuo-Chen Liao
- Division of Geriatrics and General Internal Medicine, Department of Internal Medicine, Chang Gung Memorial Hospital (CGMH), Linkou, Taiwan (ROC)
- Chang Gung Medical Education Research Centre, CGMH, Linkou, Taiwan (ROC)
- School of Medicine, College of Medicine, Chang Gung University, Taoyuan City, Taiwan (ROC)
- Rola Ajjawi
- Centre for Research in Assessment and Digital Learning, Deakin University, Melbourne, Victoria, Australia
- Chang-Hsuan Peng
- Chang Gung Medical Education Research Centre, CGMH, Linkou, Taiwan (ROC)
- School of Medicine, College of Medicine, Chang Gung University, Taoyuan City, Taiwan (ROC)
- Chang-Chyi Jenq
- Chang Gung Medical Education Research Centre, CGMH, Linkou, Taiwan (ROC)
- Department of Nephrology, CGMH, Linkou, Taiwan (ROC)
- Medical Humanities Center, CGMH, Linkou, Taiwan (ROC)
- Department of Medical Humanities and Social Sciences, School of Medicine, College of Medicine, Chang Gung University, Taoyuan City, Taiwan (ROC)
- Lynn V Monrouxe
- Faculty of Medicine and Health, The University of Sydney, Sydney, New South Wales, Australia

35
Szulewski A, Braund H, Dagnone DJ, McEwen L, Dalgarno N, Schultz KW, Hall AK. The Assessment Burden in Competency-Based Medical Education: How Programs Are Adapting. ACADEMIC MEDICINE : JOURNAL OF THE ASSOCIATION OF AMERICAN MEDICAL COLLEGES 2023; 98:1261-1267. [PMID: 37343164] [DOI: 10.1097/acm.0000000000005305] [Citation(s) in RCA: 7] [Impact Index Per Article: 7.0] [Indexed: 06/23/2023]
Abstract
Residents and faculty have described a burden of assessment related to the implementation of competency-based medical education (CBME), which may undermine its benefits. Although this concerning signal has been identified, little has been done to identify adaptations to address this problem. Grounded in an analysis of an early Canadian pan-institutional CBME adopter's experience, this article describes postgraduate programs' adaptations related to the challenges of assessment in CBME. From June 2019 to September 2022, 8 residency programs underwent a standardized Rapid Evaluation guided by the Core Components Framework (CCF). Sixty interviews and 18 focus groups were held with invested partners. Transcripts were analyzed abductively using CCF, and ideal implementation was compared with enacted implementation. These findings were then shared back with program leaders, adaptations were subsequently developed, and technical reports were generated for each program. Researchers reviewed the technical reports to identify themes related to the burden of assessment, with a subsequent focus on identifying adaptations across programs. Three themes were identified: (1) disparate mental models of assessment processes in CBME, (2) challenges in workplace-based assessment processes, and (3) challenges in performance review and decision making. Theme 1 included entrustment interpretation and lack of shared mindset for performance standards. Adaptations included revising entrustment scales, faculty development, and formalizing resident membership. Theme 2 involved direct observation, timeliness of assessment completion, and feedback quality. Adaptations included alternative assessment strategies beyond entrustable professional activity forms and proactive assessment planning. Theme 3 related to resident data monitoring and competence committee decision making. Adaptations included adding resident representatives to the competence committee and assessment platform enhancements.
These adaptations represent responses to the concerning signal of significant burden of assessment within CBME being experienced broadly. The authors hope other programs may learn from their institution's experience and navigate the CBME-related assessment burden their invested partners may be facing.
Affiliation(s)
- Adam Szulewski
- A. Szulewski is associate professor, Departments of Emergency Medicine and Psychology, and educational scholarship lead, Postgraduate Medical Education, Queen's University, Kingston, Ontario, Canada; ORCID: https://orcid.org/0000-0002-3076-6221
- Heather Braund
- H. Braund is associate director of scholarship and simulation education, Office of Professional Development and Educational Scholarship, and assistant (adjunct) professor, Department of Biomedical and Molecular Sciences and School of Medicine, Queen's University, Kingston, Ontario, Canada; ORCID: https://orcid.org/0000-0002-9749-7193
- Damon J Dagnone
- D.J. Dagnone is associate professor, Department of Emergency Medicine, Queen's University, Kingston, Ontario, Canada; ORCID: https://orcid.org/0000-0001-6963-7948
- Laura McEwen
- L. McEwen is director of assessment and evaluation of postgraduate medical education and assistant professor, Department of Pediatrics, Postgraduate Medical Education, Queen's University, Kingston, Ontario, Canada; ORCID: https://orcid.org/0000-0003-2457-5311
- Nancy Dalgarno
- N. Dalgarno is director of education scholarship, Office of Professional Development and Educational Scholarship, and assistant professor (adjunct), Department of Biomedical and Molecular Sciences and Master of Health Professions Education, Queen's University, Kingston, Ontario, Canada; ORCID: https://orcid.org/0000-0001-7932-9949
- Karen W Schultz
- K.W. Schultz is professor, Department of Family Medicine, and associate dean of postgraduate medical education, Queen's University, Kingston, Ontario, Canada; ORCID: https://orcid.org/0000-0003-0208-3981
- Andrew K Hall
- A.K. Hall is associate professor and vice chair of education, Department of Emergency Medicine, University of Ottawa, and clinician educator, Royal College of Physicians and Surgeons of Canada, Ottawa, Ontario, Canada; ORCID: https://orcid.org/0000-0003-1227-5397

36
Perez S, Schwartz A, Hauer KE, Karani R, Hirshfield LE, McNamara M, Henry D, Lupton KL, Woods M, Teherani A. Developing Evidence for Equitable Assessment Characteristics Based on Clinical Learner Preferences Using Discrete Choice Experiments. ACADEMIC MEDICINE : JOURNAL OF THE ASSOCIATION OF AMERICAN MEDICAL COLLEGES 2023; 98:S108-S115. [PMID: 37983403] [DOI: 10.1097/acm.0000000000005360] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 08/11/2023]
Abstract
PURPOSE Medical education is only beginning to explore the factors that contribute to equitable assessment in clinical settings. Increasing knowledge about equitable assessment ensures a quality medical education experience that produces an excellent, diverse physician workforce equipped to address the health care disparities facing patients and communities. Through the lens of the Anti-Deficit Achievement framework, the authors aimed to obtain evidence for a model for equitable assessment in clinical training. METHOD A discrete choice experiment approach was used, with an instrument comprising 6 attributes, each at 2 levels, to reveal learner preferences for the inclusion of each attribute in equitable assessment. Self-identified underrepresented in medicine (UIM) and non-underrepresented in medicine (non-UIM) fourth-year medical students and senior residents (N = 306) in medicine, pediatrics, and surgery at 9 institutions across the United States completed the instrument. A mixed-effects logit model was used to determine which attributes learners valued most. RESULTS Participants valued the inclusion of all assessment attributes provided, except for peer comparison. The most valued attribute of an equitable assessment was how learner identity, background, and trajectory were appreciated by clinical supervisors. The next most valued attributes were assessment of growth, supervisor bias training, narrative assessments, and assessment of learners' patient care, with participants willing to trade off any of the attributes to get several others. There were no significant differences in the value placed on assessment attributes between UIM and non-UIM learners. Residents placed greater value than medical students on supervisors' appreciation of learner identity, background, and trajectory, and on supervisor bias training.
CONCLUSIONS This study offers support for the components of an antideficit-focused model for equity in assessment and informs efforts to promote UIM learner success and guide equity, diversity, and inclusion initiatives in medical education.
Affiliation(s)
- Sandra Perez
- S. Perez is a resident, Department of Pathology, University of California, San Francisco, School of Medicine, San Francisco, California
- Alan Schwartz
- A. Schwartz is the Michael Reese Endowed Professor of Medical Education, Department of Medical Education, and research professor, Department of Pediatrics, University of Illinois at Chicago, Chicago, Illinois, and director, Association of Pediatric Program Directors Longitudinal Educational Assessment Research Network (APPD LEARN), McLean, Virginia; ORCID: http://orcid.org/0000-0003-3809-6637
- Karen E Hauer
- K.E. Hauer is professor, Department of Medicine, and associate dean for competency assessment and professional standards, University of California, San Francisco, School of Medicine, San Francisco, California; ORCID: https://orcid.org/0000-0002-8812-4045
- Reena Karani
- R. Karani is professor, Departments of Medicine, Medical Education, and Geriatrics and Palliative Medicine, and director, Institute for Medical Education, Icahn School of Medicine at Mount Sinai, New York, New York
- Laura E Hirshfield
- L.E. Hirshfield is the Dr. Georges Bordage Medical Education Faculty Scholar, associate professor, PhD program codirector, and associate director of graduate studies, Department of Medical Education, University of Illinois College of Medicine, Chicago, Illinois; ORCID: https://orcid.org/0000-0003-0894-2994
- Margaret McNamara
- M. McNamara is professor, Department of Pediatrics, and pediatric residency program director, University of California, San Francisco, School of Medicine, San Francisco, California
- Duncan Henry
- D. Henry is associate professor, Department of Pediatrics, University of California, San Francisco, School of Medicine, San Francisco, California
- Katherine L Lupton
- K.L. Lupton is professor, Department of Medicine, University of California, San Francisco, School of Medicine, San Francisco, California
- Majka Woods
- M. Woods holds the Dibrell Family Professorship in the Art of Medicine, and is assistant professor, Department of Surgery, and vice dean for academic affairs, John Sealy School of Medicine at the University of Texas Medical Branch, Galveston, Texas
- Arianne Teherani
- A. Teherani is professor, Department of Medicine, education scientist, Center for Faculty Educators, director of program evaluation and education continuous quality improvement, and founding codirector, University of California Center for Climate Health and Equity, University of California, San Francisco, School of Medicine, San Francisco, California; ORCID: http://orcid.org/0000-0003-2936-9832

37
Shaw T, LaDonna KA, Hauer KE, Khalife R, Sheu L, Wood TJ, Montgomery A, Rauscher S, Aggarwal S, Humphrey-Murto S. Having a Bad Day Is Not an Option: Learner Perspectives on Learner Handover. ACADEMIC MEDICINE : JOURNAL OF THE ASSOCIATION OF AMERICAN MEDICAL COLLEGES 2023; 98:S58-S64. [PMID: 37983397] [DOI: 10.1097/acm.0000000000005433] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 08/12/2023]
Abstract
PURPOSE Learner handover is the sharing of learner-related information between supervisors involved in their education. The practice allows learners to build upon previous assessments and can support the growth-oriented focus of competency-based medical education. However, learner handover also carries the risk of biasing future assessments and breaching learner confidentiality. Little is known about learner handover's educational impact, and what is known is largely informed by faculty and institutional perspectives. The purpose of this study was to explore learner handover from the learner perspective. METHOD Constructivist grounded theory was used to explore learners' perspectives and beliefs around learner handover. Twenty-nine semistructured interviews were completed with medical students and residents from the University of Ottawa and University of California, San Francisco. Interviews took place between April and December 2020. Using the constant comparative approach, themes were identified through an iterative process. RESULTS Learners were generally unaware of specific learner handover practices, although most recognized circumstances where both formal and informal handovers may occur. Learners appreciated the potential for learner handover to tailor education, guide entrustment and supervision decisions, and support patient safety, but worried about its potential to bias future assessments and breach confidentiality. Furthermore, learners were concerned that information-sharing may be more akin to gossip rather than focused on their educational needs and feared unfair scrutiny and irreversible long-term career consequences from one shared mediocre performance. Altogether, these concerns fueled an overwhelming pressure to perform. CONCLUSIONS While learners recognized the rationale for learner handover, they feared the possible inadvertent short- and long-term impact on their training and future careers. 
Designing policies that support transparency and build awareness around learner handover may mitigate unintended consequences that can threaten learning and the learner-supervisor relationship, ensuring learner handover benefits the learner as intended.
Affiliation(s)
- Tammy Shaw
- T. Shaw is assistant professor, Department of Medicine, University of Ottawa, Ottawa, Ontario, Canada
- Kori A LaDonna
- K.A. LaDonna is associate professor, Department of Innovation in Medical Education and Department of Medicine, University of Ottawa, Ottawa, Ontario, Canada
- Karen E Hauer
- K.E. Hauer is associate dean for competency assessment and professional standards and professor, Department of Medicine, University of California, San Francisco, School of Medicine, San Francisco, California
- Roy Khalife
- R. Khalife is assistant professor, Department of Medicine, University of Ottawa, Ottawa, Ontario, Canada
- Leslie Sheu
- L. Sheu is a physician, Private Medical, San Francisco, California
- Timothy J Wood
- T.J. Wood is professor, Department of Innovation in Medical Education, Faculty of Medicine, University of Ottawa, Ottawa, Ontario, Canada
- Anne Montgomery
- A. Montgomery is associate program director, Washington Regional, University of Arkansas for Medical Sciences, Little Rock, Arkansas
- Scott Rauscher
- S. Rauscher is project coordinator, Department of Innovation in Medical Education, University of Ottawa, Ottawa, Ontario, Canada
- Simran Aggarwal
- S. Aggarwal is a first-year resident in pediatrics, McMaster University, Hamilton, Ontario, Canada
- Susan Humphrey-Murto
- S. Humphrey-Murto is associate professor, Department of Medicine and Department of Innovation in Medical Education, University of Ottawa, Ottawa, Ontario, Canada

38
Ng FYC, Thirunavukarasu AJ, Cheng H, Tan TF, Gutierrez L, Lan Y, Ong JCL, Chong YS, Ngiam KY, Ho D, Wong TY, Kwek K, Doshi-Velez F, Lucey C, Coffman T, Ting DSW. Artificial intelligence education: An evidence-based medicine approach for consumers, translators, and developers. Cell Rep Med 2023; 4:101230. [PMID: 37852174] [PMCID: PMC10591047] [DOI: 10.1016/j.xcrm.2023.101230] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Received: 05/07/2023] [Revised: 09/04/2023] [Accepted: 09/15/2023] [Indexed: 10/20/2023]
Abstract
Current and future healthcare professionals are generally not trained to cope with the proliferation of artificial intelligence (AI) technology in healthcare. To design a curriculum that caters to variable baseline knowledge and skills, clinicians may be conceptualized as "consumers", "translators", or "developers". The changes required of medical education because of AI innovation are linked to those brought about by evidence-based medicine (EBM). We outline a core curriculum for AI education of future consumers, translators, and developers, emphasizing the links between AI and EBM, with suggestions for how teaching may be integrated into existing curricula. We consider the key barriers to implementation of AI in the medical curriculum: time, resources, variable interest, and knowledge retention. By improving AI literacy rates and fostering a translator- and developer-enriched workforce, innovation may be accelerated for the benefit of patients and practitioners.
Collapse
Affiliation(s)
- Faye Yu Ci Ng
- Artificial Intelligence and Digital Innovation, Singapore Eye Research Institute, Singapore National Eye Center, Singapore Health Service, Singapore, Singapore; Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore
| | - Arun James Thirunavukarasu
- Artificial Intelligence and Digital Innovation, Singapore Eye Research Institute, Singapore National Eye Center, Singapore Health Service, Singapore, Singapore; University of Cambridge School of Clinical Medicine, Cambridge, UK; Oxford University Clinical Academic Graduate School, University of Oxford, Oxford, UK
| | - Haoran Cheng
- Artificial Intelligence and Digital Innovation, Singapore Eye Research Institute, Singapore National Eye Center, Singapore Health Service, Singapore, Singapore; Rollins School of Public Health, Emory University, Atlanta, GA, USA; Duke-NUS Medical School, National University of Singapore, Singapore, Singapore
| | - Ting Fang Tan
- Artificial Intelligence and Digital Innovation, Singapore Eye Research Institute, Singapore National Eye Center, Singapore Health Service, Singapore, Singapore
| | - Laura Gutierrez
- Artificial Intelligence and Digital Innovation, Singapore Eye Research Institute, Singapore National Eye Center, Singapore Health Service, Singapore, Singapore
| | - Yanyan Lan
- Institute for AI Industry Research (AIR), Tsinghua University, Beijing, China
| | | | - Yap Seng Chong
- Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore; Dean's Office, Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore
| | - Kee Yuan Ngiam
- Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore; Biomedical Engineering, School of Engineering, National University of Singapore, Singapore, Singapore
| | - Dean Ho
- Biomedical Engineering, School of Engineering, National University of Singapore, Singapore, Singapore; Insitute for Digital Medicine (WisDM), N.1 Institute for Health, National University of Singapore, Singapore, Singapore; Department of Pharmacology, National University of Singapore, Singapore, Singapore
| | - Tien Yin Wong
- Tsinghua Medicine, Tsinghua University, Beijing, China
- Kenneth Kwek
- Chief Executive Office, Singapore General Hospital, SingHealth, Singapore, Singapore
- Finale Doshi-Velez
- Harvard Paulson School of Engineering and Applied Sciences, Harvard University, Cambridge, MA, USA
- Catherine Lucey
- Executive Vice Chancellor and Provost Office, University of California, San Francisco, San Francisco, CA, USA
- Thomas Coffman
- Duke-NUS Medical School, National University of Singapore, Singapore, Singapore
- Daniel Shu Wei Ting
- Artificial Intelligence and Digital Innovation, Singapore Eye Research Institute, Singapore National Eye Center, Singapore Health Service, Singapore, Singapore; Duke-NUS Medical School, National University of Singapore, Singapore, Singapore; Byers Eye Institute, Stanford University, Palo Alto, CA, USA.
39
Cordovani L, Tran C, Wong A, Jack SM, Monteiro S. Undergraduate Learners' Receptiveness to Feedback in Medical Schools: A Scoping Review. MEDICAL SCIENCE EDUCATOR 2023; 33:1253-1269. [PMID: 37886291 PMCID: PMC10597920 DOI: 10.1007/s40670-023-01858-0] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Accepted: 08/10/2023] [Indexed: 10/28/2023]
Abstract
Feedback from educators to learners is considered an important element of effective learning in medical school. While early studies focused on the processes of providing feedback, recent work has shown that factors related to how learners receive feedback seem to be equally important. Considering that the literature on this topic is new in medical education, and that studies are diverse and methodologically variable, we conducted a scoping review to map the articles on receptiveness to feedback, provide an overview of its related factors, identify the types of research conducted in this area, and document knowledge gaps in the existing literature. Using the Joanna Briggs Institute scoping review methodology, we searched four databases (CINAHL, Ovid, PubMed, and Web of Science) and screened 9120 abstracts, resulting in 98 articles for our final analysis. In this sample, 80% of studies on the feedback receiver were published in the last 10 years, and there is wide variation in the studies' methodologies. The main factors that affect medical students' receptiveness to feedback are students' characteristics, feedback content, educators' credibility, and the learning environment. Feedback literacy is a very recent and rarely used term in medical education and is therefore an important area for further investigation. Lastly, we identified some gaps in the literature that might guide future research, such as studying receptiveness to feedback based on academic seniority and the long-term impacts of feedback literacy on learning. Supplementary Information The online version contains supplementary material available at 10.1007/s40670-023-01858-0.
Affiliation(s)
- Ligia Cordovani
- Department of Health Research Methods, Evidence, and Impact, McMaster University, Hamilton, ON Canada
- Cody Tran
- School of Medicine, McMaster University, Hamilton, ON Canada
- Anne Wong
- Department of Anesthesia, McMaster University, Hamilton, ON Canada
- Susan M. Jack
- School of Nursing, McMaster University, Hamilton, ON Canada
- Department of Health Research Methods, Evidence and Impact, McMaster University, Hamilton, ON Canada
- Sandra Monteiro
- Department of Medicine, McMaster University, Hamilton, ON Canada
40
Costello LL, Cho DD, Daniel RC, Dida J, Pritchard J, Pardhan K. Emergency medicine resident perceptions of simulation-based training and assessment in competence by design. CAN J EMERG MED 2023; 25:828-835. [PMID: 37665550 DOI: 10.1007/s43678-023-00577-0] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/08/2023] [Accepted: 08/09/2023] [Indexed: 09/05/2023]
Abstract
OBJECTIVES With the launch of competence by design (CBD) in emergency medicine (EM) in Canada, there are growing recommendations on the use of simulation for the training and assessment of residents. Many of these recommendations have been made by educational leaders and often exclude the resident stakeholder. This study sought to explore residents' experiences and perceptions of simulation in CBD. METHODS Qualitative data were collected from November 2020 to May 2021 at McMaster University and the University of Toronto after ethics approval was received from both sites. Eligible participants were EM residents, who were interviewed by a trained interviewer using a semi-structured interview guide. All interviews were recorded, transcribed, coded, and collapsed into themes. Data analysis was guided by constructivist grounded theory. RESULTS A total of seventeen residents participated. Thematic analysis revealed three major themes: 1) impact of CBD on resident views of simulation; 2) simulation's role in obtaining entrustable professional activities (EPAs) and filling educational gaps; and 3) conflicting feelings on the use of high-stakes simulation-based assessment in CBD. CONCLUSIONS EM residents strongly support using simulation in CBD and acknowledge its ability to bridge educational gaps and fulfill specific EPAs. However, this study suggests some unintended consequences of CBD and conflicting views around simulation-based assessment that challenge resident perceptions of simulation as a safe learning space. As CBD evolves, educational leaders should consider these impacts when making future curricular changes or recommendations.
Affiliation(s)
- Lorne L Costello
- Division of Emergency Medicine, Department of Medicine, University of Toronto, Toronto, ON, Canada.
- Department of Emergency Services, Sunnybrook Health Sciences Centre, Toronto, ON, Canada.
- Dennis D Cho
- Division of Emergency Medicine, Department of Medicine, University of Toronto, Toronto, ON, Canada
- Department of Emergency Medicine, University Health Network, Toronto, ON, Canada
- Ryan C Daniel
- Department of Otolaryngology-Head & Neck Surgery, University of Toronto, Toronto, ON, Canada
- Joana Dida
- Division of Emergency Medicine, Department of Medicine, McMaster University, Hamilton, ON, Canada
- Jodie Pritchard
- Department of Emergency Medicine, Queen's University, Kingston, ON, Canada
- Kaif Pardhan
- Division of Emergency Medicine, Department of Medicine, University of Toronto, Toronto, ON, Canada
- Department of Emergency Services, Sunnybrook Health Sciences Centre, Toronto, ON, Canada
- Division of Pediatric Emergency Medicine, Department of Pediatrics, McMaster University, Hamilton, ON, Canada
41
Sebok-Syer SS, Lingard L, Panza M, Van Hooren TA, Rassbach CE. Supportive and collaborative interdependence: Distinguishing residents' contributions within health care teams. MEDICAL EDUCATION 2023; 57:921-931. [PMID: 36822577 DOI: 10.1111/medu.15064] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/21/2022] [Revised: 02/04/2023] [Accepted: 02/21/2023] [Indexed: 06/18/2023]
Abstract
INTRODUCTION Individual assessments disregard team contributions, while team assessments disregard an individual's contributions. Interdependence has been put forth as a conceptual bridge between our educational traditions of assessing individual performance and our imminent challenge of assessing team-based performance without losing sight of the individual. The purpose of this study was to develop a more refined conceptualisation of interdependence to inform the creation of measures that can assess the interdependence of residents within health care teams. METHODS Following a constructivist grounded theory approach, we conducted 49 semi-structured interviews with various members of health care teams (e.g. physicians, nurses, pharmacists, social workers and patients) across two different clinical specialties-Emergency Medicine and Paediatrics-at two separate sites. Data collection and analysis occurred iteratively. Constant comparative inductive analysis was used, and coding consisted of three stages: initial, focused and theoretical. RESULTS We asked participants to reflect upon interdependence and describe how it exists in their clinical setting. All participants acknowledged the existence of interdependence, but they did not view it as part of a linear spectrum where interdependence becomes independence. Our analysis refined the conceptualisation of interdependence to include two types: supportive and collaborative. Supportive interdependence occurs within health care teams when one member demonstrates insufficient expertise to perform within their scope of practice. Collaborative interdependence, on the other hand, was not triggered by lack of experience/expertise within an individual's scope of practice, but rather recognition that patient care requires contributions from other team members. 
CONCLUSION In order to assess a team's collective performance without losing sight of the individual, we need to capture interdependent performances and characterise the nature of such interdependence. Moving away from a linear trajectory where independence is seen as the end goal can also help support efforts to measure an individual's competence as an interdependent member of a health care team.
Affiliation(s)
- Lorelei Lingard
- Department of Medicine and Centre for Education Research and Innovation, Schulich School of Medicine & Dentistry, Western University, London, Ontario, Canada
- Michael Panza
- Centre for Education Research and Innovation, Western University, London, Ontario, Canada
- Tamara A Van Hooren
- Department of Pediatrics, Schulich School of Medicine and Dentistry, Western University, London, Ontario, Canada
42
Birkeli CN, Normand C, Rø KI, Kvernenes M. Educational supervision in internal medicine residency training - a scoping review. BMC MEDICAL EDUCATION 2023; 23:644. [PMID: 37679738 PMCID: PMC10486128 DOI: 10.1186/s12909-023-04629-y] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/14/2022] [Accepted: 08/29/2023] [Indexed: 09/09/2023]
Abstract
BACKGROUND Although supervision is an important part of residency training, its scope and how it relates to other types of support, such as mentoring, precepting and feedback, remain unclear. While clinical supervision consists of ongoing instruction and feedback in the workplace setting, educational supervision is a formalized component of postgraduate medical education and supports the process that facilitates a trainee's progression throughout their training. Since medical specialties have different supervisory traditions, this study focuses on educational supervision in internal medicine. Our aim was to investigate what is known about educational supervision practices in internal medicine and the role of educational supervision in supporting residents' learning. METHODS We conducted a scoping review of the literature on educational supervision in residency training in internal medicine based on Levac et al.'s modification of Arksey and O'Malley's six-step framework. The literature search was performed in the following databases: Medline, Embase, Web of Science and the Educational Resources Information Center. In addition, we conducted a handsearch in Medical Teacher and Google Scholar. We followed the PRISMA guidelines for systematic reviews. RESULTS Eighteen of the 3,284 identified articles were included in the analysis. We found few empirical studies describing how educational supervision is conducted and what effect routine educational supervision has on residents' learning. Our findings suggest that the terminology can be confusing and that educational supervision practices in internal medicine have a weak theoretical foundation. CONCLUSION The distinction between educational supervision and other support structures, such as mentoring and feedback, has not been clearly defined in the research literature.
We argue that shared terminology is needed to better understand current educational practices and to facilitate clear communication about how to help residents learn.
Affiliation(s)
- Cecilie Normann Birkeli
- Institute for Studies of the Medical Profession, P.O. Box 1153, Oslo, NO-0107, Norway.
- Center for Medical Education, Department of Clinical Medicine, Faculty of Medicine, University of Bergen, Bergen, Norway.
- Camilla Normand
- Department of Quality and Health Technology, University of Stavanger, Stavanger, Norway
- Department of Internal Medicine, Stavanger University Hospital, Stavanger, Norway
- Karin Isaksson Rø
- Institute for Studies of the Medical Profession, P.O. Box 1153, Oslo, NO-0107, Norway
- Monika Kvernenes
- Center for Medical Education, Department of Clinical Medicine, Faculty of Medicine, University of Bergen, Bergen, Norway
43
Fisher K, Fielding A, Ralston A, Holliday E, Ball J, Tran M, Davey A, Tapley A, Magin P. Exam prediction and the general Practice Registrar Competency Assessment Grid (GPR-CAG). EDUCATION FOR PRIMARY CARE 2023; 34:268-276. [PMID: 38011869 DOI: 10.1080/14739879.2023.2269884] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/09/2023] [Accepted: 10/09/2023] [Indexed: 11/29/2023]
Abstract
BACKGROUND In GP training, identifying early predictors of poor summative examination performance can be challenging. We aimed to establish whether external clinical teaching visit (ECTV) performance, measured using a validated instrument (GP Registrar Competency Assessment Grid, GPR-CAG) is predictive of Royal Australian College of General Practitioners (RACGP) Fellowship examination performance. METHODS A retrospective cohort study including GP registrars in New South Wales/Australian Capital Territory with ECTV data recorded during their first training term (GPT1), between 2014 and 2018, who attempted at least one Fellowship examination. Independent variables of interest included the four GPR-CAG factors assessed in GPT1 ('patient-centredness/caring', 'formulating hypotheses/management plans', 'professional responsibilities', 'physical examination skills'). Outcomes of interest included individual scores of the three summative examinations (Applied Knowledge Test (AKT); Key Feature Problem (KFP); and the Objective Structured Clinical Examination (OSCE)) and overall Pass/Fail status. Univariable and multivariable regression analyses were performed. RESULTS Univariably, there were statistically significant associations (p < 0.01) between all four GPR-CAG factors and all four summative examination outcomes, except for 'formulating hypotheses/management plans' and OSCE score (p = 0.07). On multivariable analysis, each factor was significantly associated (p < 0.05) with at least one exam outcome, and 'physical examination skills' was significantly associated (p < 0.05) with all four exam outcomes. DISCUSSION ECTV performance, via GPR-CAG scores, is predictive of RACGP Fellowship exam performance. The univariable findings highlight the pragmatic utility of ECTVs in flagging registrars who are at-risk of poor exam performance, facilitating early intervention. 
The multivariable associations of GPR-CAG scores and examination performance suggest that these scores provide predictive ability beyond that of other known predictors.
Affiliation(s)
- Katie Fisher
- School of Medicine and Public Health, University of Newcastle, University Drive, Callaghan, NSW, Australia
- NSW and ACT Research and Evaluation Unit, GP Synergy, Mayfield West, NSW, Australia
- Alison Fielding
- School of Medicine and Public Health, University of Newcastle, University Drive, Callaghan, NSW, Australia
- NSW and ACT Research and Evaluation Unit, GP Synergy, Mayfield West, NSW, Australia
- Anna Ralston
- School of Medicine and Public Health, University of Newcastle, University Drive, Callaghan, NSW, Australia
- NSW and ACT Research and Evaluation Unit, GP Synergy, Mayfield West, NSW, Australia
- Elizabeth Holliday
- School of Medicine and Public Health, University of Newcastle, University Drive, Callaghan, NSW, Australia
- Jean Ball
- Clinical Research Design IT and Statistical Support, Hunter Medical Research Institute, New Lambton Heights, NSW, Australia
- Michael Tran
- School of Population Health, University of New South Wales, Sydney, Australia
- Andrew Davey
- School of Medicine and Public Health, University of Newcastle, University Drive, Callaghan, NSW, Australia
- NSW and ACT Research and Evaluation Unit, GP Synergy, Mayfield West, NSW, Australia
- Amanda Tapley
- School of Medicine and Public Health, University of Newcastle, University Drive, Callaghan, NSW, Australia
- NSW and ACT Research and Evaluation Unit, GP Synergy, Mayfield West, NSW, Australia
- Parker Magin
- School of Medicine and Public Health, University of Newcastle, University Drive, Callaghan, NSW, Australia
- NSW and ACT Research and Evaluation Unit, GP Synergy, Mayfield West, NSW, Australia
44
van Ede AE, Claessen RJM, van Gils M, Gorgels WJMJ, Reuzel RPB, Smeets AGJM, van Gurp PJM. How to coach student professional development during times of challenges and uncertainties. BMC MEDICAL EDUCATION 2023; 23:600. [PMID: 37608301 PMCID: PMC10463913 DOI: 10.1186/s12909-023-04588-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/16/2023] [Accepted: 08/13/2023] [Indexed: 08/24/2023]
Abstract
BACKGROUND What we teach our (bio)medical students today may differ from the future context in which they will operate as health professionals. This shifting and highly demanding profession requires that we equip these students with adaptive competencies for their future careers. We aimed to develop a framework to promote and facilitate professional development from day one, guided by self-awareness and self-directed learning. APPROACH Based on self-directed, transformative and experiential learning, patient involvement and teamwork, we developed a 3-year longitudinal personal-professional development (LPPD) program in the (bio)medical sciences undergraduate curriculum to stimulate self-driven professional development in a variable context. Through group meetings and individual coach consultations, students address topics such as self-awareness, self-directed and lifelong learning, collaboration, well-being and resilience. To drive learning, students receive extensive narrative feedback on an essay assignment. EVALUATION Experiences and outcomes were evaluated with questionnaires and in-depth interviews. Students and coaches value personal and professional development in a safe learning environment that encourages self-exploration, diversity and connection. Over time, students show more self-awareness and self-directedness and increasingly apply trained skills, resulting in professional identity formation. Students need more clarification to understand the concept of assessment as learning. IMPLICATIONS With the generic content of a longitudinal program embedded in a meaningful environment, the personal and professional development of students can be facilitated and stimulated to face future challenges. When translating to other curricula, we suggest considering the complexity of professional development and the time expenditure needed for students to explore, experiment and practice. An early start and thorough integration are recommended.
Affiliation(s)
- Annelies E van Ede
- Department of Rheumatology, Radboud University Nijmegen Medical Centre NL, Nijmegen, Netherlands.
- Roy J M Claessen
- Department of Internal Medicine, Radboud University Nijmegen Medical Centre NL, Nijmegen, Netherlands
- Merel van Gils
- Radboud Health Academy, Radboud University Nijmegen Medical Centre NL, Nijmegen, Netherlands
- Wim J M J Gorgels
- Department of Primary and Community Care, Radboud University Nijmegen Medical Centre NL, Nijmegen, Netherlands
- Rob P B Reuzel
- Department of Health Evidence, Radboud University Nijmegen Medical Centre NL, Nijmegen, Netherlands
- Annemieke G J M Smeets
- Department of Pathology, Radboud University Nijmegen Medical Centre NL, Nijmegen, Netherlands
- Petra J M van Gurp
- Department of Internal Medicine, Radboud University Nijmegen Medical Centre NL, Nijmegen, Netherlands
45
Greenfield J, Qua K, Prayson RA, Bierer SB. "It Changed How I Think"-Impact of Programmatic Assessment Upon Practicing Physicians: A Qualitative Study. MEDICAL SCIENCE EDUCATOR 2023; 33:963-974. [PMID: 37546195 PMCID: PMC10403454 DOI: 10.1007/s40670-023-01829-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Accepted: 06/23/2023] [Indexed: 08/08/2023]
Abstract
Programmatic assessment is a systematic approach used to document and assess learner performance. It offers learners frequent formative feedback from a variety of contexts and uses both high- and low-stakes assessments to determine student progress. Existing research has explored learner and faculty perceptions of programmatic assessment, reporting favorable impact on faculty understanding of the importance of assessment stakes and feedback to learners while students report the ability to establish and navigate towards goals and reflect on their performance. The Cleveland Clinic Lerner College of Medicine (CCLCM) of Case Western Reserve University adopted programmatic assessment methods at its inception. With more than 18 years' experience with programmatic assessment and a portfolio-based assessment system, CCLCM is well-positioned to explore its graduates' perceptions of their programmatic assessment experiences during and after medical school. In 2020, the investigators interviewed 26 of the 339 physician graduates. Participants were purposefully sampled to represent multiple class cohorts (2009-2019), clinical specialties, and practice locations. The investigators analyzed interview transcripts using thematic analysis informed by the frameworks of self-determination theory and professional identity formation. The authors identified themes and support each with participant quotes from the interviews. Based on findings, the investigators compiled a series of recommendations for other institutions who have already or plan to incorporate elements of programmatic assessment into their curricula. The authors concluded by discussing future directions for research and additional avenues of inquiry.
Affiliation(s)
- Jessica Greenfield
- University of Virginia School of Medicine, Room 2008A Pinn Hall, Box 800866, Charlottesville, VA 22908-0366 USA
- Kelli Qua
- Case Western Reserve University School of Medicine, Cleveland, OH USA
- Richard A. Prayson
- Department of Anatomic Pathology, Cleveland Clinic Lerner College of Medicine of Case Western Reserve University, Cleveland Clinic, Cleveland, OH USA
- S. Beth Bierer
- Cleveland Clinic Lerner College of Medicine of Case Western Reserve University, Cleveland, OH USA
46
Laverdure M, Gomez-Garibello C, Snell L. Residents as Medical Coaches. JOURNAL OF SURGICAL EDUCATION 2023; 80:1067-1074. [PMID: 37271599 DOI: 10.1016/j.jsurg.2023.05.003] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/30/2023] [Revised: 04/25/2023] [Accepted: 05/08/2023] [Indexed: 06/06/2023]
Abstract
OBJECTIVES With the recent implementation of Competency-based Medical Education (CBME) and its emphasis on direct observation of learners, there is increased interest in the concept of clinical coaching. While there is considerable literature on the role of attending physicians as coaches, little data is available on the role of residents as coaches and on residents' perceptions of effective coaching. We aimed to identify distinct characteristics of residents' coaching, to examine what residents valued most in clinical coaches, and to explore trainees' ideas about how to optimize this role. DESIGN We performed an exploratory qualitative study using 45-minute semi-structured interviews. We conducted a thematic analysis of the interview transcripts using both inductive and deductive coding. PARTICIPANTS We invited and interviewed 5 surgical and 5 nonsurgical residents, and 3 surgical and 3 nonsurgical attending staff. Residents were recruited from all postgraduate levels and from a variety of programs. SETTING Our study was done in a large tertiary teaching hospital. RESULTS Residents perceived that they have a significant role as coaches for junior learners, one distinct from the attending's role. The proximity between the coach and the coachee leads to a different supervisor-learner rapport. This was a benefit, as learners described feeling more comfortable making mistakes and seeking feedback, which potentiates effective coaching. Residents reported that it was easier to coach their recently acquired skills, as the subtleties of the tasks and the troubleshooting were fresher in memory. Residents expressed appreciation for a coach who values autonomy and does not intervene except when patient safety is at risk. Strategies identified to further optimize residents' role as coaches include making coaching a priority, ensuring dedicated time, and offering teaching sessions on coaching.
CONCLUSIONS Residents have distinct roles as coaches, driven by their recent experience being coached and as near peers. More research is needed to evaluate concrete measures to optimize residents' role as coaches and to improve their coaching skills.
Affiliation(s)
- Morgane Laverdure
- Department of Medicine, McGill University Health Centre, Montreal, Quebec, Canada.
- Linda Snell
- Department of Medicine, McGill University Health Centre, Montreal, Quebec, Canada; Institute of Health Sciences Education, McGill University, Montreal, Quebec, Canada
47
Miller KA, Nagler J, Wolff M, Schumacher DJ, Pusic MV. It Takes a Village: Optimal Graduate Medical Education Requires a Deliberately Developmental Organization. PERSPECTIVES ON MEDICAL EDUCATION 2023; 12:282-293. [PMID: 37520509 PMCID: PMC10377742 DOI: 10.5334/pme.936] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 02/14/2023] [Accepted: 07/06/2023] [Indexed: 08/01/2023]
Abstract
Coaching is proposed as a means of improving the learning culture of medicine. By fostering trusting teacher-learner relationships, learners are encouraged to embrace feedback and make the most of failure. This paper posits that a cultural shift is necessary to fully harness the potential of coaching in graduate medical education. We introduce the deliberately developmental organization framework, a conceptual model focusing on three core dimensions: developmental communities, developmental aspirations, and developmental practices. These dimensions broaden the scope of coaching interactions. Implementing this organizational change within graduate medical education might be challenging, yet we argue that embracing deliberately developmental principles can embed coaching into everyday interactions and foster a culture in which discussing failure to maximize learning becomes acceptable. By applying the dimensions of developmental communities, aspirations, and practices, we present a six-principle roadmap towards transforming graduate medical education training programs into deliberately developmental organizations.
Affiliation(s)
- Kelsey A. Miller
- Pediatrics and Emergency Medicine, Harvard Medical School, Boston, MA, USA
- Joshua Nagler
- Pediatrics and Emergency Medicine, Harvard Medical School, Boston, MA, USA
- Margaret Wolff
- Emergency Medicine and Pediatrics, University of Michigan Medical School, Ann Arbor, MI, USA
- Daniel J. Schumacher
- Cincinnati Children’s Hospital Medical Center and the University of Cincinnati College of Medicine, Cincinnati, OH, USA
- Martin V. Pusic
- Pediatrics and Emergency Medicine, Harvard Medical School, Boston, MA, USA
48
Loosveld LM, Driessen EW, Theys M, Van Gerven PWM, Vanassche E. Combining Support and Assessment in Health Professions Education: Mentors' and Mentees' Experiences in a Programmatic Assessment Context. PERSPECTIVES ON MEDICAL EDUCATION 2023; 12:271-281. [PMID: 37426357 PMCID: PMC10327863 DOI: 10.5334/pme.1004] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 04/06/2023] [Accepted: 06/23/2023] [Indexed: 07/11/2023]
Abstract
Introduction Mentors in programmatic assessment support mentees with low-stakes feedback, which often also serves as input for high-stakes decision making. That process potentially causes tensions in the mentor-mentee relationship. This study explored how undergraduate mentors and mentees in health professions education experience combining developmental support and assessment, and what this means for their relationship. Methods The authors chose a pragmatic qualitative research approach and conducted semi-structured vignette-based interviews with 24 mentors and 11 mentees that included learners from medicine and the biomedical sciences. Data were analyzed thematically. Results How participants combined developmental support and assessment varied. In some mentor-mentee relationships it worked well, in others it caused tensions. Tensions were also created by unintended consequences of design decisions at the program level. Dimensions impacted by experienced tensions were: relationship quality, dependence, trust, and nature and focus of mentoring conversations. Mentors and mentees mentioned applying various strategies to alleviate tensions: transparency and expectation management, distinguishing between developmental support and assessment, and justifying assessment responsibility. Discussion Combining the responsibility for developmental support and assessment within an individual worked well in some mentor-mentee relationships, but caused tensions in others. On the program level, clear decisions should be made regarding the design of programmatic assessment: what is the program of assessment and how are responsibilities divided between all involved? If tensions arise, mentors and mentees can try to alleviate these, but continuous mutual calibration of expectations between mentors and mentees remains of key importance.
Affiliation(s)
- Lianne M. Loosveld
- Department of Educational Development & Research, School of Health Professions Education, Faculty of Health, Medicine and Life Sciences, Maastricht University, Universiteitssingel 60, 6229 ER Maastricht, the Netherlands
- Erik W. Driessen
- Department of Educational Development & Research, School of Health Professions Education, Faculty of Health, Medicine and Life Sciences, Maastricht University, Universiteitssingel 60, 6229 ER Maastricht, the Netherlands
| | - Mattias Theys
- Department of Educational Development & Research, School of Health Professions Education, Faculty of Health, Medicine and Life Sciences, Maastricht University, Universiteitssingel 60, 6229 ER Maastricht, the Netherlands
| | - Pascal W. M. Van Gerven
- Department of Educational Development & Research, School of Health Professions Education, Faculty of Health, Medicine and Life Sciences, Maastricht University, Universiteitssingel 60, 6229 ER Maastricht, the Netherlands
| | - Eline Vanassche
- Faculty of Psychology and Educational Sciences, KU Leuven Kulak, Etienne Sabbelaan 51, P.O. Box 7654, 8500 Kortrijk, Belgium
| |
49
Berger S, Stalmeijer RE, Marty AP, Berendonk C. Exploring the Impact of Entrustable Professional Activities on Feedback Culture: A Qualitative Study of Anesthesiology Residents and Attendings. ACADEMIC MEDICINE : JOURNAL OF THE ASSOCIATION OF AMERICAN MEDICAL COLLEGES 2023; 98:836-843. [PMID: 36812061 DOI: 10.1097/acm.0000000000005188] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/18/2023]
Abstract
PURPOSE Entrustable professional activities (EPAs) were introduced as a potential way to optimize workplace-based assessments. Yet, recent studies suggest that EPAs have not yet overcome all of the challenges to implementing meaningful feedback. The aim of this study was to explore the extent to which the introduction of EPAs via mobile app impacts feedback culture as experienced by anesthesiology residents and attending physicians. METHOD Using a constructivist grounded theory approach, the authors interviewed a purposive and theoretical sample of residents (n = 11) and attendings (n = 11) at the Institute of Anaesthesiology, University Hospital of Zurich, where EPAs had recently been implemented. Interviews took place between February and December 2021. Data collection and analysis were conducted iteratively. The authors used open, axial, and selective coding to gain knowledge and understanding on the interplay of EPAs and feedback culture. RESULTS Participants reflected on a number of changes in their day-to-day experience of feedback culture with the implementation of EPAs. Three main mechanisms were instrumental in this process: lowering the feedback threshold, change in feedback focus, and gamification. Participants felt a lower threshold to feedback seeking and giving; feedback conversations increased in frequency and tended to be shorter and more focused on a specific topic, while feedback content concentrated more on technical skills and gave more attention to average performances. Residents indicated that the app-based approach fostered a game-like motivation to "climb levels," while attendings did not perceive a game-like experience. CONCLUSIONS EPAs may offer a solution to problems of infrequent occurrence of feedback and invite attention to average performances and technical competencies, but may come at the expense of feedback on nontechnical skills. This study suggests that feedback culture and feedback instruments have a mutually interacting influence on each other.
Affiliation(s)
- Sabine Berger
- S. Berger is a third-year medical resident, Internal Medicine Training Program, St. Claraspital, Basel, Switzerland
- Renee E Stalmeijer
- R.E. Stalmeijer is associate professor, Department of Educational Development and Research, Faculty of Health, Medicine and Life Sciences, Maastricht University, Maastricht, The Netherlands
- Adrian P Marty
- A.P. Marty is currently senior attending physician and team lead for education, Institute of Anaesthesiology, Intensive Care and Pain Medicine, Orthopedic University Hospital Balgrist, Zurich, Switzerland. At the time of writing, he was attending physician, Institute of Anaesthesiology, University of Zurich, University Hospital of Zurich, Zurich, Switzerland
- Christoph Berendonk
- C. Berendonk is senior lecturer in medical education, Institute for Medical Education, University of Bern, Bern, Switzerland
50
Gauthier S, Braund H, Dalgarno N, Taylor D. Assessment-Seeking Strategies: Navigating the Decision to Initiate Workplace-Based Assessment. TEACHING AND LEARNING IN MEDICINE 2023:1-10. [PMID: 37384570 DOI: 10.1080/10401334.2023.2229803] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/29/2022] [Revised: 04/13/2023] [Accepted: 06/01/2023] [Indexed: 07/01/2023]
Abstract
Phenomenon: Competency-based medical education (CBME) relies on workplace-based assessment (WBA) to generate formative feedback (assessment for learning, AfL) and make inferences about competence (assessment of learning, AoL). When approaches to CBME rely on residents to initiate WBA, learners experience tension between seeking WBA for learning and for establishing competence. How learners resolve this tension may lead to unintended consequences for both AfL and AoL. We sought to explore the factors that impact both decisions to seek and not to seek WBA and use the findings to build a model of the assessment-seeking strategies used by residents. In building this model we consider how the link between WBA and promotion or progression within a program impacts an individual's assessment-seeking strategy. Approach: We conducted 20 semi-structured interviews with internal medicine residents at Queen's University about the factors that influence their decision to seek or avoid WBA. Using grounded theory methodology, we applied a constant comparative analysis to collect data iteratively and identify themes. A conceptual model was developed to describe the interaction of factors impacting the decision to seek and initiate WBA. Findings: Participants identified two main motivations when deciding to seek assessments: the need to fulfill program requirements and the desire to receive feedback for learning. Analysis suggested that these motivations are often at odds with each other. Participants also described several moderating factors that impact the decision to initiate assessments, irrespective of the primary underlying motivation: resident performance, assessor factors, training program expectations, and clinical context. A conceptual framework was developed to describe the factors that lead to strategic assessment-seeking behaviors. Insights: Faced with the dual purpose of WBA in CBME, resident behavior in initiating assessment is guided by specific assessment-seeking strategies. Strategies reflect individual underlying motivations, influenced by four moderating factors. These findings have broad implications for programmatic assessment in a CBME context, including validity considerations for assessment data used in summative decision-making, such as judgments of readiness for unsupervised practice.
Affiliation(s)
- Stephen Gauthier
- Department of Medicine, Queen's University, Kingston, Ontario, Canada
- Heather Braund
- Office of Professional Development and Educational Scholarship, Faculty of Health Sciences, Queen's University, Kingston, Ontario, Canada
- Nancy Dalgarno
- Office of Professional Development and Educational Scholarship, Faculty of Health Sciences, Queen's University, Kingston, Ontario, Canada
- David Taylor
- Department of Medicine, Queen's University, Kingston, Ontario, Canada