1. Miller DT, Michael S, Bell C, Brevik CH, Kaplan B, Svoboda E, Kendall J. Physical and biophysical markers of assessment in medical training: A scoping review of the literature. Med Teach 2024:1-9. PMID: 38688520; DOI: 10.1080/0142159x.2024.2345269. Received 16 February 2024; accepted 16 April 2024.
Abstract
PURPOSE Assessment in medical education has changed over time to measure the evolving skills required of current medical practice. Physical and biophysical markers of assessment attempt to use technology to gain insight into medical trainees' knowledge, skills, and attitudes. The authors conducted a scoping review to map the literature on the use of physical and biophysical markers of assessment in medical training. MATERIALS AND METHODS The authors searched seven databases on 1 August 2022 for publications that utilized physical or biophysical markers in the assessment of medical trainees (medical students, residents, fellows, and synonymous terms used in other countries). Physical or biophysical markers included: heart rate and heart rate variability, visual tracking and attention, pupillometry, hand motion analysis, skin conductivity, salivary cortisol, functional magnetic resonance imaging (fMRI), and functional near-infrared spectroscopy (fNIRS). The authors mapped the relevant literature using Bloom's taxonomy of knowledge, skills, and attitudes and extracted additional data, including study design, study environment, and novice vs. expert differentiation, from February to June 2023. RESULTS Of 6,069 unique articles, 443 met inclusion criteria. Heart rate variability was the most commonly used marker (n = 160, 36%), followed by visual attention (n = 143, 32%), hand motion analysis (n = 67, 15%), salivary cortisol (n = 67, 15%), fMRI (n = 29, 7%), skin conductivity (n = 26, 6%), fNIRS (n = 19, 4%), and pupillometry (n = 16, 4%). The largest share of studies (n = 167, 38%) analyzed non-technical skills, followed by studies that analyzed technical skills (n = 155, 35%), knowledge (n = 114, 26%), and attitudinal skills (n = 61, 14%). A total of 169 studies (38%) attempted to use physical or biophysical markers to differentiate between novices and experts.
CONCLUSION This review provides a comprehensive description of the current use of physical and biophysical markers in medical education training, including the technologies used and skills assessed. Additionally, while physical and biophysical markers have the potential to augment current assessment in medical education, there remain significant gaps in research surrounding the reliability, validity, cost, practicality, and educational impact of implementing these markers of assessment.
Affiliation(s)
- Danielle T Miller
- Department of Emergency Medicine, University of Colorado School of Medicine, Aurora, CO, USA
- Sarah Michael
- Department of Emergency Medicine, University of Colorado School of Medicine, Aurora, CO, USA
- Colin Bell
- Department of Emergency Medicine, University of Calgary, Calgary, Canada
- Cody H Brevik
- Department of Emergency Medicine, University of Colorado School of Medicine, Aurora, CO, USA
- Bonnie Kaplan
- Department of Emergency Medicine, University of Colorado School of Medicine, Aurora, CO, USA
- Ellie Svoboda
- Education Informationist, Strauss Health Sciences Library, University of Colorado Anschutz Medical Campus, Aurora, CO, USA
- John Kendall
- Department of Emergency Medicine, Stanford School of Medicine, Palo Alto, CA, USA
2. Bass GA, Kaplan LJ, Gaarder C, Coimbra R, Klingensmith NJ, Kurihara H, Zago M, Cioffi SPB, Mohseni S, Sugrue M, Tolonen M, Valcarcel CR, Tilsed J, Hildebrand F, Marzi I. European society for trauma and emergency surgery member-identified research priorities in emergency surgery: a roadmap for future clinical research opportunities. Eur J Trauma Emerg Surg 2024; 50:367-382. PMID: 38411700; PMCID: PMC11035411; DOI: 10.1007/s00068-023-02441-3. Received 17 October 2023; accepted 28 December 2023.
Abstract
BACKGROUND The European Society for Trauma and Emergency Surgery (ESTES) is the European community of clinicians providing care to the injured and critically ill surgical patient. ESTES has several interlinked missions: (1) the promotion of optimal emergency surgical care through networked advocacy, (2) promulgation of relevant clinical cognitive and technical skills, and (3) the advancement of scientific inquiry that closes knowledge gaps, iteratively improves upon surgical and perioperative practice, and guides decision-making rooted in scientific evidence. Faced with multitudinous opportunities for clinical research, ESTES undertook an exercise to determine member priorities for surgical research in the short-to-medium term; these research priorities were presented to a panel of experts to inform a 'road map' narrative review that anchored these research priorities in the contemporary surgical literature. METHODS Individual ESTES members in active emergency surgery practice were polled as a representative sample of end-users and were asked to rank potential areas of future research according to their personal perceptions of priority. Using the modified eDelphi method, an invited panel of ESTES-associated experts in academic emergency surgery then crafted a narrative review highlighting potential research priorities for the Society. RESULTS Seventy-two responding ESTES members from 23 countries provided feedback to guide the modified eDelphi expert consensus narrative review. Experts then crafted evidence-based mini-reviews highlighting knowledge gaps and areas of interest for future clinical research in emergency surgery: timing of surgery, inter-hospital transfer, diagnostic imaging in emergency surgery, the role of minimally invasive surgical techniques and Enhanced Recovery After Surgery (ERAS) protocols, patient-reported outcome measures, risk-stratification methods, disparities in access to care, geriatric outcomes, data registry and snapshot audit evaluations, interrogation of emerging technologies, and the delivery and benchmarking of emergency surgical training. CONCLUSIONS This manuscript presents the priorities for future clinical research in academic emergency surgery as determined by a sample of the ESTES membership. While the precise basis for prioritization was not evident, it may be anchored in disease prevalence, controversy around aspects of current patient care, or the identification of a knowledge gap. These expert-crafted, evidence-based mini-reviews provide useful insights that may guide the direction of future academic emergency surgery research efforts.
Affiliation(s)
- Gary Alan Bass
- Division of Traumatology, Emergency Surgery and Surgical Critical Care, Perelman School of Medicine, University of Pennsylvania, 51 N. 39th Street, MOB 1, Suite 120, Philadelphia, PA, 19104, USA
- Leonard Davis Institute of Health Economics (LDI), University of Pennsylvania, Philadelphia, PA, USA
- Center for Perioperative Outcomes Research and Transformation (CPORT), University of Pennsylvania, Philadelphia, PA, USA
- Lewis Jay Kaplan
- Division of Traumatology, Emergency Surgery and Surgical Critical Care, Perelman School of Medicine, University of Pennsylvania, 51 N. 39th Street, MOB 1, Suite 120, Philadelphia, PA, 19104, USA
- Surgical Critical Care, Corporal Michael J Crescenz VA Medical Center, 3900 Woodland Avenue, Philadelphia, PA, 19104, USA
- Christine Gaarder
- Department of Traumatology at Oslo University Hospital Ullevål (OUH U), Oslo, Norway
- Raul Coimbra
- Riverside University Health System Medical Center, Moreno Valley, CA, USA
- Loma Linda University School of Medicine, Loma Linda, CA, USA
- Comparative Effectiveness and Clinical Outcomes Research Center - CECORC, Moreno Valley, CA, USA
- Nathan John Klingensmith
- Division of Traumatology, Emergency Surgery and Surgical Critical Care, Perelman School of Medicine, University of Pennsylvania, 51 N. 39th Street, MOB 1, Suite 120, Philadelphia, PA, 19104, USA
- Hayato Kurihara
- State University of Milan, Milan, Italy
- Emergency Surgery Unit, Ospedale Policlinico di Milano, Milan, Italy
- Mauro Zago
- General & Emergency Surgery Division, A. Manzoni Hospital, ASST, Lecco, Lombardy, Italy
- Shahin Mohseni
- Department of Surgery, Sheikh Shakhbout Medical City (SSMC), Abu Dhabi, United Arab Emirates
- Division of Trauma and Emergency Surgery, Department of Surgery, Orebro University Hospital, 701 85, Orebro, Sweden
- Faculty of School of Medical Sciences, Orebro University, 702 81, Orebro, Sweden
- Michael Sugrue
- Letterkenny Hospital and Galway University, Letterkenny, Ireland
- Matti Tolonen
- Emergency Surgery, Meilahti Tower Hospital, HUS Helsinki University Hospital, Haartmaninkatu 4, PO Box 340, 00029, Helsinki, Finland
- Jonathan Tilsed
- Hull Royal Infirmary, Anlaby Road, HU3 2JZ, Hull, England, UK
- Frank Hildebrand
- Department of Orthopaedics, Trauma and Reconstructive Surgery, University Hospital RWTH Aachen, Aachen, Germany
- Ingo Marzi
- Department of Trauma, Hand and Reconstructive Surgery, University Hospital Frankfurt, Frankfurt, Germany
3. Anderson HL, Abdulla L, Balmer DF, Govaerts M, Busari JO. Inequity is woven into the fabric: a discourse analysis of assessment in pediatric residency training. Adv Health Sci Educ Theory Pract 2024; 29:199-216. PMID: 37351698; DOI: 10.1007/s10459-023-10260-9. Received 8 December 2022; accepted 18 June 2023.
Abstract
Intrinsic inequity in assessment refers to sources of harmful discrimination inherent in the design of assessment tools and systems. This study seeks to understand intrinsic inequity in assessment systems by studying assessment policies and associated procedures in residency training, using general pediatrics as a discourse case study. Foucauldian discourse analysis (FDA) was conducted on assessment policy and procedure documents. Two authors independently prepared structured analytic notes using guiding questions. Documents and respective analytic notes were subsequently reviewed independently by all authors. Each author prepared further unstructured analytic notes on the documents' discourse. The authors then compared notes and constructed truth statements (i.e., interpretations of what the discourse establishes as true about the construct under study) and sub-strands (i.e., themes) that were repeated and legitimized across the documents via iterative discussion. Based on analysis, the authors constructed two truth statements. These truth statements, "good assessment is equitable assessment," and "everyone is responsible for inequity," conceptualized inequity in assessment as an isolated or individual-level aberration in an otherwise effective or neutral system. Closer examination of the truth statements and sub-strands in the discourse presented an alternative view, suggesting that inequity may in fact not be an aberration but rather an inherent feature of assessment systems.
Affiliation(s)
- Hannah L Anderson
- Department of Pediatrics, Children's Hospital of Philadelphia, Philadelphia, USA
- School of Health Professions Education (SHE), Maastricht University, Maastricht, The Netherlands
- Layla Abdulla
- Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Dorene F Balmer
- Director of Research On Education, Perelman School of Medicine, Children's Hospital of Philadelphia, University of Pennsylvania, Philadelphia, PA, USA
- Marjan Govaerts
- Department of Educational Development, School of Health Professions Education (SHE), Maastricht University, Maastricht, The Netherlands
- Jamiu O Busari
- Faculty of Health, Medicine and Life Sciences, Maastricht University, Maastricht, The Netherlands
4. Czaja AS, Mink RB, Herman BE, Weiss P, Turner DA, Curran ML, Stafford DEJ, Myers AL, Langhan ML. Exploring Factors for Implementation of EPAs in Pediatric Subspecialty Fellowships: A Qualitative Study of Program Directors. J Med Educ Curric Dev 2024; 11:23821205231225011. PMID: 38268726; PMCID: PMC10807342; DOI: 10.1177/23821205231225011. Received 13 August 2023; accepted 19 December 2023.
Abstract
OBJECTIVE To understand fellowship program directors' (FPDs) perspectives on facilitators and barriers to using entrustable professional activities (EPAs) in pediatric subspecialty training. METHODS We performed a qualitative study of FPDs, balancing subspecialty, program size, geographic region, and current use of EPAs. A study coordinator conducted one-on-one interviews using a semistructured approach to explore EPA use or nonuse and the factors supporting or preventing their use. Investigators independently coded transcribed interviews using an inductive approach and the constant comparative method. Group discussion informed code structure development and refinement. Iterative data collection and analysis continued until theoretical sufficiency was achieved, yielding a thematic analysis. RESULTS Twenty-eight FPDs representing 11 pediatric subspecialties were interviewed, of whom 16 (57%) reported current EPA use. Five major themes emerged: (1) facilitators, including the intuitive nature and simple wording of EPAs; (2) barriers, such as workload burden and the lack of a regulatory requirement; (3) variable knowledge and training surrounding EPAs, leading to differing levels of understanding; (4) limited current use of EPAs, even among self-reported users; and (5) the complementary nature of EPAs and milestones. FPDs acknowledged the differing strengths of both EPAs and milestones but sought additional knowledge about the value added by EPAs for assessing trainees, including the impact on outcomes. CONCLUSIONS The identified themes can inform effective and meaningful EPA implementation strategies: supporting and educating FPDs, ongoing assessment of the value of EPAs in training, and practical integration with current workflow. Generating additional data and engaging stakeholders are critical for successful implementation in the pediatric subspecialties.
Affiliation(s)
- Angela S. Czaja
- Department of Pediatrics, Section of Critical Care, University of Colorado School of Medicine, Aurora, CO, USA
- Richard B. Mink
- David Geffen School of Medicine at UCLA, Los Angeles, CA, USA
- Department of Pediatrics, Harbor-UCLA Medical Center and The Lundquist Institute for Biomedical Innovation at Harbor-UCLA Medical Center, Torrance, CA, USA
- Bruce E. Herman
- Department of Pediatrics, University of Utah School of Medicine, Salt Lake City, UT, USA
- Pnina Weiss
- Department of Pediatrics, Section of Pulmonology, Allergy, Immunology and Sleep Medicine, Yale University School of Medicine, New Haven, CT, USA
- Megan L. Curran
- Department of Pediatrics, Section of Rheumatology, University of Colorado School of Medicine, Aurora, CO, USA
- Diane E. J. Stafford
- Department of Pediatrics, Division of Endocrinology, Stanford University School of Medicine, Palo Alto, CA, USA
- Angela L. Myers
- Department of Pediatrics, Children's Mercy, University of Missouri-Kansas City School of Medicine, Kansas City, MO, USA
- Melissa L. Langhan
- Department of Pediatrics and Emergency Medicine, Section of Emergency Medicine, Yale University School of Medicine, New Haven, CT, USA
5. Mitchell EC, Ott M, Ross D, Grant A. Development of a Tool to Assess Surgical Resident Competence On-Call: The Western University Call Assessment Tool (WUCAT). J Surg Educ 2024; 81:106-114. PMID: 38008642; DOI: 10.1016/j.jsurg.2023.10.001. Received 10 July 2023; revised 13 September 2023; accepted 2 October 2023.
Abstract
BACKGROUND A central tenet of competency-based medical education is the formative assessment of trainees. There are currently no assessments designed to examine resident competence on-call, despite the on-call period being a significant component of residency, characterized by less direct supervision than daytime work. The purpose of this study was to design a formative on-call assessment tool and collect validity evidence for its use. METHODS Nominal group technique was used to identify critical elements of surgical resident competence on-call to inform tool development. The tool was piloted over six months in the Division of Plastic & Reconstructive Surgery at our institution. Quantitative and qualitative evidence was collected to examine the tool's validity. RESULTS A ten-item tool was developed based on the consensus group results. Sixty-three assessments were completed by seven staff members on ten residents during the pilot. A generalizability study yielded a reliability coefficient of 0.67, and internal item consistency was 0.92. Scores were significantly associated with years of training. We found that the tool improved the quantity and structure of feedback given, and both residents and staff members considered it feasible and acceptable. CONCLUSIONS The Western University Call Assessment Tool (WUCAT) has multiple sources of evidence supporting its use in assessing resident competence on-call.
Affiliation(s)
- Eric C Mitchell
- Department of Surgery, Western University, London, Ontario, Canada
- Michael Ott
- Department of Surgery, Western University, London, Ontario, Canada
- Douglas Ross
- Department of Surgery, Western University, London, Ontario, Canada
- Aaron Grant
- Department of Surgery, Western University, London, Ontario, Canada
6. Thayer T. Be prudent with resources. Br Dent J 2024; 236:79. PMID: 38278880; DOI: 10.1038/s41415-024-6768-2. Received 4 December 2023; accepted 15 December 2023.
Affiliation(s)
- T Thayer
- Liverpool University Dental School and Hospital, Pembroke Place, Liverpool, L3 5PS, UK
7. Adelman MH, Deshwal H, Pradhan D. Critical Care Ultrasound Competency of Fellows and Faculty in Pulmonary and Critical Care Medicine: A Nationwide Survey. POCUS J 2023; 8:202-211. PMID: 38099164; PMCID: PMC10721306; DOI: 10.24908/pocus.v8i2.16640.
Abstract
Purpose: Competency assessment standards for critical care ultrasonography (CCUS) for Graduate Medical Education (GME) trainees in pulmonary/critical care medicine (PCCM) fellowship programs are lacking. We sought to answer the following research questions: How are PCCM fellows and teaching faculty assessed for CCUS competency? Which CCUS teaching methods are perceived as most effective by program directors (PDs) and fellows? Methods: Cross-sectional, nationwide, electronic survey of PCCM PDs and fellows in accredited GME training programs. Results: PDs and fellows both reported the highest rates of fellow competence in using CCUS for invasive procedural guidance, and lower rates for assessment of deep vein thrombosis and abdominal organs. PDs reported never assessing fellows (54%) or teaching faculty (90%) for CCUS competency. PDs and fellows perceived hands-on workshops and directly supervised CCUS exams as more effective learning methods than unsupervised CCUS archival with subsequent review and self-directed learning. Conclusions: There is substantial variation in CCUS competency assessment among PCCM fellows and teaching faculty nationwide. The majority of training programs do not formally assess fellows or teaching faculty for CCUS competence. Guidelines are needed to formulate standardized competency assessment tools for PCCM fellowship programs.
Affiliation(s)
- Mark H Adelman
- Division of Pulmonary, Critical Care & Sleep Medicine, New York University Grossman School of Medicine, New York, NY, USA
- Himanshu Deshwal
- Division of Pulmonary, Critical Care, and Sleep Medicine, West Virginia University Health Sciences Center, Morgantown, WV, USA
- Deepak Pradhan
- Division of Pulmonary, Critical Care, and Sleep Medicine, West Virginia University Health Sciences Center, Morgantown, WV, USA
8. Dickie J, Sherriff A, McEwan M, Bell A, Naudi K. Longitudinal assessment of undergraduate dental students: Building evidence for validity. Eur J Dent Educ 2023; 27:1136-1150. PMID: 37141495; DOI: 10.1111/eje.12908. Received 21 December 2021; revised 27 October 2022; accepted 31 March 2023.
Abstract
PURPOSE To investigate the content validity, criterion validity, and reliability of longitudinal clinical assessment of undergraduate dental students' clinical competence by determining patterns of clinical performance and comparing them with validated standalone undergraduate examinations. METHODS Group-based trajectory models tracking students' clinical performance over time were produced from LIFTUPP© data for three dental student cohorts (2017-19; n = 235) using threshold models based on the Bayesian information criterion. Content validity was investigated using LIFTUPP© performance indicator 4 as the threshold for competence. Criterion validity was investigated using performance indicator 5 to create distinct trajectories of performance before linking and cross-tabulating trajectory group memberships with a 'top 20%' performance in the final Bachelor of Dental Surgery (BDS) examinations. Reliability was calculated using Cronbach's alpha. RESULTS Threshold 4 models placed all students in all three cohorts on a single upward trajectory, indicating clear progression in competence over the three clinical BDS years. A threshold 5 model produced two distinct trajectories, and in each cohort a 'better performing' trajectory was identified. Students allocated to the 'better performing' trajectories scored higher on average in the final examinations for cohort 2 (29% vs. 18% (BDS4); 33% vs. 15% (BDS5)) and cohort 3 (19% vs. 16% (BDS4); 21% vs. 16% (BDS5)). Reliability of the undergraduate examinations was high for all three cohorts (≥0.8815) and did not change appreciably when longitudinal assessment was included. CONCLUSIONS There is some evidence that longitudinal data have a degree of content and criterion validity for assessing the development of clinical competence in undergraduate dental students, which should increase confidence in decisions based on these data. The findings also provide a good foundation for subsequent research.
Affiliation(s)
- Jamie Dickie
- University of Glasgow School of Medicine, Dentistry & Nursing, College of Medical, Veterinary & Life Sciences, Glasgow, UK
- Andrea Sherriff
- University of Glasgow School of Medicine, Dentistry & Nursing, College of Medical, Veterinary & Life Sciences, Glasgow, UK
- Michael McEwan
- University of Glasgow, Learning Enhancement and Academic Development Service, Glasgow, UK
- Aileen Bell
- University of Glasgow School of Medicine, Dentistry & Nursing, College of Medical, Veterinary & Life Sciences, Glasgow, UK
- Kurt Naudi
- University of Glasgow School of Medicine, Dentistry & Nursing, College of Medical, Veterinary & Life Sciences, Glasgow, UK
9. Elmanaseer WR, Al-Omoush SA, Alamoush RA, Abu Zaghlan R, Alsoleihat F. Dental Students' Perception and Self-Perceived Confidence Level in Key Dental Procedures for General Practice and the Impact of Competency Implementation on Their Confidence Level, Part I (Prosthodontics and Conservative Dentistry). Int J Dent 2023; 2023:2015331. PMID: 37868108; PMCID: PMC10586436; DOI: 10.1155/2023/2015331. Received 21 December 2022; revised 1 August 2023; accepted 11 September 2023. Open access.
Abstract
Background Evaluating the level of dental students' competence is crucial for validating their preparedness for graduation, and confidence has a significant role in achieving competence. There are limited studies that assess the level of self-perceived confidence among final-year dental students regarding their ability to conduct key dental procedures. This study aims to assess the self-perceived confidence level of final-year dental students in performing essential dental procedures across various dental disciplines and to assess the effect of implementing competencies in the curriculum on students' self-perceived confidence by comparing two cohorts of final-year students from two different years: 2016 (Traditional Cohort) and 2019 (Competencies Cohort). Materials and Methods A questionnaire was answered by two cohorts of final-year dental students: one in 2016, before the implementation of the competency-based assessment system (group 1, n = 153), and the other in 2019, after its implementation (group 2, n = 199). The same questionnaire was used for both cohorts, and the results from the two groups were compared regarding the degree of self-perceived confidence in conducting key dental procedures. The data were analysed using SPSS; Levene's test for equality of variances and t-tests for equality of means were calculated. Results Group 1 showed significantly higher mean self-perceived confidence than group 2 in the ability to conduct seven of the 20 prosthodontics procedures studied: providing patients with cobalt-chromium (Co-Cr) removable partial dentures (RPDs) (3.77 vs. 3.56), providing patients with acrylic RPDs (3.70 vs. 3.23), treatment planning for partially edentulous patients (3.83 vs. 3.34), giving oral hygiene instructions (OHIs) for denture patients (4.17 vs. 3.95), dealing with complete denture postinsertion complaints (3.97 vs. 3.76), giving postinsertion instructions for removable prosthesis cases (4.12 vs. 3.82), and providing patients with immediate dentures (2.67 vs. 2.32). The same applies to six of the 16 conservative dentistry procedures: placing anterior composite (4.41 vs. 4.12), placing posterior composite (4.43 vs. 3.88), placing posterior amalgam (4.29 vs. 4.02), placing a matrix band for Class II restorations (4.24 vs. 3.71), placing a prefabricated post (3.34 vs. 2.88), and placing a fiber post (3.45 vs. 3.34). In contrast, group 2 showed higher mean self-perceived confidence than group 1 in only two conservative dentistry procedures: onlay restorations (2.18 vs. 2.76) and inlay restorations (2.22 vs. 2.75). No significant differences in mean self-perceived confidence were found between the two groups in the remaining 21 procedures studied. Conclusions This study has shown that final-year dental students have high self-perceived confidence in performing simple dental procedures but less confidence in more complex ones. Notably, students' self-perceived confidence decreased after the introduction of the competency-based assessment system. Competency implementation and execution criteria may differ between schools, which may have an impact on final outcomes. Hence, there is a need for regular evaluation of the competencies being assessed to maintain an up-to-date curriculum.
Affiliation(s)
- Wijdan R. Elmanaseer
- Department of Prosthodontics, School of Dentistry, The University of Jordan, Amman 11942, Jordan
- Salah A. Al-Omoush
- Department of Prosthodontics, School of Dentistry, The University of Jordan, Amman 11942, Jordan
- Rasha A. Alamoush
- Department of Prosthodontics, School of Dentistry, The University of Jordan, Amman 11942, Jordan
- Rawan Abu Zaghlan
- Department of Restorative Dentistry, School of Dentistry, The University of Jordan, Amman 11942, Jordan
- Firas Alsoleihat
- Department of Restorative Dentistry, School of Dentistry, The University of Jordan, Amman 11942, Jordan
10. Ditoro R, Bernstein J. Student Self-assessment: Reflecting on Physician Assistant Educator's Perceptions and Current Practices in Physician Assistant Training. J Physician Assist Educ 2023; 34:209-217. PMID: 37647228; DOI: 10.1097/jpa.0000000000000520.
Abstract
PURPOSE The purpose of this study was to examine relationships between physician assistant (PA) educators' perspectives on students' self-assessment (SA) accuracy, students' use of SA education practices, and the types of abilities assessed. METHODS Using correlation analysis and a novel online survey, PA educators were asked about their perceptions of students' SA accuracy in relation to SA educational activities and assessed abilities. RESULTS A total of 308 educators responded. Most respondents used at least one type of SA activity, with feedback and practice being the most common types and comparative assessment the least common. Most respondents indicated that students self-assess noncognitive abilities more than cognitive abilities, with SA of communication skills occurring most often. Spearman's correlation coefficient was used for correlation analysis, with a significant, small correlation noted between the frequency of activities and educators' overall perceptions of students' SA accuracy (r = 0.15, P = .02) and of SA accuracy for cognitive abilities (r = 0.17, P = .02). Educators' perceptions of students' SA accuracy were positively skewed, regardless of student training level (i.e., didactic and clinical training phases). A mild predictive relationship exists between the overall perception of students' SA accuracy and how frequently educators use SA activities (r = 0.29, P = .05). CONCLUSION Although respondents indicated that they used practice and feedback activities, providing instruction on how to self-assess and using comparative evaluations to calibrate SAs will improve accuracy. Further research is needed to understand why educators perceive PA students' SA abilities as more accurate, regardless of training level.
Affiliation(s)
- Rachel Ditoro
- Rachel Ditoro, EdD, MSPAS, PA-C, is an associate professor, program director, and chair, Salus University Physician Assistant Program, Elkins Park, PA, USA
- Joshua Bernstein
- Joshua Bernstein, PhD, CHES, is an associate professor, Doctor of Education in Health Professions Department, A.T. Still University, College of Health Graduate Studies, Kirksville, MO, USA
Collapse
|
11
|
Seed JD, Gauthier S, Zevin B, Hall AK, Chaplin T. Simulation vs workplace-based assessment in resuscitation: a cross-specialty descriptive analysis and comparison. CANADIAN MEDICAL EDUCATION JOURNAL 2023; 14:92-98. [PMID: 37465738 PMCID: PMC10351640 DOI: 10.36834/cmej.73692] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Subscribe] [Scholar Register] [Indexed: 07/20/2023]
Abstract
Background Simulation-based assessment can complement workplace-based assessment of rare or difficult-to-assess Entrustable Professional Activities (EPAs). We aimed to compare the use of simulation-based assessment for resuscitation-focused EPAs in three postgraduate medical training programs and describe faculty perceptions of simulation-based assessment. Methods EPA assessment scores and setting (simulation or workplace) were extracted from 2017-2020 for internal medicine, emergency medicine, and surgical foundations residents at the transition to discipline and foundations of discipline stages. A questionnaire was distributed to clinical competency committee members. Results Eleven percent of EPA assessments were simulation-based. The proportion of simulation-based assessment did not differ between programs but differed between the transition (38%) and foundations (4%) stages within surgical foundations only. Entrustment scores differed between settings in emergency medicine at the transition level only (simulation: 4.82 ± 0.60; workplace: 3.74 ± 0.93). Seventy percent of committee members (n=20) completed the questionnaire. Of those who use simulation-based assessments, 45% interpret them differently than workplace-based assessments. Respondents trust simulation for high-stakes and low-stakes assessment at rates of 73% and 100%, respectively. Conclusions The proportion of simulation-based assessment for resuscitation-focused EPAs did not differ between three postgraduate medical training programs. Interpretation of simulation-based assessment data between committee members was inconsistent. All respondents trust simulation-based assessment for low-stakes assessment, and the majority trust it for high-stakes assessment. These findings have practical implications for the integration of simulation into programs of assessment.
Affiliation(s)
- Jeremy D Seed
- Department of Emergency Medicine, Queen's University, Ontario, Canada
- Boris Zevin
- Department of Surgery, Queen's University, Ontario, Canada
- Andrew K Hall
- Department of Emergency Medicine, University of Ottawa, Ontario, Canada
- Timothy Chaplin
- Department of Emergency Medicine, Queen's University, Ontario, Canada
12
|
Castanelli D. Sociocultural learning theory and assessment for learning. MEDICAL EDUCATION 2023; 57:382-384. [PMID: 36760219 DOI: 10.1111/medu.15028] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/13/2023] [Accepted: 01/30/2023] [Indexed: 06/18/2023]
Affiliation(s)
- Damian Castanelli
- School of Clinical Sciences at Monash Health, Monash University, Clayton, Victoria, Australia
13
|
Chin M, Pack R, Cristancho S. "A whole other competence story": exploring faculty perspectives on the process of workplace-based assessment of entrustable professional activities. ADVANCES IN HEALTH SCIENCES EDUCATION : THEORY AND PRACTICE 2023; 28:369-385. [PMID: 35997910 DOI: 10.1007/s10459-022-10156-0] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/24/2022] [Accepted: 08/07/2022] [Indexed: 05/11/2023]
Abstract
The centrality of entrustable professional activities (EPAs) in competency-based medical education (CBME) is predicated on the assumption that low-stakes, high-frequency workplace-based assessments used in a programmatic approach will result in accurate and defensible judgments of competence. While there have been conversations in the literature regarding the potential of this approach, only recently has the conversation begun to explore the actual experiences of clinical faculty in this process. The purpose of this qualitative study was to explore the process of EPA assessment for faculty in everyday practice. We conducted 18 semi-structured interviews with Anesthesia faculty at a Canadian academic center. Participants were asked to describe how they engage in EPA assessment in daily practice and the factors they considered. Interviews were audio-recorded, transcribed, and analysed using the constant comparative method of grounded theory. Participants in this study perceived two sources of tension in the EPA assessment process that influenced their scoring on official forms: the potential constraints of the assessment forms and the potential consequences of their assessment outcome. This was particularly salient in circumstances of uncertainty regarding the learner's level of competence. Ultimately, EPA assessment in CBME may be experienced as higher-stakes by faculty than officially recognized due to these tensions, suggesting a layer of discomfort and burden in the process that may potentially interfere with the goal of assessment for learning. Acknowledging and understanding the nature of this burden and identifying strategies to mitigate it are critical to achieving the assessment goals of CBME.
Affiliation(s)
- Melissa Chin
- Department of Anesthesia and Perioperative Medicine, London Health Sciences Centre, Schulich School of Medicine and Dentistry, University of Western Ontario, London, ON, Canada
- Rachael Pack
- Center for Education Research and Innovation, University of Western Ontario, London, ON, Canada
- Sayra Cristancho
- Center for Education Research and Innovation, University of Western Ontario, London, ON, Canada
14
|
Yilmaz Y, Chan MK, Richardson D, Atkinson A, Bassilious E, Snell L, Chan TM. Defining new roles and competencies for administrative staff and faculty in the age of competency-based medical education. MEDICAL TEACHER 2023; 45:395-403. [PMID: 36471921 DOI: 10.1080/0142159x.2022.2136517] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/17/2023]
Abstract
PURPOSE These authors sought to define the new roles and competencies required of administrative staff and faculty in the age of CBME. METHOD A modified Delphi process was used to define the new CBME roles and competencies needed by faculty and administrative staff. We invited international experts in CBME (volunteers from the ICBME Collaborative email list), as well as faculty members and trainees identified via social media to help us determine the new competencies required of faculty and administrative staff in the CBME era. RESULTS Thirteen new roles were identified. The faculty-specific roles were: National Leader/Facilitator in CBME; Institutional/University lead for CBME; Assessment Process & Systems Designer; Local CBME Leads; CBME-specific Faculty Developers or Trainers; Competence Committee Chair; Competence Committee Faculty Member; Faculty Academic Coach/Advisor or Support Person; Frontline Assessor; Frontline Coach. The staff-specific roles were: Information Technology Lead; CBME Analytics/Data Support; Competence Committee Administrative Assistant. CONCLUSIONS The authors present a new set of faculty and staff roles that are relevant to the CBME context. While some of these new roles may be incorporated into existing roles, it may be prudent to examine how best to ensure that all of them are supported within all CBME contexts in some manner.
Affiliation(s)
- Yusuf Yilmaz
- McMaster Education Research, Innovation, and Theory (MERIT), and Office of Continuing Professional Development, Faculty of Health Sciences, McMaster University, Hamilton, Canada
- Department of Medical Education, Faculty of Medicine, Ege University, Izmir, Turkey
- Ming-Ka Chan
- Department of Pediatrics and Child Health, University of Manitoba, Winnipeg, Canada
- Denyse Richardson
- Department of Medicine, Dalla Lana School of Public Health, University of Toronto, Toronto, Canada
- Adelle Atkinson
- Department of Pediatrics, University of Toronto, Toronto, Canada
- Ereny Bassilious
- Department of Pediatrics, Faculty of Health Sciences, McMaster University, Hamilton, Canada
- Linda Snell
- Medicine and Health Sciences Education, Faculty of Medicine and Health Sciences, McGill University, Montreal, Canada
- Teresa M Chan
- McMaster Education Research, Innovation, and Theory (MERIT), and Office of Continuing Professional Development, Faculty of Health Sciences, McMaster University, Hamilton, Canada
- Divisions of Emergency Medicine, Department of Medicine, McMaster University, Hamilton, Canada
15
|
Otaki F, Gholami M, Fawad I, Akbar A, Banerjee Y. Students' Perception of Formative Assessment as an Instructional Tool in Competency-Based Medical Education: Proposal for a Proof-of-Concept Study. JMIR Res Protoc 2023; 12:e41626. [PMID: 36939831 PMCID: PMC10131604 DOI: 10.2196/41626] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/02/2022] [Revised: 02/19/2023] [Accepted: 02/26/2023] [Indexed: 03/02/2023] Open
Abstract
BACKGROUND In competency-based medical education (CBME), "Assessment for learning" or "Formative Assessment" (FA) plays a key role in augmenting student learning. FAs help students measure their progress over time, enabling them to proactively improve their performance in summative assessments. FAs also encourage students to learn in a way that addresses their knowledge gaps and gaps in their conceptualization of the subject matter. The effectiveness of an FA as a learning and development instrument relies on the degree of student involvement in the corresponding educational intervention's design and implementation. The extent of students' engagement in FA can be evaluated by appraising their perception of the educational intervention itself. OBJECTIVE This proof-of-concept study aims to develop a systemic understanding of a Formative Assessment as an Instructional Tool (FAIS) implemented in a biochemistry course in the Basic Medical Sciences component of an undergraduate-entry CBME program. METHODS The educational intervention in question is an FAIS implemented in a biochemistry course in the first semester of a 6-year bachelor of medicine, bachelor of surgery program. When developing the FAIS, each area of knowledge, skills, and attitudes was considered. Assessment formats were developed per Miller's learning pyramid. This multiphase study relies on a convergent mixed methods design, in which qualitative and quantitative data are independently collected and analyzed. Thereafter, the outputs of the analyses are systematically merged using a joint display analysis process. Qualitative data are collected through a focus group session that captures the students' perception of the FAIS. Data collection, integral to this focus group session, is exploratory. The inductive qualitative data analysis follows Braun and Clarke's 6-step framework. 
The quantitative component of this study revolves around investigating the effect of the FAIS on the course's summative assessment. The summative assessment performance of the 71 students, enrolled in the FAIS cohort, will be compared to that of the students in the non-FAIS cohort. The total duration of the proposed multiphase research study is 6 months. RESULTS This proposed multiphase study is expected to showcase, from a systemic perspective, the effectiveness of the respective educational intervention. It will shed light on the participating students' attitudes in relation to the usefulness of FA in achieving competency goals and in fostering self-directed learning. The proposed study could also uncover the hypothesized association between the FA intervention and enhanced performance in summative assessments. CONCLUSIONS Our findings will generate evidence regarding the application of FAs, which can be leveraged by other medical educators in contexts similar to those under investigation. INTERNATIONAL REGISTERED REPORT IDENTIFIER (IRRID) DERR1-10.2196/41626.
Affiliation(s)
- Farah Otaki
- College of Medicine, Mohammed Bin Rashid University of Medicine and Health Sciences, Dubai, United Arab Emirates
- Strategy and Institutional Excellence, Mohammed Bin Rashid University of Medicine and Health Sciences, Dubai, United Arab Emirates
- Mandana Gholami
- College of Medicine, Mohammed Bin Rashid University of Medicine and Health Sciences, Dubai, United Arab Emirates
- Iman Fawad
- College of Medicine, Mohammed Bin Rashid University of Medicine and Health Sciences, Dubai, United Arab Emirates
- Anjum Akbar
- College of Medicine, Mohammed Bin Rashid University of Medicine and Health Sciences, Dubai, United Arab Emirates
- Yajnavalka Banerjee
- College of Medicine, Mohammed Bin Rashid University of Medicine and Health Sciences, Dubai, United Arab Emirates
- Centre of Medical Education, University of Dundee, Dundee, United Kingdom
16
|
Warren AE, Tham E, Abeysekera J. Some Things Change, Some Things Stay the Same: Trends in Canadian Education in Paediatric Cardiology and the Cardiac Sciences. CJC PEDIATRIC AND CONGENITAL HEART DISEASE 2022; 1:232-240. [PMID: 37969433 PMCID: PMC10642121 DOI: 10.1016/j.cjcpc.2022.08.004] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/29/2022] [Accepted: 08/30/2022] [Indexed: 11/17/2023]
Abstract
Education in paediatric cardiology has evolved along with clinical care. The availability and application of new technologies in education, in particular, have had a significant impact. Artificial intelligence; virtual, augmented, and mixed reality learning tools; and gamification of learning have all resulted in new opportunities for today's trainees compared with those of the past. A new training model is also being used. Though currently focused on residency education, competency-based medical education is also being applied to undergraduate education in some Canadian medical schools. Competency-based medical education offers a more transparent relationship between education and physicians' social contract with society. It provides greater accountability for programmes and learners to teach and learn the skills required to function as competent specialists. However, it has not come without challenges. Coincident with the application of this model for learners, there has been increased educational accountability for physicians in practice and for the institutions training them. Despite these changes, some things have remained the same. On the positive side, the importance of good clinical teachers to effective learning remains constant. Unfortunately, the mistreatment of learners within our education system also remains and is perhaps the most important challenge facing medical education in Canada today. Learning to be better teachers and learner advocates is an important goal for all of those involved in educating Canadian medical learners.
Affiliation(s)
- Andrew E. Warren
- IWK Health Centre, Halifax, Nova Scotia, Canada
- Dalhousie University, Halifax, Nova Scotia, Canada
- Edythe Tham
- Stollery Children’s Hospital, Edmonton, Alberta, Canada
- University of Alberta, Edmonton, Alberta, Canada
- Jayani Abeysekera
- IWK Health Centre, Halifax, Nova Scotia, Canada
- Dalhousie University, Halifax, Nova Scotia, Canada
17
|
Lee ASO, Donoff C, Ross S. Using Learning Analytics to Examine Differences in Assessment Forms From Continuous Versus Episodic Supervisors of Family Medicine Residents. J Grad Med Educ 2022; 14:606-612. [PMID: 36274777 PMCID: PMC9580309 DOI: 10.4300/jgme-d-21-00832.1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 08/24/2021] [Revised: 01/29/2022] [Accepted: 06/28/2022] [Indexed: 11/06/2022] Open
Abstract
BACKGROUND It is assumed that there is a need for continuity of supervision within competency-based medical education, despite most evidence coming from the undergraduate rather than the graduate medical education (GME) context. This evidence gap must be addressed to justify the time and effort needed to redesign GME programs to support continuity of supervision. OBJECTIVE To examine differences in the assessment behaviors of continuous supervisors (CS) versus episodic supervisors (ES), using completed formative assessment forms, FieldNotes, as a proxy. METHODS FieldNotes entered by CS and ES for family medicine residents (N=186) across 3 outpatient teaching sites over 3 academic years (2015-2016, 2016-2017, 2017-2018) were examined using 2-sample proportion z-tests to determine differences on 3 FieldNote elements: competency (Sentinel Habit [SH]), Clinical Domain (CD), and Progress Level (PL). RESULTS Sixty-nine percent (6104 of 8909) of total FieldNotes were analyzed. Higher proportions of CS-entered FieldNotes indicated SH3 (Managing patients with best practices), z=-3.631, P<.0001; CD2 (Care of adults), z=-8.659, P<.0001; CD3 (Care of the elderly), z=-4.592, P<.0001; and PL3 (Carry on, got it), z=-4.482, P<.0001. Higher proportions of ES-entered FieldNotes indicated SH7 (Communication skills), z=4.268, P<.0001; SH8 (Helping others learn), z=20.136, P<.0001; CD1 (Doctor-patient relationship/ethics), z=14.888, P<.0001; CD9 (Not applicable), z=7.180, P<.0001; and PL2 (In progress), z=5.117, P<.0001. CONCLUSIONS The type of supervisory relationship affects assessment: supervisors vary in which competencies they attend to, which contexts or populations they include, and which progress levels they choose.
Affiliation(s)
- Ann S. O. Lee, MD, MEd, is Assistant Professor, Department of Family Medicine, University of Alberta, Edmonton, Alberta, Canada
- Christopher Donoff, MSc, is Junior Data Scientist, Blackline Safety, Calgary, Alberta, Canada
- Shelley Ross, PhD, is Professor, Department of Family Medicine, University of Alberta, Edmonton, Alberta, Canada
18
|
Landreville JM, Wood TJ, Frank JR, Cheung WJ. Does direct observation influence the quality of workplace-based assessment documentation? AEM EDUCATION AND TRAINING 2022; 6:e10781. [PMID: 35903424 PMCID: PMC9305723 DOI: 10.1002/aet2.10781] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/14/2022] [Revised: 06/07/2022] [Accepted: 06/08/2022] [Indexed: 05/30/2023]
Abstract
BACKGROUND A key component of competency-based medical education (CBME) is direct observation of trainees. Direct observation has been emphasized as integral to workplace-based assessment (WBA) yet previously identified challenges may limit its successful implementation. Given these challenges, it is imperative to fully understand the value of direct observation within a CBME program of assessment. Specifically, it is not known whether the quality of WBA documentation is influenced by observation type (direct or indirect). METHODS The objective of this study was to determine the influence of observation type (direct or indirect) on quality of entrustable professional activity (EPA) assessment documentation within a CBME program. EPA assessments were scored by four raters using the Quality of Assessment for Learning (QuAL) instrument, a previously published three-item quantitative measure of the quality of written comments associated with a single clinical performance score. An analysis of variance was performed to compare mean QuAL scores among the direct and indirect observation groups. The reliability of the QuAL instrument for EPA assessments was calculated using a generalizability analysis. RESULTS A total of 244 EPA assessments (122 direct observation, 122 indirect observation) were rated for quality using the QuAL instrument. No difference in mean QuAL score was identified between the direct and indirect observation groups (p = 0.17). The reliability of the QuAL instrument for EPA assessments was 0.84. CONCLUSIONS Observation type (direct or indirect) did not influence the quality of EPA assessment documentation. This finding raises the question of how direct and indirect observation truly differ and the implications for meta-raters such as competence committees responsible for making judgments related to trainee promotion.
Affiliation(s)
- Timothy J. Wood
- Department of Innovation in Medical Education, University of Ottawa, Ottawa, Ontario, Canada
- Jason R. Frank
- Department of Emergency Medicine, University of Ottawa, Ottawa, Ontario, Canada
- Warren J. Cheung
- Department of Emergency Medicine, University of Ottawa, Ottawa, Ontario, Canada
19
|
Competency-Based Assessment in Experiential Learning in Undergraduate Pharmacy Programmes: Qualitative Exploration of Facilitators' Views and Needs (ACTp Study). PHARMACY 2022; 10:pharmacy10040090. [PMID: 35893728 PMCID: PMC9332294 DOI: 10.3390/pharmacy10040090] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/31/2022] [Revised: 07/19/2022] [Accepted: 07/21/2022] [Indexed: 02/04/2023] Open
Abstract
Newly registered pharmacists will need to possess higher-level competencies and, in Great Britain, there is an expectation that assessments are undertaken during experiential learning (EL). The aim of this study was to explore the perceptions and educational needs of practice-based EL facilitators of student pharmacists undertaking competency-based assessments during EL. Semi-structured one-on-one interviews were conducted with EL facilitators working in community, hospital, and primary-care pharmacies. Data were thematically analysed. Fifteen facilitators were interviewed, five from each setting. There was general support for this role, but also anxiety due to a lack of knowledge about assessments and their repercussions on students. Benefits were that students would receive real-time feedback from workplace-based practitioners and that facilitators would benefit from self-development. Challenges included additional workload and a lack of consistency in marking. The majority agreed that clinical, professional, and communication skills could be assessed; however, a consensus was not reached regarding the tools, methods, and grading of assessments. The need for training and support was highlighted. A co-design method was proposed to ensure that the assessment methods and processes are accepted by all stakeholders. Training and resources should be tailored to the needs of facilitators.
20
|
Blomberg BA, Chen F, Beck Dallaghan GL, MacDonald J, Wilson L. Development and Evaluation of a Faculty Teaching Boot Camp Before and During the COVID-19 Pandemic. Cureus 2022; 14:e26237. [PMID: 35911319 PMCID: PMC9312938 DOI: 10.7759/cureus.26237] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 06/23/2022] [Indexed: 11/25/2022] Open
Abstract
Introduction Medical faculty often assume teaching responsibilities without formal training in teaching skills. The purpose of this study was to design, implement, and evaluate a boot camp workshop training faculty in basic teaching competencies. We also describe the transition to a virtual format necessitated by the COVID-19 pandemic. Methods The workshop content was derived from a needs assessment survey and discussion with content experts. Four main content areas were identified: setting expectations, giving feedback, evaluating learners, and teaching in specific settings (outpatient clinics, inpatient wards, procedures/surgery, and small groups). The initial boot camp was a four-hour in-person event. The following year, the boot camp was offered via online videoconference. We used a pre-post survey to assess participant reaction and knowledge acquisition from session content. Results A total of 30 local faculty attended the 2020 in-person boot camp, while 105 faculty from across the state attended the 2021 online boot camp. Statistically significant increases in post-test knowledge scores were identified for two sessions in the 2020 boot camp and four sessions in 2021. Participants rated both boot camps favorably, with no significant difference between the in-person and online presentations for most ratings, but were less satisfied with networking opportunities in the online boot camp. Discussion We describe an effective faculty development boot camp teaching core competencies for medical clinician-educators. We were able to leverage the online teleconferencing platform to deliver the content to a larger number of preceptors at distant sites without sacrificing participant satisfaction or improvement in knowledge scores. The online model allowed busy clinicians to participate while multitasking. Comments also highlighted the importance of having an engaged moderator during the online event. 
Conclusions Many medical schools utilize preceptors in distant locations. We demonstrated the feasibility of reaching a much larger and geographically widespread group of clinical preceptors using a virtual format while still showing improvement in knowledge scores relating to workshop content. For future faculty development, we propose that hybrid models with both in-person and virtual components will be effective in meeting the needs of a geographically distributed faculty.
21
|
Do Resident Archetypes Influence the Functioning of Programs of Assessment? EDUCATION SCIENCES 2022. [DOI: 10.3390/educsci12050293] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/01/2023]
Abstract
While most case studies consider how programs of assessment may influence residents’ achievement, we engaged in a qualitative, multiple case study to model how resident engagement and performance can reciprocally influence the program of assessment. We conducted virtual focus groups with program leaders from four residency training programs from different disciplines (internal medicine, emergency medicine, neurology, and rheumatology) and institutions. We facilitated discussion with live screen-sharing to (1) improve upon a previously-derived model of programmatic assessment and (2) explore how different resident archetypes (sample profiles) may influence their program of assessment. Participants agreed that differences in resident engagement and performance can influence their programs of assessment in some (mal)adaptive ways. For residents who are disengaged and weakly performing (of which there are a few), significantly more time is spent to make sense of problematic evidence, arrive at a decision, and generate recommendations. Whereas for residents who are engaged and performing strongly (the vast majority), significantly less effort is thought to be spent on discussion and formalized recommendations. These findings motivate us to fulfill the potential of programmatic assessment by more intentionally and strategically challenging those who are engaged and strongly performing, and by anticipating ways that weakly performing residents may strain existing processes.
22
|
Ross S, Hamza D, Zulla R, Stasiuk S, Nichols D. Development of and Preliminary Validity Evidence for the EFeCT Feedback Scoring Tool. J Grad Med Educ 2022; 14:71-79. [PMID: 35222824 PMCID: PMC8848874 DOI: 10.4300/jgme-d-21-00602.1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 06/08/2021] [Revised: 08/31/2021] [Accepted: 11/02/2021] [Indexed: 02/03/2023] Open
Abstract
BACKGROUND Narrative feedback, like verbal feedback, is essential to learning. Regardless of form, all feedback should be of high quality. This is becoming even more important as programs incorporate narrative feedback into the constellation of evidence used for summative decision-making. Continuously improving the quality of narrative feedback requires tools for evaluating it and time to score it. A tool is needed that does not require clinical educator expertise, so that scoring can be delegated to others. OBJECTIVE To develop an evidence-based tool to evaluate the quality of documented feedback that could be used reliably by clinical educators and non-experts alike. METHODS Following a literature review to identify elements of high-quality feedback, an expert consensus panel developed the scoring tool. Messick's unified concept of construct validity guided the collection of validity evidence throughout development and piloting (2013-2020). RESULTS The Evaluation of Feedback Captured Tool (EFeCT) contains 5 categories considered to be essential elements of high-quality feedback. Preliminary validity evidence supports content, substantive, and consequential validity facets. Generalizability evidence shows that EFeCT scores assigned to feedback samples had consistent interrater reliability across 5 sessions, regardless of raters' level of medical education or clinical expertise (Session 1: n=3, ICC=0.94; Session 2: n=6, ICC=0.90; Session 3: n=5, ICC=0.91; Session 4: n=6, ICC=0.89; Session 5: n=6, ICC=0.92). CONCLUSIONS There is preliminary validity evidence for the EFeCT as a useful tool for scoring the quality of documented feedback captured on assessment forms. Generalizability evidence indicated comparable EFeCT scores by raters regardless of level of expertise.
Affiliation(s)
- Shelley Ross, PhD, is Professor, Department of Family Medicine, University of Alberta, Edmonton, AB, Canada
- Deena Hamza, PhD, is Competency-Based Medical Education Evaluation Lead for Postgraduate Medical Education, University of Alberta, Edmonton, AB, Canada
- Rosslynn Zulla, PhD, is a Specialist/Advisor, Faculty of Social Work, University of Calgary, AB, Canada
- Samantha Stasiuk, MD, MHPE, is Clinical Assistant Professor, Department of Family Practice, University of British Columbia, BC, Canada
- Darren Nichols, MD, is Associate Professor, Department of Family Medicine, University of Alberta, Edmonton, AB, Canada
23
|
Alrehaily A, Alharbi N, Zaini R, AlRumayyan A. Perspectives of the Key Stakeholders of the Alignment and Integration of the SaudiMEDs Framework into the Saudi Medical Licensure Examination: A Qualitative Study. ADVANCES IN MEDICAL EDUCATION AND PRACTICE 2022; 13:59-69. [PMID: 35046748 PMCID: PMC8763195 DOI: 10.2147/amep.s339147] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/13/2021] [Accepted: 12/16/2021] [Indexed: 05/28/2023]
Abstract
PURPOSE The purpose of the Saudi Medical Education Directives Framework (SaudiMEDs) is to assure the essential level of competencies for medical graduates, which should be reflected in the Saudi Medical Licensure Examination (SMLE). This study explored the opinions of key stakeholders of the alignment and integration of the SMLE's blueprint and contents with the SaudiMEDs competency framework's themes and domains. PARTICIPANTS AND METHODS This was a qualitative case study, using a purposive sampling technique. Ten participants participated in the study representing the main stakeholders. The participants were four deans, an assistant dean, two residents, and three interns of various colleges of medicine (COM). In-depth interviews were conducted through a semi-structured format, either online or in-person. The interviews were recorded, transcribed verbatim and analyzed according to the general guidelines of qualitative content analysis. RESULTS Four major themes emerged from the data, including the current alignment practices of the COM, competencies enhanced through preparing according to the SMLE, the SaudiMEDs representation in the SMLE, and finally a roadmap to achieve optimum alignment between the SaudiMEDs and the SMLE. CONCLUSION The participants were knowledgeable about the SaudiMEDs and perceived the SMLE blueprint and contents to be partially aligned with the themes and domains of the SaudiMEDs competency framework. The responses suggested that additional effort is required to improve the current alignment to assess the competencies of COM graduates appropriately.
Affiliation(s)
- Ali Alrehaily
- Department of Internal Medicine, Security Forces Hospital, Riyadh, Saudi Arabia
- Department of Medical Education, College of Medicine, King Saud Bin Abdulaziz University for Health Sciences, King Abdullah International Medical Research Center, Ministry of National Guard - Health Affairs, Riyadh, Saudi Arabia
- Nouf Alharbi
- Department of Medical Education, College of Medicine, King Saud Bin Abdulaziz University for Health Sciences, King Abdullah International Medical Research Center, Ministry of National Guard - Health Affairs, Riyadh, Saudi Arabia
- Rania Zaini
- Medical Education, College of Medicine, Umm Al-Qura University, Mecca, Saudi Arabia
- Ahmed AlRumayyan
- College of Medicine, King Saud Bin Abdulaziz University for Health Sciences, King Abdullah International Medical Research Center, Ministry of National Guard - Health Affairs, Riyadh, Saudi Arabia
24
Wilson CA, Chahine S, Davidson J, Dave S, Sener A, Rasmussen A, Saklofske DH, Wang PZT. Working Towards Competence: A Novel Application of Borderline Regression to a Task-Specific Checklist for Technical Skills in Novices. JOURNAL OF SURGICAL EDUCATION 2021; 78:2052-2062. [PMID: 34092532 DOI: 10.1016/j.jsurg.2021.05.004] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/02/2021] [Revised: 04/28/2021] [Accepted: 05/10/2021] [Indexed: 06/12/2023]
Abstract
OBJECTIVE Demonstrated competence through frequent assessment is an expected goal for progressive development in competency-based medical education curricula. The Objective Structured Assessment of Technical Skill (OSATS) is considered a valid method of formative assessment, but in few instances have standards been set for determining competence. The present study used borderline regression methods to examine standard setting of performance on a complex technical task with novices assessed using an OSATS checklist. METHODS This was a single-institution, prospective, single-arm experimental study. Participants were 58 non-medical undergraduate students with no previous surgical experience, who observed a computer-based training module on end-to-side vascular anastomosis. Subsequently, participants were provided two 20-minute training sessions, two weeks apart, during which they received expert feedback while performing the task on a low-fidelity model. After each training session, participants completed the task unaided. Sessions were recorded and later assessed by experts using an OSATS checklist. RESULTS Paired t-test analyses indicated that for both the checklist total score (t(52) = 8.05, p < 0.001) and the global rating score (t(53) = 8.15, p < 0.001), individuals performed significantly better in Phase 2. Borderline regression analyses indicated that in Phase 1 (R² = .60) and Phase 2 (R² = .75), the OSATS checklist could adequately capture variation in performance in novices. Further, the checklist could reliably classify novices at three of the five global rating performance levels. Pass rates determined by regression equations improved from Phase 1 to Phase 2 on all global rating levels. CONCLUSIONS With the increasing focus on competency-based medical education, it is imperative that training programs have the capacity to accurately assess outcomes and set minimum performance standards.
Borderline regression methods can accurately differentiate novice learners of varying performance levels before and after training on a complex technical skill task using an OSATS checklist.
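Borderline regression sets the pass mark by regressing checklist totals on global ratings and reading off the predicted score at the borderline rating. A minimal sketch of that calculation with fabricated scores (the 5-point global scale follows the abstract; the data and the choice of rating 3 as "borderline" are illustrative assumptions):

```python
import numpy as np

# Hypothetical data: each candidate has an OSATS checklist total (0-30)
# and a global rating on a 5-point scale (1 = poor ... 3 = borderline ... 5 = excellent).
global_ratings = np.array([1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4, 5, 5])
checklist_totals = np.array([8, 10, 12, 13, 15, 16, 18, 17, 22, 23, 21, 27, 28])

# Fit checklist score as a linear function of global rating.
slope, intercept = np.polyfit(global_ratings, checklist_totals, deg=1)

# The pass mark is the regression-predicted checklist score at the borderline rating.
BORDERLINE = 3
pass_mark = slope * BORDERLINE + intercept

# R^2 indicates how well the checklist captures variation in global performance.
predicted = slope * global_ratings + intercept
ss_res = np.sum((checklist_totals - predicted) ** 2)
ss_tot = np.sum((checklist_totals - checklist_totals.mean()) ** 2)
r_squared = 1 - ss_res / ss_tot

print(f"pass mark ≈ {pass_mark:.1f}, R² = {r_squared:.2f}")  # → pass mark ≈ 17.7, R² = 0.97
```

A high R², as in Phase 2 of the study above, means the regression line (and hence the derived cut score) summarizes the checklist-to-global-rating relationship well.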
Affiliation(s)
- Claire A Wilson
- Department of Surgery, Western University, London, Ontario, Canada
- Saad Chahine
- Faculty of Education, Queen's University, Kingston, Ontario, Canada
- Jacob Davidson
- Division of Pediatric Surgery, London Health Sciences Centre, London, Ontario, Canada
- Sumit Dave
- Department of Surgery and Pediatrics, Western University, London, Ontario, Canada
- Alp Sener
- Department of Surgery, Western University, London, Ontario, Canada
- Andrew Rasmussen
- Department of Surgery, Western University, London, Ontario, Canada
25
Anderson HL, Kurtz J, West DC. Implementation and Use of Workplace-Based Assessment in Clinical Learning Environments: A Scoping Review. ACADEMIC MEDICINE : JOURNAL OF THE ASSOCIATION OF AMERICAN MEDICAL COLLEGES 2021; 96:S164-S174. [PMID: 34406132 DOI: 10.1097/acm.0000000000004366] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
PURPOSE Workplace-based assessment (WBA) serves a critical role in supporting competency-based medical education (CBME) by providing assessment data to inform competency decisions and support learning. Many WBA systems have been developed, but little is known about how to effectively implement WBA. Filling this gap is important for creating suitable and beneficial assessment processes that support large-scale use of CBME. As a step toward filling this gap, the authors describe what is known about WBA implementation and use to identify knowledge gaps and future directions. METHOD The authors used Arksey and O'Malley's 6-stage scoping review framework to conduct the review, including: (1) identifying the research question; (2) identifying relevant studies; (3) study selection; (4) charting the data; (5) collating, summarizing, and reporting the results; and (6) consulting with relevant stakeholders. RESULTS In 2019-2020, the authors searched and screened 726 papers for eligibility using defined inclusion and exclusion criteria. One hundred sixty-three met inclusion criteria. The authors identified 5 themes in their analysis: (1) Many WBA tools and programs have been implemented, and barriers are common across fields and specialties; (2) Theoretical perspectives emphasize the need for data-driven implementation strategies; (3) User perceptions of WBA vary and are often dependent on implementation factors; (4) Technology solutions could provide useful tools to support WBA; and (5) Many areas of future research and innovation remain. CONCLUSIONS Knowledge of WBA as an implemented practice to support CBME remains constrained. To remove these constraints, future research should aim to generate generalizable knowledge on WBA implementation and use, address implementation factors, and investigate remaining knowledge gaps.
Affiliation(s)
- Hannah L Anderson
- H.L. Anderson is research associate, Department of Pediatrics, The Children's Hospital of Philadelphia, Philadelphia, Pennsylvania; ORCID: http://orcid.org/0000-0002-9435-1535
- Joshua Kurtz
- J. Kurtz is a first-year resident, Department of Pediatrics, The Children's Hospital of Philadelphia, Philadelphia, Pennsylvania
- Daniel C West
- D.C. West is professor of pediatrics, The Perelman School of Medicine at the University of Pennsylvania, and associate chair for education and senior director of medical education, Department of Pediatrics, The Children's Hospital of Philadelphia, Philadelphia, Pennsylvania; ORCID: http://orcid.org/0000-0002-0909-4213
26
Coertjens L, Lesterhuis M, De Winter BY, Goossens M, De Maeyer S, Michels NRM. Improving Self-Reflection Assessment Practices: Comparative Judgment as an Alternative to Rubrics. TEACHING AND LEARNING IN MEDICINE 2021; 33:525-535. [PMID: 33571014 DOI: 10.1080/10401334.2021.1877709] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/29/2020] [Revised: 12/05/2020] [Accepted: 01/17/2021] [Indexed: 06/12/2023]
Abstract
CONSTRUCT The authors aimed to investigate the utility of the comparative judgment method for assessing students' written self-reflections. BACKGROUND Medical practitioners' reflective skills are increasingly considered important and therefore included in the medical education curriculum. However, assessing students' reflective skills using rubrics does not appear to guarantee adequate inter-rater reliabilities. Recently, comparative judgment was introduced as a new method to evaluate performance assessments. This study investigates the merits and limitations of the comparative judgment method for assessing students' written self-reflections. More specifically, it examines the reliability in relation to the time spent assessing, the correlation between the scores obtained using the two methods (rubrics and comparative judgment), and raters' perceptions of the comparative judgment method. APPROACH Twenty-two self-reflections, which had previously been scored using a rubric, were assessed by a group of eight raters using comparative judgment. Two hundred comparisons were completed and a rank order was calculated. Raters' impressions were investigated using a focus group. FINDINGS Using comparative judgment, each self-reflection needed to be compared seven times with another self-reflection to reach a scale separation reliability of .55. The inter-rater reliability of rating (ICC(1, k)) using rubrics was .56. The time investment required for these reliability levels in both methods was around 24 minutes. The Kendall's tau rank correlation indicated a strong correlation between the scores obtained via both methods. Raters reported that making comparisons made them evaluate the quality of self-reflections in a more nuanced way. Time investment was, however, considered heavy, especially for the first comparisons.
Although raters appreciated that they did not have to assign a grade to each self-reflection, the fact that the method does not automatically lead to a grade or feedback was considered a downside. CONCLUSIONS First evidence was provided for the comparative judgment method as an alternative to using rubrics for assessing students' written self-reflections. Before comparative judgment can be implemented for summative assessment, more research is needed on the time investment required to ensure no contradictory feedback is given back to students. Moreover, as the comparative judgment method requires an additional standard setting exercise to obtain grades, more research is warranted on the merits and limitations of this method when a pass/fail approach is used.
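Comparative judgment systems derive the rank order by fitting a pairwise-comparison model to the judges' decisions. A minimal sketch using a Bradley-Terry model with MM updates, a common choice for this (the study does not specify its estimation method, and the win counts here are invented):

```python
import numpy as np

def bradley_terry(wins: np.ndarray, iters: int = 200) -> np.ndarray:
    """Estimate item strengths from a pairwise win-count matrix.

    wins[i, j] = number of times item i was judged better than item j.
    Uses the standard MM (minorization-maximization) update:
        w_i <- (total wins of i) / sum_j n_ij / (w_i + w_j)
    """
    n = wins.shape[0]
    w = np.ones(n)
    total = wins + wins.T  # comparisons made between each pair
    for _ in range(iters):
        for i in range(n):
            num = wins[i].sum()
            den = sum(total[i, j] / (w[i] + w[j]) for j in range(n) if j != i)
            if den > 0:
                w[i] = num / den
        w /= w.sum()  # strengths are only defined up to scale
    return w

# Hypothetical: 4 self-reflections, each pair compared a few times
wins = np.array([
    [0, 3, 4, 4],
    [1, 0, 3, 3],
    [0, 1, 0, 2],
    [0, 1, 1, 0],
])
strengths = bradley_terry(wins)
rank_order = np.argsort(-strengths)  # best self-reflection first
print(rank_order)
```

The fitted strengths give the rank order directly; as the abstract notes, turning that rank order into grades still requires a separate standard-setting step.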
Affiliation(s)
- Liesje Coertjens
- Psychological Sciences Research Institute, Université catholique de Louvain, Louvain-la-Neuve, Belgium
- Department of Educational Sciences, Faculty of Social Sciences, University of Antwerp, Antwerp, Belgium
- Marije Lesterhuis
- Department of Educational Sciences, Faculty of Social Sciences, University of Antwerp, Antwerp, Belgium
- Benedicte Y De Winter
- Skills Lab at the Faculty of Medicine and Health Sciences, University of Antwerp, Antwerp, Belgium
- Maarten Goossens
- Department of Educational Sciences, Faculty of Social Sciences, University of Antwerp, Antwerp, Belgium
- Sven De Maeyer
- Department of Educational Sciences, Faculty of Social Sciences, University of Antwerp, Antwerp, Belgium
- Nele R M Michels
- Skills Lab at the Faculty of Medicine and Health Sciences, University of Antwerp, Antwerp, Belgium
27
Cheung K, Rogoza C, Chung AD, Kwan BYM. Analyzing the Administrative Burden of Competency Based Medical Education. Can Assoc Radiol J 2021; 73:299-304. [PMID: 34449283 DOI: 10.1177/08465371211038963] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/08/2023] Open
Abstract
PURPOSE Postgraduate residency programs in Canada are transitioning to a competency-based medical education (CBME) system. Within this system, resident performance is documented through frequent assessments that provide continual feedback and guidance for resident progression. An area of concern is the perception by faculty of added administrative burden imposed by the frequent evaluations. This study investigated the time spent in the documentation and submission of required assessment forms through analysis of quantitative data from the Queen's University Diagnostic Radiology program. METHODS AND MATERIALS Data regarding time taken to complete Entrustable Professional Activities (EPA) assessments was collected from 24 full-time and part-time radiologists over a period of 18 months. This data was analyzed using SPSS to determine mean time of completion by individuals, departments, and by experience with the assessment process. RESULTS The average time taken to complete an EPA assessment form was 3 minutes and 6 seconds. Assuming 3 completed EPA assessment forms per week for each resident (n = 12) and equal distribution among all staff, this averaged out to an additional 18 minutes of administrative burden per staff member over a 4 week block. CONCLUSIONS This study investigated the perception by faculty of additional administrative burden for assessment in the CBME framework. The data provided quantitative evidence of administrative burden for the documentation and submission of assessments. The data indicated that the added administrative burden may be reasonable given the mandate for CBME implementation and the advantages of adoption for postgraduate medical education.
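The ~18-minute figure follows directly from the reported numbers; a back-of-the-envelope reproduction using the abstract's own assumptions (3 forms per resident per week, even distribution among staff):

```python
# Reproducing the abstract's arithmetic for per-faculty EPA assessment burden.
seconds_per_form = 3 * 60 + 6        # 3 min 6 s average completion time
forms_per_resident_per_week = 3      # completion rate assumed in the abstract
residents = 12
staff = 24                           # forms assumed evenly distributed among staff
weeks_per_block = 4

forms_per_block = forms_per_resident_per_week * residents * weeks_per_block  # 144
forms_per_staff = forms_per_block / staff                                    # 6 forms
minutes_per_staff = forms_per_staff * seconds_per_form / 60
print(f"{minutes_per_staff:.1f} minutes per staff member per block")  # → 18.6
```

The exact value is 18.6 minutes, which the abstract rounds down to "an additional 18 minutes" per staff member per 4-week block.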
Affiliation(s)
- Kevin Cheung
- School of Medicine, Queen's University, Kingston, Ontario, Canada
- Christina Rogoza
- Queen's University Faculty of Health Sciences, Kingston, Ontario, Canada
- Andrew D Chung
- Department of Diagnostic Radiology, Queen's University, Kingston, Ontario, Canada
28
Harnessing the power of simulation for assessment: Consensus recommendations for the use of simulation-based assessment in emergency medicine. CAN J EMERG MED 2021; 22:194-203. [PMID: 32209155 DOI: 10.1017/cem.2019.488] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
Abstract
OBJECTIVES To address the increasing demand for the use of simulation for assessment, our objective was to review the literature pertaining to simulation-based assessment and develop a set of consensus-based expert-informed recommendations on the use of simulation-based assessment as presented at the 2019 Canadian Association of Emergency Physicians (CAEP) Academic Symposium on Education. METHODS A panel of Emergency Medicine (EM) physicians from across Canada, with leadership roles in simulation and/or assessment, was formed to develop the recommendations. An initial scoping literature review was conducted to extract principles of simulation-based assessment. These principles were refined via thematic analysis, and then used to derive a set of recommendations for the use of simulation-based assessment, organized by the Consensus Framework for Good Assessment. This was reviewed and revised via a national stakeholder survey, and then the recommendations were presented and revised at the consensus conference to generate a final set of recommendations on the use of simulation-based assessment in EM. CONCLUSION We developed a set of recommendations for simulation-based assessment, using consensus-based expert-informed methods, across the domains of validity, reproducibility, feasibility, educational and catalytic effects, acceptability, and programmatic assessment. While the precise role of simulation-based assessment will be a subject of continued debate, we propose that these recommendations be used to assist educators and program leaders as they incorporate simulation-based assessment into their programs of assessment.
29
Weller JM, Coomber T, Chen Y, Castanelli DJ. Key dimensions of innovations in workplace-based assessment for postgraduate medical education: a scoping review. Br J Anaesth 2021; 127:689-703. [PMID: 34364651 DOI: 10.1016/j.bja.2021.06.038] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/21/2021] [Revised: 05/31/2021] [Accepted: 06/20/2021] [Indexed: 11/28/2022] Open
Abstract
BACKGROUND Specialist training bodies continue to devise innovative methods of gathering information on trainee workplace performance to meet the requirements of competency-based medical education. We reviewed recent innovations in workplace-based assessment (WBA) tools to identify strengths, weaknesses, and trade-offs inherent in their design and use. METHODS In this scoping review, using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines, we systematically searched databases between 2009 and 2019 for WBA tools with novel characteristics not typically seen in traditional WBAs. These included innovations in rating scales, ways of collecting information, technological innovations, ways of triggering WBAs, and approaches to compiling and using information. RESULTS We identified 30 innovative WBA tools whose characteristics could be categorised into seven dimensions: frequency of assessment, granularity (unit of performance assessed), coverage of the curriculum, rating method, initiation of the WBA, information use, and incentives. These dimensions had multiple interdependencies and trade-offs, often balancing generating assessment data with available resources. Philosophical stance on assessment also influenced WBA choice, for example prioritising trainee-centred learning (i.e. initiation of WBA and transparency of assessment data), perceptions of assessment and feedback as burdensome or beneficial, and holistic vs reductionist views on assessment of performance. CONCLUSIONS Our synthesis of the literature on innovative WBAs provides a framework for categorising tool characteristics across seven dimensions, systematically teasing apart the considerations in design and use of workplace assessments. It also draws attention to the trade-offs inherent in tool design and selection, and enables a more deliberate consideration of the tool characteristics most appropriate to the local context.
Affiliation(s)
- Jennifer M Weller
- Centre for Medical and Health Sciences Education, School of Medicine, University of Auckland, Auckland, New Zealand; Department of Anaesthesia, Auckland City Hospital, Auckland, New Zealand
- Ties Coomber
- Centre for Medical and Health Sciences Education, School of Medicine, University of Auckland, Auckland, New Zealand
- Yan Chen
- Centre for Medical and Health Sciences Education, School of Medicine, University of Auckland, Auckland, New Zealand
- Damian J Castanelli
- School of Clinical Sciences at Monash Health, Monash University, Clayton, VIC, Australia
30
Kwan BYM, Mbanwi A, Cofie N, Rogoza C, Islam O, Chung AD, Dalgarno N, Dagnone D, Wang X, Mussari B. Creating a Competency-Based Medical Education Curriculum for Canadian Diagnostic Radiology Residency (Queen’s Fundamental Innovations in Residency Education)—Part 1: Transition to Discipline and Foundation of Discipline Stages. Can Assoc Radiol J 2021; 72:372-380. [PMID: 32126802 DOI: 10.1177/0846537119894723] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/10/2023] Open
Abstract
Purpose: The Royal College of Physicians and Surgeons of Canada (RCPSC) has mandated the transition of postgraduate medical training in Canada to a competency-based medical education (CBME) model divided into 4 stages of training. As part of the Queen’s University Fundamental Innovations in Residency Education proposal, Queen’s University in Canada is the first institution to transition all of its residency programs simultaneously to this model, including Diagnostic Radiology. The objective of this report is to describe the Queen’s Diagnostic Radiology Residency Program’s implementation of a CBME curriculum. Methods: At Queen’s University, the novel curriculum was developed using the RCPSC’s competency continuum and the CanMEDS framework to create radiology-specific entrustable professional activities (EPAs) and milestones. In addition, new committees and assessment strategies were established. As of July 2015, 3 cohorts of residents (n = 9) have been enrolled in this new curriculum. Results: EPAs, milestones, and methods of evaluation for the Transition to Discipline and Foundations of Discipline stages, as well as the opportunities and challenges associated with the implementation of a competency-based curriculum in a Diagnostic Radiology Residency Program, are described. Challenges include the increased frequency of resident assessments, establishing stage-specific learner expectations, and the creation of volumetric guidelines for case reporting and procedures. Conclusions: Development of a novel CBME curriculum requires significant resources and dedicated administrative time within an academic Radiology department. This article highlights challenges and provides guidance for this process.
Affiliation(s)
- Benjamin Yin Ming Kwan
- Department of Diagnostic Radiology, Kingston Health Sciences Centre, Kingston, Ontario, Canada
- Achire Mbanwi
- Queen’s University Faculty of Health Sciences, Kingston, Ontario, Canada
- Nicholas Cofie
- Queen’s University Faculty of Health Sciences, Kingston, Ontario, Canada
- Christina Rogoza
- Queen’s University Faculty of Health Sciences, Kingston, Ontario, Canada
- Omar Islam
- Department of Diagnostic Radiology, Kingston Health Sciences Centre, Kingston, Ontario, Canada
- Andrew D. Chung
- Department of Diagnostic Radiology, Kingston Health Sciences Centre, Kingston, Ontario, Canada
- Nancy Dalgarno
- Queen’s University Faculty of Health Sciences, Kingston, Ontario, Canada
- Damon Dagnone
- Department of Emergency Medicine, Kingston Health Sciences Centre, Kingston, Ontario, Canada
- Xi Wang
- Department of Diagnostic Radiology, Kingston Health Sciences Centre, Kingston, Ontario, Canada
- Ben Mussari
- Department of Diagnostic Radiology, Kingston Health Sciences Centre, Kingston, Ontario, Canada
31
Valeriano A, Kim A, Katsoulas E, Sanfilippo A, Wang L, Rajaram A. Perspectives of Recent Graduates on Clerkship Procedural Skill Training at a Canadian Medical School: an Exploratory Study. MEDICAL SCIENCE EDUCATOR 2021; 31:1361-1367. [PMID: 34457978 PMCID: PMC8368422 DOI: 10.1007/s40670-021-01313-y] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Accepted: 05/11/2021] [Indexed: 06/13/2023]
Abstract
The implementation of competency-based medical education in Canada has presented both unique opportunities and challenges for improving undergraduate procedural skills curricula. Despite the recognized importance of procedural skills, there remains a lack of national congruency in procedural training across medical schools that must be addressed. When undertaking such curricular development, obtaining learner feedback is a crucial step that can facilitate practical changes and address disparities. The purpose of the current study is to explore the perspectives and insights of recent medical graduates surrounding the clerkship procedural skills curriculum at a Canadian medical school. Six residents from a variety of program specialties participated in a semi-structured focus group interview discussing key aspects of procedural skill training. The focus group was later transcribed and qualitatively analyzed for themes. The results highlight barriers to competency-based procedural skill training, including time constraints, difficulty obtaining required evaluations, and the need for students to self-advocate for learning opportunities. Participants noted few opportunities to practice nasogastric tube insertion and casting in particular. Recommendations for curricular improvement are discussed, including options for curricular remediation and resident perspectives on which procedural skills undergraduate trainees should achieve competency in by graduation.
Affiliation(s)
- Andrew Kim
- School of Medicine, Queen’s University, Kingston, ON, Canada
- Anthony Sanfilippo
- School of Medicine, Queen’s University, Kingston, ON, Canada
- Department of Medicine, Queen’s University, Kingston, ON, Canada
- Louie Wang
- School of Medicine, Queen’s University, Kingston, ON, Canada
- Department of Anesthesiology and Perioperative Medicine, Queen’s University, Kingston, ON, Canada
- Akshay Rajaram
- Department of Family Medicine, Queen’s University, Kingston, ON, Canada
32
Woodworth GE, Marty AP, Tanaka PP, Ambardekar AP, Chen F, Duncan MJ, Fromer IR, Hallman MR, Klesius LL, Ladlie BL, Mitchell SA, Miller Juve AK, McGrath BJ, Shepler JA, Sims C, Spofford CM, Van Cleve W, Maniker RB. Development and Pilot Testing of Entrustable Professional Activities for US Anesthesiology Residency Training. Anesth Analg 2021; 132:1579-1591. [PMID: 33661789 DOI: 10.1213/ane.0000000000005434] [Citation(s) in RCA: 17] [Impact Index Per Article: 5.7] [Reference Citation Analysis] [Abstract] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/05/2022]
Abstract
BACKGROUND Modern medical education requires frequent competency assessment. The Accreditation Council for Graduate Medical Education (ACGME) provides a descriptive framework of competencies and milestones but does not provide standardized instruments to assess and track trainee competency over time. Entrustable professional activities (EPAs) represent a workplace-based method to assess the achievement of competency milestones at the point-of-care that can be applied to anesthesiology training in the United States. METHODS Experts in education and competency assessment were recruited to participate in a 6-step process using a modified Delphi method with iterative rounds to reach consensus on an entrustment scale, a list of EPAs and procedural skills, detailed definitions for each EPA, a mapping of the EPAs to the ACGME milestones, and a target level of entrustment for graduating US anesthesiology residents for each EPA and procedural skill. The defined EPAs and procedural skills were implemented using a website and mobile app. The assessment system was piloted at 7 anesthesiology residency programs. After 2 months, faculty were surveyed on their attitudes on usability and utility of the assessment system. The number of evaluations submitted per month was collected for 1 year. RESULTS Participants in EPA development included 18 education experts from 11 different programs. The Delphi rounds produced a final list of 20 EPAs, each differentiated as simple or complex, a defined entrustment scale, mapping of the EPAs to milestones, and graduation entrustment targets. A list of 159 procedural skills was similarly developed. Results of the faculty survey demonstrated favorable ratings on all questions regarding app usability as well as the utility of the app and EPA assessments. Over the 2-month pilot period, 1636 EPA and 1427 procedure assessments were submitted. 
All programs continued to use the app for the remainder of the academic year resulting in 12,641 submitted assessments. CONCLUSIONS A list of 20 anesthesiology EPAs and 159 procedural skills assessments were developed using a rigorous methodology to reach consensus among education experts. The assessments were pilot tested at 7 US anesthesiology residency programs demonstrating the feasibility of implementation using a mobile app and the ability to collect assessment data. Adoption at the pilot sites was variable; however, the use of the system was not mandatory for faculty or trainees at any site.
Affiliation(s)
- Glenn E Woodworth
- From the Department of Anesthesiology and Perioperative Medicine, Oregon Health & Science University, Portland, Oregon
- Adrian P Marty
- Institute of Anesthesiology, University Hospital Zurich, Zurich, Switzerland
- Pedro P Tanaka
- Department of Anesthesiology, Stanford University, Stanford, California
- Aditee P Ambardekar
- Department of Anesthesiology, University of Texas, Southwestern Medical Center, Dallas, Texas
- Fei Chen
- Department of Anesthesiology, University of North Carolina at Chapel Hill School of Medicine, Chapel Hill, North Carolina
- Michael J Duncan
- Department of Anesthesiology, University of Missouri-Kansas City, Kansas City, Missouri
- Ilana R Fromer
- Department of Anesthesiology, University of Minnesota, Minneapolis, Minnesota
- Matthew R Hallman
- Department of Anesthesiology and Pain Medicine, University of Washington, Seattle, Washington
- Lisa L Klesius
- Department of Anesthesiology, University of Wisconsin School of Medicine and Public Health, Madison, Wisconsin
- Beth L Ladlie
- Department of Anesthesiology, Mayo Clinic, Rochester, Minnesota
- Amy K Miller Juve
- Department of Anesthesiology and Perioperative Medicine, Oregon Health & Science University, Portland, Oregon
- Brian J McGrath
- Department of Anesthesiology, University of Florida College of Medicine - Jacksonville, Jacksonville, Florida
- John A Shepler
- Department of Anesthesiology, University of Wisconsin School of Medicine and Public Health, Madison, Wisconsin
- Charles Sims
- Department of Anesthesiology, Mayo Clinic, Rochester, Minnesota
- Christina M Spofford
- Department of Anesthesiology, Medical College of Wisconsin, Milwaukee, Wisconsin
- Wil Van Cleve
- Department of Anesthesiology and Pain Medicine, University of Washington, Seattle, Washington
- Robert B Maniker
- Department of Anesthesiology, Columbia University, New York, New York
33
The competency-based medical education evolution of Canadian emergency medicine specialist training. CAN J EMERG MED 2021; 22:95-102. [PMID: 31965965 DOI: 10.1017/cem.2019.417] [Citation(s) in RCA: 30] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/05/2022]
Abstract
Canadian specialist emergency medicine (EM) residency training is undergoing the most significant transformation in its history. This article describes the rationale, process, and redesign of EM competency-based medical education. The rationale for this evolution in residency education includes 1) improved public trust by increasing transparency of the quality and rigour of residency education, 2) improved fiscal accountability to government and institutions regarding specialist EM training, 3) improved assessment systems to replace poorly functioning end-of-rotation assessment reports and an overemphasis on high-stakes, end-of-training examinations, and 4) tailored learning for residents to address individualized needs. A working group with geographic and stakeholder representation convened over a 2-year period. A consensus process for decision-making was used. Four key design features of the new residency education design include 1) specialty EM-specific outcomes to be achieved in residency; 2) designation of four progressive stages of training, linked to required learning experiences and entrustable professional activities to be achieved at each stage; 3) tailored learning that provides residency programs and learners flexibility to adapt to local resources and learner needs; and 4) programmatic assessment that emphasizes systematic, longitudinal assessments from multiple sources, and sampling sentinel abilities. Required future study includes a program evaluation of this complex education intervention to ensure that intended outcomes are achieved and unintended outcomes are identified.
34.
Richardson D, Kinnear B, Hauer KE, Turner TL, Warm EJ, Hall AK, Ross S, Thoma B, Van Melle E. Growth mindset in competency-based medical education. MEDICAL TEACHER 2021; 43:751-757. [PMID: 34410891] [DOI: 10.1080/0142159x.2021.1928036]
Abstract
The ongoing adoption of competency-based medical education (CBME) across health professions training draws focus to learner-centred educational design and the importance of fostering a growth mindset in learners, teachers, and educational programs. An emerging body of literature addresses the instructional practices and features of learning environments that foster the skills and strategies necessary for trainees to be partners in their own learning and progression to competence and to develop skills for lifelong learning. Aligned with this emerging area is an interest in Dweck's self-theory and the concept of the growth mindset. The growth mindset is an implicit belief held by an individual that intelligence and abilities are changeable, rather than fixed and immutable. In this paper, we present an overview of the growth mindset and how it aligns with the goals of CBME. We describe the challenges associated with shifting away from the fixed mindset of most traditional medical education assumptions and practices and discuss potential solutions and strategies at the individual, relational, and systems levels. Finally, we present future directions for research to better understand the growth mindset in the context of CBME.
Affiliation(s)
- Denyse Richardson
- Department of Medicine, Division of Physiatry, University of Toronto, Ontario, Canada
- Royal College of Physicians and Surgeons of Canada, Ottawa, Canada
- Benjamin Kinnear
- Internal Medicine and Pediatrics, University of Cincinnati College of Medicine, Cincinnati, OH, USA
- Karen E Hauer
- University of California, San Francisco, San Francisco, CA, USA
- Teri L Turner
- Pediatrics, Baylor College of Medicine, Houston, TX, USA
- Eric J Warm
- Internal Medicine and Pediatrics, University of Cincinnati College of Medicine, Cincinnati, OH, USA
- Andrew K Hall
- Royal College of Physicians and Surgeons of Canada, Ottawa, Canada
- Department of Emergency Medicine, Queen's University, Kingston, Canada
- Shelley Ross
- Department of Family Medicine, University of Alberta, Edmonton, Canada
- Brent Thoma
- Royal College of Physicians and Surgeons of Canada, Ottawa, Canada
- Department of Emergency Medicine, University of Saskatchewan, Saskatoon, Canada
- Elaine Van Melle
- Royal College of Physicians and Surgeons of Canada, Ottawa, Canada
- Department of Family Medicine, Queen's University, Kingston, Canada
35.
Ross S, Hauer KE, Wycliffe-Jones K, Hall AK, Molgaard L, Richardson D, Oswald A, Bhanji F. Key considerations in planning and designing programmatic assessment in competency-based medical education. MEDICAL TEACHER 2021; 43:758-764. [PMID: 34061700] [DOI: 10.1080/0142159x.2021.1925099]
Abstract
Programmatic assessment as a concept is still novel for many in clinical education, and there may be a disconnect between the academics who publish about programmatic assessment and the front-line clinical educators who must put theory into practice. In this paper, we clearly define programmatic assessment and present high-level guidelines about its implementation in competency-based medical education (CBME) programs. The guidelines are informed by literature and by lessons learned from established programmatic assessment approaches. We articulate five steps to consider when implementing programmatic assessment in CBME contexts: articulate the purpose of the program of assessment, determine what must be assessed, choose tools fit for purpose, consider the stakes of assessments, and define processes for interpreting assessment data. In the process, we seek to offer a helpful guide or template for front-line clinical educators. We dispel some myths about programmatic assessment to help training programs as they look to design, or redesign, programs of assessment. In particular, we highlight the notion that programmatic assessment is not 'one size fits all'; rather, it is a system of assessment that results when shared common principles are considered and applied by individual programs as they plan and design their own bespoke model of programmatic assessment for CBME in their unique context.
Affiliation(s)
- Shelley Ross
- Department of Family Medicine, University of Alberta, Edmonton, Canada
- Canadian Association for Medical Education, Edmonton, Canada
- Keith Wycliffe-Jones
- Department of Family Medicine, Cumming School of Medicine, University of Calgary, Calgary, Canada
- Andrew K Hall
- Department of Emergency Medicine, Queen's University, Kingston, Canada
- Royal College of Physicians and Surgeons of Canada, Ottawa, Canada
- Laura Molgaard
- University of Minnesota College of Veterinary Medicine, St. Paul, MN, USA
- Denyse Richardson
- Royal College of Physicians and Surgeons of Canada, Ottawa, Canada
- Division of Physiatry, Department of Medicine, University of Toronto, Toronto, Canada
- Anna Oswald
- Royal College of Physicians and Surgeons of Canada, Ottawa, Canada
- Department of Medicine and CBME lead for the Faculty of Medicine & Dentistry, University of Alberta, Edmonton, Canada
- Farhan Bhanji
- Royal College of Physicians and Surgeons of Canada, Ottawa, Canada
- Pediatrics at McGill University, Montreal, Canada
36.
Kinnear B, Warm EJ, Caretta-Weyer H, Holmboe ES, Turner DA, van der Vleuten C, Schumacher DJ. Entrustment Unpacked: Aligning Purposes, Stakes, and Processes to Enhance Learner Assessment. ACADEMIC MEDICINE : JOURNAL OF THE ASSOCIATION OF AMERICAN MEDICAL COLLEGES 2021; 96:S56-S63. [PMID: 34183603] [DOI: 10.1097/acm.0000000000004108]
Abstract
Educators use entrustment, a common framework in competency-based medical education, in multiple ways, including frontline assessment instruments, learner feedback tools, and group decision making within promotions or competence committees. Within these multiple contexts, entrustment decisions can vary in purpose (i.e., intended use), stakes (i.e., perceived risk or consequences), and process (i.e., how entrustment is rendered). Each of these characteristics can be conceptualized as having 2 distinct poles: (1) purpose has formative and summative, (2) stakes has low and high, and (3) process has ad hoc and structured. For each characteristic, entrustment decisions often do not fall squarely at one pole or the other, but rather lie somewhere along a spectrum. While distinct, these continua can, and sometimes should, influence one another, and can be manipulated to optimally integrate entrustment within a program of assessment. In this article, the authors describe each of these continua and depict how key alignments between them can help optimize value when using entrustment in programmatic assessment within competency-based medical education. As they think through these continua, the authors will begin and end with a case study to demonstrate the practical application as it might occur in the clinical learning environment.
Affiliation(s)
- Benjamin Kinnear
- B. Kinnear is associate professor of internal medicine and pediatrics, Department of Pediatrics, University of Cincinnati College of Medicine, Cincinnati, Ohio; ORCID: https://orcid.org/0000-0003-0052-4130
- Eric J Warm
- E.J. Warm is professor of internal medicine and program director, Department of Internal Medicine, University of Cincinnati College of Medicine, Cincinnati, Ohio; ORCID: https://orcid.org/0000-0002-6088-2434
- Holly Caretta-Weyer
- H. Caretta-Weyer is assistant professor of emergency medicine, Department of Emergency Medicine, Stanford University School of Medicine, Palo Alto, California; ORCID: https://orcid.org/0000-0002-9783-5797
- Eric S Holmboe
- E.S. Holmboe is chief, research, milestones development and evaluation officer, Accreditation Council for Graduate Medical Education, Chicago, Illinois; ORCID: https://orcid.org/0000-0003-0108-6021
- David A Turner
- D.A. Turner is vice president, Competency-Based Medical Education, American Board of Pediatrics, Chapel Hill, North Carolina
- Cees van der Vleuten
- C. van der Vleuten is professor of education, Department of Educational Development and Research, Faculty of Health, Medicine and Life Sciences, Maastricht University, Maastricht, The Netherlands; ORCID: https://orcid.org/0000-0001-6802-3119
- Daniel J Schumacher
- D.J. Schumacher is associate professor of pediatrics, Cincinnati Children's Hospital Medical Center/University of Cincinnati College of Medicine, Cincinnati, Ohio; ORCID: https://orcid.org/0000-0001-5507-8452
37.
Hasani H, Khoshnoodifar M, Khavandegar A, Ahmadi S, Alijani S, Mobedi A, Tarani S, Vafadar B, Tajbakhsh R, Rezaei M, Parvari S, Shamsoddini S, Silbert DI. Comparison of electronic versus conventional assessment methods in ophthalmology residents; a learner assessment scholarship study. BMC MEDICAL EDUCATION 2021; 21:342. [PMID: 34120607] [PMCID: PMC8201812] [DOI: 10.1186/s12909-021-02759-9]
Abstract
BACKGROUND Assessment is a necessary part of training postgraduate medical residents. The implementation of methods located at the "shows how" level of Miller's pyramid is believed to be more effective than previous conventional tools. In this study, we quantitatively compared electronic and conventional methods in assessing ophthalmology residents. METHODS In this retrospective study, eight different conventional methods of assessment including residents' attendance, logbook, scholarship and research skills, journal club, outpatient department participation, Multiple Choice Question (MCQ), Objective Structured Clinical Examination (OSCE), and professionalism/360-degree (as one complex) were used to assess 24 ophthalmology residents of all grades. Electronic media consisting of an online Patient Management Problem (e-PMP) and modified electronic OSCE (me-OSCE) tests performed 3 weeks later were also evaluated for each of the 24 residents. Quantitative analysis was then performed comparing the conventional and electronic assessment tools, statistically assessing the correlation between the two approaches. RESULTS Twenty-four ophthalmology residents of different grades were included in this study. In the electronic assessment, average e-PMP scores (48.01 ± 12.40) were much lower than me-OSCE scores (65.34 ± 17.11). The total average electronic score was 56.67 ± 11.28, while the total average conventional score was 80.74 ± 5.99. Female and male residents' average scores in the electronic and conventional methods were (59.15 ± 12.32 versus 83.01 ± 4.95) and (55.19 ± 10.77 versus 79.38 ± 6.29), respectively. The correlation between the modified electronic OSCE and all conventional methods was not statistically significant (P-value > 0.05). The correlation between e-PMP and six conventional methods, consisting of the professionalism/360-degree assessment tool, logbook, research skills, Multiple Choice Questions, outpatient department participation, and journal club active participation, was statistically significant (P-value < 0.05). The overall correlation between conventional and electronic methods was significant (P-value = 0.017). CONCLUSION In this study, we conclude that the electronic PMP can be used alongside all conventional tools and that, overall, e-assessment methods could replace currently used conventional methods. Combined electronic PMP and me-OSCE can be used as a replacement for currently used gold-standard assessment methods, including 360-degree assessment.
Affiliation(s)
- Hamidreza Hasani
- Eye Research Center, The Five Senses Institute, Rassoul Akram Hospital, Iran University of Medical Sciences, Tehran, Iran
- Department of Ophthalmology, Madani Medical Center, School of Medicine, Alborz University of Medical Sciences, Karaj, Iran
- Mehrnoosh Khoshnoodifar
- School of Management & Medical Education, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Armin Khavandegar
- Student Research Committee, Alborz University of Medical Sciences, Karaj, Iran
- Soleyman Ahmadi
- School of Management & Medical Education, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Saba Alijani
- Student Research Committee, Alborz University of Medical Sciences, Karaj, Iran
- Aidin Mobedi
- Student Research Committee, Alborz University of Medical Sciences, Karaj, Iran
- Shaghayegh Tarani
- Student Research Committee, Alborz University of Medical Sciences, Karaj, Iran
- Benyamin Vafadar
- Student Research Committee, Alborz University of Medical Sciences, Karaj, Iran
- Ramin Tajbakhsh
- Non-Communicable Disease Research Center, Alborz University of Medical Sciences, Karaj, Iran
- Mehdi Rezaei
- Department of Emergency Medicine, School of Medicine, Alborz University of Medical Sciences, Karaj, Iran
- Soraya Parvari
- Department of Anatomical Sciences, School of Medicine, Alborz University of Medical Sciences, Karaj, Iran
38.
Wilbur K, Teunissen PW, Scheele F, Driessen EW. Team member expectations of trainee communicator and collaborator competencies - so shines a good deed in a weary world? MEDICAL TEACHER 2021; 43:531-537. [PMID: 33476215] [DOI: 10.1080/0142159x.2021.1874325]
Abstract
BACKGROUND Workplace-based assessment may be further optimized by drawing upon the perspectives of multiple assessors, including those outside the trainee's discipline. Interprofessional competencies like communication and collaboration are often considered suitable for team input. AIM We sought to characterize multidisciplinary expectations of communicator and collaborator competency roles. METHODS We adopted a constructivist grounded theory approach to explore perspectives of multidisciplinary team members on a clinical teaching unit. In semi-structured interviews, participants described expectations for competent collaboration and communication of trainees outside their own discipline. Data were analyzed to identify recurring themes, underlying concepts and their interactions using constant comparison. RESULTS Three main underlying perspectives influenced interprofessional characterization of competent communication and collaboration: (1) general expectations of best practice; (2) specific expectations of supportive practice; and (3) perceived commitment to teaching practice. However, participants seemingly judged trainees outside their discipline according to how competencies were exercised to advance their own professional patient care decision-making, with minimal attention to the trainee's specific skillset demonstrated. CONCLUSION While team members expressed commitment to supporting interprofessional competency development of trainees outside their discipline, service-oriented judgement of performance loomed large. The potential impact on the credibility of multidisciplinary sources for workplace-based assessment requires consideration.
Affiliation(s)
- Kerry Wilbur
- Faculty of Pharmaceutical Sciences, University of British Columbia, Vancouver, Canada
- Pim W Teunissen
- School of Health Professions Education, Maastricht University, Maastricht, the Netherlands
- Department of Maternal Fetal Medicine, Maastricht University Medical Center, Amsterdam, Netherlands
- Fedde Scheele
- Health Systems Innovation and Education, Amsterdam UMC and the Athena Institute of the VU University, Amsterdam, The Netherlands
- Erik W Driessen
- School of Health Professions Education, Maastricht University, Maastricht, the Netherlands
39.
Pinilla S, Kyrou A, Klöppel S, Strik W, Nissen C, Huwendiek S. Workplace-based assessments of entrustable professional activities in a psychiatry core clerkship: an observational study. BMC MEDICAL EDUCATION 2021; 21:223. [PMID: 33882926] [PMCID: PMC8059233] [DOI: 10.1186/s12909-021-02637-4]
Abstract
BACKGROUND Entrustable professional activities (EPAs) in competency-based, undergraduate medical education (UME) have led to new formative workplace-based assessments (WBA) using entrustment-supervision scales in clerkships. We conducted an observational, prospective cohort study to explore the usefulness of a WBA designed to assess core EPAs in a psychiatry clerkship. METHODS We analyzed changes in self-entrustment ratings of students and the supervisors' ratings per EPA. Timing and frequencies of learner-initiated WBAs based on a prospective entrustment-supervision scale and resultant narrative feedback were analyzed quantitatively and qualitatively. Predictors for indirect supervision levels were explored via regression analysis, and narrative feedback was coded using thematic content analysis. Students evaluated the WBA after each clerkship rotation. RESULTS EPA 1 ("Take a patient's history"), EPA 2 ("Assess physical & mental status") and EPA 8 ("Document & present a clinical encounter") were most frequently used for learner-initiated WBAs throughout the clerkship rotations in a sample of 83 students. Clinical residents signed off on the majority of the WBAs (71%). EPAs 1, 2, and 8 showed the largest increases in self-entrustment and received most of the indirect supervision level ratings. We found a moderate, positive correlation between self-entrusted supervision levels at the end of the clerkship and the number of documented entrustment-supervision ratings per EPA (p < 0.0001). The number of entrustment ratings explained 6.5% of the variance in the supervisors' ratings for EPA 1. Narrative feedback was documented for 79% (n = 214) of the WBAs. Most narratives addressed the Medical Expert role (77%, n = 208) and used reinforcement (59%, n = 161) as a feedback strategy. Students perceived the feedback as beneficial. CONCLUSIONS Using formative WBAs with an entrustment-supervision scale and prompts for written feedback facilitated targeted, high-quality feedback and effectively supported students' development toward self-entrusted, indirect supervision levels.
Affiliation(s)
- Severin Pinilla
- University Hospital of Old Age Psychiatry and Psychotherapy, University of Bern, Bern, Switzerland
- Department for Assessment and Evaluation, Institute for Medical Education, University of Bern, Bern, Switzerland
- Alexandra Kyrou
- University Hospital of Psychiatry and Psychotherapy, University of Bern, Bern, Switzerland
- Stefan Klöppel
- University Hospital of Old Age Psychiatry and Psychotherapy, University of Bern, Bern, Switzerland
- Werner Strik
- University Hospital of Psychiatry and Psychotherapy, University of Bern, Bern, Switzerland
- Christoph Nissen
- University Hospital of Psychiatry and Psychotherapy, University of Bern, Bern, Switzerland
- Sören Huwendiek
- Department for Assessment and Evaluation, Institute for Medical Education, University of Bern, Bern, Switzerland
40.
Landreville JM, Frank JR, Cheung WJ. Does direct observation happen early in a new competency-based residency program? AEM EDUCATION AND TRAINING 2021; 5:e10591. [PMID: 33842816] [PMCID: PMC8019151] [DOI: 10.1002/aet2.10591]
Abstract
BACKGROUND A key component of competency-based medical education is workplace-based assessment, which includes observation (direct or indirect) of residents. Direct observation has been emphasized as an ideal form of assessment, yet challenges have been identified that may limit its adoption. At present, it remains unclear how often direct and indirect observation are being used within the clinical setting. The objective of this study was to describe patterns of observation in an emergency medicine competency-based program 2 years postimplementation. METHODS Emergency medicine residents (n = 19) recorded the type of observation they received (direct or indirect) following workplace-based entrustable professional activity (EPA) assessments from December 15, 2019, to April 30, 2020. Assessment forms were reviewed and analyzed to describe patterns of observation. RESULTS Assessments were collected on all 19 eligible residents (100% participation). A total of 1,070 EPA assessments were completed during the study period, of which 798 (74.6%) had the type of observation recorded. Of these recorded observations, 546 (68.4%) were directly observed and 252 (31.6%) were indirectly observed. The length of written comments contained within assessments following direct and indirect observation did not differ significantly. There was no significant association between resident gender and observation type or resident stage of training and observation type. Certain EPA assessments showed a clear preference toward either direct or indirect observation. CONCLUSIONS To the best of our knowledge, this study is the first to report patterns of observation in a competency-based residency program. The results suggest that direct observation can be quickly adopted as the primary means of workplace-based assessment. Indirect observation comprised a sizeable minority of observations and may be an underrecognized contributor to workplace-based assessment. The preference toward either direct or indirect observation for certain EPA assessments suggests that the entrustable professional activity itself may influence the type of observation.
Affiliation(s)
- Jason R. Frank
- Department of Emergency Medicine, University of Ottawa, Ottawa, Ontario, Canada
- Warren J. Cheung
- Department of Emergency Medicine, University of Ottawa, Ottawa, Ontario, Canada
41.
Lee AS, Ross S. Continuity of supervision: Does it mean what we think it means? MEDICAL EDUCATION 2021; 55:448-454. [PMID: 32929800] [DOI: 10.1111/medu.14378]
Abstract
CONTEXT Continuity of supervision (CoS) is generally accepted as an important element of competency-based medical education (CBME). However, collecting and interpreting evidence for its effectiveness are a challenge because we lack a shared understanding of CoS. Translating the available evidence about CoS into practice is an even greater challenge because the evidence largely exists in the undergraduate medical education (UME) literature, whereas literature about CBME is mostly situated in postgraduate medical education (PGME). PROPOSAL We explore the potential dangers of basing assumptions of the importance of CoS in CBME on evidence from the UME level where CBME is yet to be widely implemented. First, we discuss current understandings of what is meant by CoS and examine some of its evidence and where such evidence comes from. Next, we consider relevant theories related to CoS in the context of CBME and review how it is conceptualised in different educational models. We then discuss some contextual and pedagogical differences between UME and PGME when CoS is considered. Finally, we propose a shared understanding of CoS and outline implications and next steps to determine if the benefits of CoS seen at the UME level will also manifest with PGME learners. CONCLUSIONS We have the opportunity to undertake research to close our gap in knowledge about CoS at the PGME level using data emerging from our experiences with CBME. Selecting specific dimensions of CoS will allow research that is necessary to determine that what works at the UME level will also work at the PGME level as we continue to march towards CBME.
Affiliation(s)
- Ann S Lee
- Department of Family Medicine, Faculty of Medicine, University of Alberta, Edmonton, AB, Canada
- Shelley Ross
- Department of Family Medicine, Faculty of Medicine, University of Alberta, Edmonton, AB, Canada
42.
Dehghani Poudeh M, Mohammadi A, Mojtahedzadeh R, Yamani N. Entrustability levels of general internal medicine residents. BMC MEDICAL EDUCATION 2021; 21:185. [PMID: 33766005] [PMCID: PMC7995576] [DOI: 10.1186/s12909-021-02624-9]
Abstract
BACKGROUND Entrustable professional activities (EPAs) are those activities that a health professional can perform without direct supervision in a defined environment. Bridging the gap between competencies and learning objectives, EPAs have made assessing the performance of health professionals more realistic. The main objective of the present study was to develop and customize EPAs for Iranian Internal Medicine Residency Programs. RESULTS After reviewing the publications, residency curricula, and logbooks, and collecting experts' ideas, the initial list of EPAs was developed. Then, in a focus group, the list was refined, the entrustability level for each residency year was determined, and the EPA-competency cross-tab was established; in the next step, through a one-round Delphi, the results were validated. Twenty-eight EPAs were developed. Some of them were definitely suitable for the higher levels of residency, such that they had to be accomplished under direct supervision until the end of the program. On the other hand, some EPAs were those that residents, even from the first year, are expected to perform independently or under indirect supervision. Most of the EPAs cover a wide range of competencies. CONCLUSION Determining the entrustability level of each residency year for each EPA, as well as the competency-EPA matrix, has a crucial effect on the quality of the graduates. It seems that our findings are applicable in developing countries like Iran.
Affiliation(s)
- Mostafa Dehghani Poudeh
- Department of Medical Education, School of Medicine, Tehran University of Medical Sciences, Tehran, Iran
- Aeen Mohammadi
- Department of E-learning in Medical Education, Virtual School, Center for Excellence in E-learning in Medical Education, Tehran University of Medical Sciences, Tehran, Iran
- Rita Mojtahedzadeh
- Department of E-learning in Medical Education, Virtual School, Center for Excellence in E-learning in Medical Education, Tehran University of Medical Sciences, Tehran, Iran
- Nikoo Yamani
- Department of Medical Education, Isfahan University of Medical Sciences, Isfahan, Iran
43.
Mishra S, Chung A, Rogoza C, Islam O, Mussari B, Wang X, Dagnone D, Cofie N, Dalgarno N, Kwan BYM. Creating a Competency-Based Medical Education Curriculum for Canadian Diagnostic Radiology Residency (Queen's Fundamental Innovations in Residency Education) - Part 2: Core of Discipline Stage. Can Assoc Radiol J 2021; 72:678-685. [PMID: 33656945] [DOI: 10.1177/0846537121993058]
Abstract
PURPOSE All postgraduate residency programs in Canada are transitioning to a competency-based medical education (CBME) model divided into 4 stages of training. Queen's University has been the first Canadian institution to mandate transitioning to CBME across all residency programs, including Diagnostic Radiology. This study describes the implementation of CBME with a focus on the third developmental stage, Core of Discipline, in the Diagnostic Radiology residency program at Queen's University. We describe strategies applied and challenges encountered during the adoption and implementation process in order to inform the development of other CBME residency programs in Diagnostic Radiology. METHODS At Queen's University, the Core of Discipline stage was developed using the Royal College of Physicians and Surgeons of Canada's (RCPSC) competence continuum guidelines and the CanMEDS framework to create radiology-specific entrustable professional activities (EPAs) and milestones for assessment. New committees, administrative positions, and assessment strategies were created to develop these assessment guidelines. Currently, 2 cohorts of residents (n = 6) are enrolled in the Core of Discipline stage. RESULTS EPAs, milestones, and methods of evaluation for the Core of Discipline stage are described. Opportunities during implementation included tracking progress toward educational objectives and increased mentorship. Challenges included difficulty meeting procedural volume requirements, inconsistent procedural tracking, improving feedback mechanisms, and administrative burden. CONCLUSION The transition to a competency-based curriculum in an academic Diagnostic Radiology residency program is significantly resource- and time-intensive. This report describes challenges faced in developing the Core of Discipline stage and potential solutions to facilitate this process.
Affiliation(s)
- Siddharth Mishra, Department of Diagnostic Radiology, Kingston Health Sciences Centre, Kingston, Ontario, Canada
- Andrew Chung, Department of Diagnostic Radiology, Kingston Health Sciences Centre, Kingston, Ontario, Canada
- Christina Rogoza, Queen's University Faculty of Health Sciences, Kingston, Ontario, Canada
- Omar Islam, Department of Diagnostic Radiology, Kingston Health Sciences Centre, Kingston, Ontario, Canada
- Benedetto Mussari, Department of Diagnostic Radiology, Kingston Health Sciences Centre, Kingston, Ontario, Canada
- Xi Wang, Department of Diagnostic Radiology, Kingston Health Sciences Centre, Kingston, Ontario, Canada
- Damon Dagnone, Department of Emergency Medicine, Kingston Health Sciences Centre, Kingston, Ontario, Canada
- Nicholas Cofie, Queen's University Faculty of Health Sciences, Kingston, Ontario, Canada
- Nancy Dalgarno, Queen's University Faculty of Health Sciences, Kingston, Ontario, Canada
- Benjamin Y M Kwan, Department of Diagnostic Radiology, Kingston Health Sciences Centre, Kingston, Ontario, Canada
44
Saad SL, Richmond CE, Jones K, Malau-Aduli BS. Developing a community of practice for quality assurance within healthcare assessment. MEDICAL TEACHER 2021; 43:174-181. [PMID: 33103522 DOI: 10.1080/0142159x.2020.1830959] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
BACKGROUND The Australian Collaboration for Clinical Assessment in Medicine (ACCLAiM) is a voluntary assessment consortium, involving medical schools nationwide. The aims of ACCLAiM are to benchmark student clinical assessment outcomes and to provide quality assurance (QA) of exit-level Objective Structured Clinical Exams (OSCEs). This study aimed to evaluate the impact of the ACCLAiM QA process for optimising OSCE delivery standards at the member schools using a Community of Practice (CoP) framework. METHODS A mixed methods sequential explanatory design, involving an online questionnaire and subsequent focus group discussions, was utilised. Questionnaire responses were analysed using descriptive statistics, while thematic analysis was employed for the qualitative data. RESULTS Data analysis revealed that school-specific OSCE practices had evolved based on QA feedback, as well as a collaborative sharing of expertise consistent with a CoP model. Extending beyond a QA working group for accountability and demonstration of minimum standards, participation in ACCLAiM QA processes is creating a sustainable socio-academic network focused on quality improvement. CONCLUSION Collaborative QA in clinical assessment creates opportunities for optimising standards in OSCE processes and sharing of resources for OSCE assessments. It also allows for professional development and scholarly engagement in assessment research. These benefits contribute to the existence of an emergent CoP model.
Affiliation(s)
- Shannon L Saad, School of Medicine, The University of Notre Dame, Sydney, Australia
- Karina Jones, College of Medicine and Dentistry, James Cook University, Douglas, Australia
45
Bearman M, Brown J, Kirby C, Ajjawi R. Feedback That Helps Trainees Learn to Practice Without Supervision. ACADEMIC MEDICINE : JOURNAL OF THE ASSOCIATION OF AMERICAN MEDICAL COLLEGES 2021; 96:205-209. [PMID: 32889944 DOI: 10.1097/acm.0000000000003716] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
Feedback pedagogies and research tend to focus on immediate corrective actions rather than learning for the longer term. This approach means that feedback may not support trainees who are managing complex, competing, and ambiguous practice situations, often with limited supervision. There is an opportunity to consider how feedback can help medical trainees sustain their own development into the future, including when they have completed formal training. This article explores how feedback pedagogies can facilitate medical trainees' abilities to develop challenging aspects of practice across multiple clinical environments to eventually practice without supervision. From a sociocultural perspective, clinical training takes place within a practice curriculum; each clinical environment offers varying opportunities, which the trainees may choose to engage with. The authors propose feedback as an interpersonal process that helps trainees make sense of both formal training requirements and performance-relevant information, including workplace cues such as patient outcomes or colleagues' comments, found within any practice curriculum. A significant pedagogic strategy may be to develop trainees' evaluative judgment, or their capability to identify and appraise the qualities of good practice in both themselves and others. In this way, feedback processes may help trainees surmount complex situations and progressively gain independence from supervision.
Affiliation(s)
- Margaret Bearman, research professor, Centre for Research in Assessment and Digital Learning, Deakin University, Melbourne, Australia; ORCID: https://orcid.org/0000-0002-6862-9871
- James Brown, medical practitioner, supervisor, and principal medical education advisor, Royal Australian College of General Practitioners, Melbourne, Australia, and PhD candidate, Monash University, Melbourne, Australia; ORCID: https://orcid.org/0000-0002-7262-1629
- Catherine Kirby, research, evaluation, and policy officer, Family Planning Victoria, and adjunct senior lecturer, School of Rural Health, Monash University, Melbourne, Australia
- Rola Ajjawi, associate professor, Centre for Research in Assessment and Digital Learning, Deakin University, Melbourne, Australia; ORCID: https://orcid.org/0000-0003-0651-3870
46
Tu W, Hibbert R, Kontolemos M, Dang W, Wood T, Verma R, McInnes MDF. Diagnostic Radiology Residency Assessment Tools: A Scoping Review. Can Assoc Radiol J 2021; 72:651-660. [PMID: 33401932 DOI: 10.1177/0846537120981581] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/04/2023] Open
Abstract
PURPOSE The multifaceted nature of learning in diagnostic radiology residency requires a variety of assessment methods. However, the scope and quality of assessment tools have not been formally examined. A scoping review was performed to identify assessment tools available for radiology resident training and to evaluate the validity of these tools. METHODS A literature search was conducted through multiple databases and on-line resources. Inclusion criteria were defined as any tool used in the assessment of radiology resident competence. Data regarding residents, evaluators, and the specifics of each tool were extracted. Each tool was subjected to a validation process with a customized rating scale using the 5 categories of validity: content, response process, internal structure, relations to other variables, and consequences. RESULTS The initial search returned 447 articles; 35 were included. The most frequently evaluated competency was overall knowledge (31%), the most common publishing journal was Academic Radiology (24%), and evaluations were most commonly set in the United States (57%). In terms of validation, we found low adherence to modern integrated validity, with only 34% of studies including a definition of validity. When specifically examining the 5 domains of validation evidence presented, most were either absent or of low rigor (70%). Only one study presented a modern definition of validation (3%, 1/35). CONCLUSION We identified 35 evaluation tools covering a variety of competency areas. However, few of these tools have been validated. Development of new validated assessment tools, or validation of existing tools, is essential for the ongoing transition to a competency-based curriculum.
Affiliation(s)
- Wendy Tu, Department of Radiology, University of Ottawa, Ontario, Canada
- Rebecca Hibbert, Department of Radiology, University of Ottawa, Ontario, Canada
- Mario Kontolemos, Department of Radiology, University of Ottawa, Ontario, Canada
- Wilfred Dang, Department of Radiology, University of Ottawa, Ontario, Canada
- Tim Wood, Department of Innovation in Medical Education, University of Ottawa, Ontario, Canada
- Raman Verma, Department of Radiology, University of Ottawa, Ontario, Canada
- Matthew D F McInnes, Department of Radiology, University of Ottawa, Ontario, Canada; Clinical Epidemiology Program, Ottawa Hospital Research Institute, Ottawa, Ontario, Canada
47
Abstract
Entrustment decision-making has become a topic of interest in workplace-based assessment in the health professions and is germane to the use of entrustable professional activities. Entrustment decisions stem from judgments of a trainee's competence and include the permission to act with a higher level of responsibility or autonomy and a lower level of supervision. Making entrustment decisions differs from regular assessment of trainees, which usually has no consequences beyond marking trainee progress. Studies show that clinicians generally weigh more factors in making an entrustment decision than when merely assessing trainee competence or performance without direct consequences for patient care. To synthesize the varying factors reported in literature, the authors performed a thematic analysis of key qualitative studies that investigated trainee features clinical supervisors find important when making entrustment decisions. Five themes emerged from the 13 publications: Capability (specific knowledge, skills, experience, situational awareness), Integrity (truthful, benevolent, patient-centered), Reliability (conscientious, predictable, accountable, responsible), Humility (recognizes limits, asks for help, receptive to feedback), Agency (proactive toward work, team, safety, personal development). Thoughtful entrustment decisions, made either by individual clinical supervisors or by clinical competency committees, may be enriched by taking into account these five features.
Affiliation(s)
- Olle Ten Cate, Center for Research and Development of Education, University Medical Center Utrecht, Utrecht, The Netherlands
- H Carrie Chen, Georgetown University School of Medicine, Washington, USA
48
Branfield Day L, Miles A, Ginsburg S, Melvin L. Resident Perceptions of Assessment and Feedback in Competency-Based Medical Education: A Focus Group Study of One Internal Medicine Residency Program. ACADEMIC MEDICINE : JOURNAL OF THE ASSOCIATION OF AMERICAN MEDICAL COLLEGES 2020; 95:1712-1717. [PMID: 32195692 DOI: 10.1097/acm.0000000000003315] [Citation(s) in RCA: 35] [Impact Index Per Article: 8.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/26/2023]
Abstract
PURPOSE As key participants in the assessment dyad, residents must be engaged with the process. However, residents' experiences with competency-based medical education (CBME), and specifically with entrustable professional activity (EPA)-based assessments, have not been well studied. The authors explored junior residents' perceptions regarding the implementation of EPA assessment and feedback initiatives in an internal medicine program. METHOD From May to November 2018, 5 focus groups were conducted with 28 first-year internal medicine residents from the University of Toronto, exploring their experiences with facilitators and barriers to EPA-based assessments in the first years of the CBME initiative. Residents were exposed to EPA-based feedback tools from early in residency. Themes were identified using constructivist grounded theory to develop a framework to understand the resident perception of EPA assessment and feedback initiatives. RESULTS Residents' discussions reflected a growth mindset orientation, as they valued the idea of meaningful feedback through multiple low-stakes assessments. However, in practice, feedback seeking was onerous. While the quantity of feedback had increased, the quality had not; some residents felt it had worsened, with feedback reduced to a form-filling exercise. The assessments were felt to have increased daily workload, with consequent disrupted workflow, and to have blurred the lines between formative and summative assessment. CONCLUSIONS Residents embraced the driving principles behind CBME, but their experience suggested that changes are needed for CBME in the study site program to meet its goals. Efforts may be needed to reconcile the tension between assessment and feedback and to effectively embed meaningful feedback into CBME learning environments.
Affiliation(s)
- Leora Branfield Day, fourth-year chief medical resident, internal medicine training program, Department of Medicine, Faculty of Medicine, University of Toronto, Toronto, Ontario, Canada
- Amy Miles, fourth-year resident, geriatric medicine subspecialty training program, Department of Medicine, Faculty of Medicine, University of Toronto, Toronto, Ontario, Canada
- Shiphra Ginsburg, staff physician, Division of Respirology, Mount Sinai Hospital and University Health Network; professor, Department of Medicine, Faculty of Medicine, University of Toronto; and scientist, Wilson Centre for Research in Education, University of Toronto, Toronto, Ontario, Canada
- Lindsay Melvin, assistant professor, Department of Medicine, Faculty of Medicine, University of Toronto, and staff physician, Division of General Internal Medicine, University Health Network, Toronto, Ontario, Canada
49
McEllistrem B, Barrett A, Hanley K. Performance in practice; exploring trainer and trainee experiences of user-designed formative assessment tools. EDUCATION FOR PRIMARY CARE 2020; 32:27-33. [PMID: 33094687 DOI: 10.1080/14739879.2020.1815085] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/23/2022]
Abstract
INTRODUCTION General Practice training in Ireland currently uses various methods of formative assessment and feedback for trainees. In 2018 the Irish College of General Practitioners commissioned the development of two new user-designed formative feedback tools that would allow trainee feedback to drive learning. These tools became known as the Performance in Practice (PiP) tools. AIMS To explore the experiences of General Practice (GP) trainers and trainees after a 4-month pilot of the PiP tools. METHODS An explorative phenomenological approach was taken to understand the experiences of trainers and trainees. One-to-one interviews were conducted, and the transcripts were analysed for themes and sub-themes via template analysis. RESULTS User experiences centred on two main areas: educational value and acceptability. In relation to educational value, the PiP tools were seen as an improvement over established forms of formative feedback, as they were centred on the curriculum and therefore reflected the unique multifaceted requirements of an independently practising GP. Acceptability primarily concerned data governance and structures, as well as practical issues such as ease of software use. CONCLUSIONS Overall, the experience of using the PiP tools was positive for both trainers and trainees. Future plans to further explore implementation of the PiP tools have been significantly informed by this research.
Affiliation(s)
- B McEllistrem, General Practice Training Unit, Irish College of General Practitioners, Dublin, Ireland
- A Barrett, General Practice Training Unit, Irish College of General Practitioners, Dublin, Ireland
- K Hanley, General Practice Training Unit, Irish College of General Practitioners, Dublin, Ireland
50
Mann S, Truelove AH, Beesley T, Howden S, Egan R. Resident perceptions of Competency-Based Medical Education. CANADIAN MEDICAL EDUCATION JOURNAL 2020; 11:e31-e43. [PMID: 33062088 PMCID: PMC7522862 DOI: 10.36834/cmej.67958] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/30/2023]
Abstract
BACKGROUND Residency training programs in Canada are undergoing a mandated transition to competency-based medical education (CBME). There is limited literature regarding resident perspectives on CBME. As upper year residents act as mentors and assessors for incoming cohorts, and are themselves key stakeholders in this educational transition, it is important to understand how they view CBME. We examined how residents who are not currently enrolled in a competency-based program view that method of training, and what they perceive as potential advantages, disadvantages, and considerations regarding its implementation. METHODS Sixteen residents volunteered to participate in individual semi-structured interviews, with questions focussing on participants' knowledge of CBME and its implementation. We used a grounded theory approach to develop explanations of how residents perceive CBME. RESULTS Residents anticipated improved assessment and feedback, earlier identification of residents experiencing difficulties in training, and greater flexibility to pursue self-identified educational needs. Disadvantages included logistical issues surrounding CBME implementation, ability of attending physicians to deliver CBME-appropriate feedback, and the possibility of assessment fatigue. Clear, detailed communication and channels for resident feedback were key considerations regarding implementation. CONCLUSIONS Resident views align with educational experts regarding the practical challenges of implementation. Expectations of improved assessment and feedback highlight the need for both residents and attending physicians to be equipped in these domains. Consequently, faculty development and clear communication will be crucial aspects of successful transitioning to CBME.
Affiliation(s)
- Steve Mann, Department of Surgery, Queen’s University, Ontario, Canada
- Theresa Beesley, Office of Accreditation and Education Quality Improvement, Faculty of Medicine, McGill University, Quebec, Canada
- Stella Howden, Centre for Medical Education, University of Dundee, Scotland, United Kingdom
- Rylan Egan, Health Quality Programs, Queen’s University, Ontario, Canada