1
Hays RB, Wilkinson T, Green-Thompson L, McCrorie P, Bollela V, Nadarajah VD, Anderson MB, Norcini J, Samarasekera DD, Boursicot K, Malau-Aduli BS, Mandache ME, Nadkar AA. Managing assessment during curriculum change: Ottawa Consensus Statement. Medical Teacher 2024; 46:874-884. PMID: 38766754. DOI: 10.1080/0142159x.2024.2350522.
Abstract
Curriculum change is relatively frequent in health professional education. Formal, planned curriculum review must be conducted periodically to incorporate new knowledge and skills, changing teaching and learning methods, or changing roles and expectations of graduates. Unplanned curriculum evolution arguably happens continually, usually taking the form of "minor" changes that in combination over time may produce a substantially different programme. However, reviewing assessment practices is less likely to be a major consideration during curriculum change, overlooking the potential for unintended consequences for learning. This includes potentially undermining or negating the impact of even well-designed and important curriculum changes. Changes to any component of the curriculum "ecosystem" - graduate outcomes, content, delivery, or assessment of learning - should trigger an automatic review of the whole ecosystem to maintain constructive alignment. Consideration of potential impact on assessment is essential to support curriculum change. Powerful contextual drivers of a curriculum include national examinations and programme accreditation, so each assessment programme sits within its own external context. Internal drivers are also important, such as the adoption of new learning technologies and the learning preferences of students and faculty. Achieving optimal and sustainable outcomes from a curriculum review requires strong governance and support, stakeholder engagement, curriculum and assessment expertise, and internal quality assurance processes. This consensus paper provides guidance on managing assessment during curriculum change, building on evidence and the contributions of previous consensus papers.
Affiliation(s)
- Richard B Hays
- College of Medicine and Dentistry, James Cook University, Townsville, Australia
- Tim Wilkinson
- Christchurch School of Medicine & Health Sciences, University of Otago, Christchurch, New Zealand
- Peter McCrorie
- Centre for Medical and Healthcare Education, St George's, University of London, London, United Kingdom
- Valdes Bollela
- Medical Education, Universidade Cidade de São Paulo, São Paulo, Brazil
- Bunmi S Malau-Aduli
- College of Medicine and Dentistry, James Cook University, Townsville, Australia
- School of Medicine and Public Health, The University of Newcastle College of Health Medicine and Wellbeing, New South Wales, Australia
- Azhar Adam Nadkar
- Department of Medicine and Health Sciences, Stellenbosch University, Cape Town, South Africa
2
Favier R, Proot J, Matiasovic M, Roos A, Knaake F, van der Lee A, den Toom M, Paes G, van Oostrom H, Verstappen F, Beukers M, van den Herik T, Bergknut N. Towards a flexible and personalised development of veterinarians and veterinary nurses working in a companion animal referral care setting. Vet Med Sci 2024; 10:e1518. PMID: 38952266. PMCID: PMC11217593. DOI: 10.1002/vms3.1518. Open access.
Abstract
In the Netherlands, the demand for veterinarians and veterinary nurses (VNs) working within referral care is rapidly growing and currently exceeds the number of available board-certified specialists. At the same time, a transparent structure to guide training and development and to assess the quality of non-specialist veterinarians and VNs working in a referral setting is lacking. In response, we developed learning pathways guided by an entrustable professional activity (EPA) framework and programmatic assessment to support the personalised development and competence of veterinarians and VNs working in referral settings. Between 4 and 35 EPAs, varying per discipline (n = 11), were developed. To date, 20 trainees across five disciplines have been entrusted. Trainees from these learning pathways have proceeded to acquire new EPAs in addition to their already entrusted set of EPAs, or have progressed to specialist training during (n = 3) or after successfully completing (n = 1) the learning pathway. Due to their outcome-based approach, the learning pathways support flexible ways of development.
Affiliation(s)
- Joachim Proot
- Evidensia Dierenziekenhuis Barendrecht, Barendrecht, The Netherlands
- Arno Roos
- Evidensia Dierenziekenhuis Nieuwegein, Nieuwegein, The Netherlands
- Frans Knaake
- Evidensia Dierenziekenhuis Den Haag, Den Haag, The Netherlands
- Geert Paes
- IVC Evidensia the Netherlands, Vleuten, The Netherlands
- Hugo van Oostrom
- Evidensia Dierenziekenhuis Barendrecht, Barendrecht, The Netherlands
- Evidensia Dierenziekenhuis Arnhem, Arnhem, The Netherlands
- Martijn Beukers
- Evidensia Dierenziekenhuis Barendrecht, Barendrecht, The Netherlands
- Evidensia Dierenziekenhuis Hart van Brabant, Waalwijk, The Netherlands
- Niklas Bergknut
- Evidensia Dierenziekenhuis Hart van Brabant, Waalwijk, The Netherlands
3
Lim A, Krishnan S, Singh H, Furletti S, Sarkar M, Stewart D, Malone D. Linking assessment to real life practice - comparing work based assessments and objective structured clinical examinations using mystery shopping. Advances in Health Sciences Education: Theory and Practice 2024; 29:859-878. PMID: 37728720. PMCID: PMC11208193. DOI: 10.1007/s10459-023-10284-1.
Abstract
Objective Structured Clinical Examinations (OSCEs) and Work Based Assessments (WBAs) are the mainstays of assessing clinical competency in health professions' education. Underpinned by the extrapolation inference in Kane's Validity Framework, the purpose of this study was to determine whether OSCE performance translates to real-life performance by comparing students' OSCE performance to their performance in real life (as a WBA) using the same clinical scenario, and to understand the factors that affect students' performance. A sequential explanatory mixed methods approach was used, in which students' grades in the OSCE and the WBA were compared. Students were third-year pharmacy undergraduates on placement at a community pharmacy in 2022. The WBA was conducted by a simulated patient, unbeknownst to students and indistinguishable from a genuine patient, who visited the pharmacy asking for health advice. The simulated patient is referred to as a 'mystery shopper' and the process as 'mystery shopping' in this manuscript. Community pharmacy is an ideal setting for real-time observation and mystery shopping, as staff can be accessed without an appointment. The students' provision of care and clinical knowledge was assessed by the mystery shopper using the same clinical checklist against which the student was assessed in the OSCE. Students who had completed the WBA were then invited to participate in semi-structured interviews to discuss their experiences in both settings. Overall, 92 mystery shopper (WBA) visits with students were conducted and 36 follow-up interviews were completed. The median WBA score (41.7% [IQR 28.3]) was significantly lower than the median OSCE score (80.9% [IQR 19.0]) across all participants (p < 0.001). Interviews revealed that students knew they did not perform as well in the WBA as in their OSCE, but reflected that they still need OSCEs to prepare them to manage real-life patients. Many students related their performance to how they perceived their role in OSCEs versus WBAs, noting that OSCEs allowed them more autonomy to manage the patient as opposed to an unfamiliar workplace. As suggested by activity theory, a student's performance can be driven by their motivation, which differed in the two contexts.
Affiliation(s)
- Angelina Lim
- Faculty of Pharmacy and Pharmaceutical Sciences, Monash University, Parkville, VIC 3052, Australia
- Sunanthiny Krishnan
- Department of Cardiovascular Sciences, University of Leicester, Glenfield Hospital, Leicester LE3 9QP, UK
- Harjit Singh
- Faculty of Pharmacy and Pharmaceutical Sciences, Monash University, Parkville, VIC 3052, Australia
- Simon Furletti
- Faculty of Pharmacy and Pharmaceutical Sciences, Monash University, Parkville, VIC 3052, Australia
- Mahbub Sarkar
- Monash Centre for Scholarship in Health Education, Faculty of Medicine and Nursing, Monash University, Clayton, VIC 3806, Australia
- Daniel Malone
- Faculty of Pharmacy and Pharmaceutical Sciences, Monash University, Parkville, VIC 3052, Australia
4
Parsons AS, Wijesekera TP, Olson APJ, Torre D, Durning SJ, Daniel M. Beyond thinking fast and slow: Implications of a transtheoretical model of clinical reasoning and error on teaching, assessment, and research. Medical Teacher 2024:1-12. PMID: 38835283. DOI: 10.1080/0142159x.2024.2359963.
Abstract
From dual process to a family of theories known collectively as situativity, both micro and macro theories of cognition inform our current understanding of clinical reasoning (CR) and error. CR is a complex process that occurs in a complex environment, and a nuanced, expansive, integrated model of these theories is necessary to fully understand how CR is performed in the present day and in the future. In this perspective, we present these individual theories along with figures and descriptive cases for purposes of comparison before exploring the implications of a transtheoretical model of these theories for teaching, assessment, and research in CR and error.
Affiliation(s)
- Andrew S Parsons
- Medicine and Public Health, University of Virginia School of Medicine, Charlottesville, VA, USA
- Andrew P J Olson
- Medicine and Pediatrics, Medical Education Outcomes Center, University of Minnesota Medical School, Minneapolis, MN, USA
- Dario Torre
- Medicine, University of Central Florida College of Medicine, Orlando, FL, USA
- Steven J Durning
- Medicine and Pathology, Uniformed Services University of the Health Sciences, Bethesda, MD, USA
- Michelle Daniel
- Emergency Medicine, University of California San Diego School of Medicine, San Diego, CA, USA
5
DeVaul N, Carroll MA, Brown KM. Creative Solutions for a Condensed Anatomy Course. J Physician Assist Educ 2024:01367895-990000000-00151. PMID: 38833302. DOI: 10.1097/jpa.0000000000000604.
Abstract
There are many variations of anatomy courses taught in accredited physician assistant (PA) programs in the United States. Course directors and program leadership must choose how to effectively deliver content within their program constraints. Our anatomy course has faced challenges related to instructional time for didactic and laboratory sessions, course length, curricular placement and alignment, assessments, and faculty availability. These challenges are not specific to anatomy courses in PA curricula but exist in anatomy courses in various health care programs. In this article, we present major solutions to challenges in didactic delivery, laboratory sessions, course content, and assessments over a 5-year period. Through modifications and problem-solving, we identified the following 4 lessons learned during this process: course alignment to clinical relevance, intentional content delivery for different pedagogical approaches, structured laboratory sessions with appropriate staffing, and appropriate weighting for assessments. These lessons and solutions will be useful to other anatomy and discipline-based course directors facing similar challenges.
Affiliation(s)
- Nicole DeVaul
- Nicole DeVaul, PhD, MA, is an associate professor in the School of Medicine and Health Sciences at The George Washington University, Washington, District of Columbia
- Melissa A Carroll
- Melissa A. Carroll, PhD, MS, is an associate professor in the School of Medicine and Health Sciences at The George Washington University, Washington, District of Columbia
- Kirsten M Brown
- Kirsten M. Brown, PhD, MA, is an associate professor in the School of Medicine and Health Sciences at The George Washington University, Washington, District of Columbia
6
Torre D, Daniel M, Ratcliffe T, Durning SJ, Holmboe E, Schuwirth L. Programmatic Assessment of Clinical Reasoning: New Opportunities to Meet an Ongoing Challenge. Teaching and Learning in Medicine 2024:1-9. PMID: 38794865. DOI: 10.1080/10401334.2024.2333921.
Abstract
Issue: Clinical reasoning is essential to physicians' competence, yet assessment of clinical reasoning remains a significant challenge. Clinical reasoning is a complex, evolving, non-linear, context-driven, and content-specific construct which arguably cannot be assessed at one point in time or with a single method. This has posed challenges for educators for many decades, despite significant development of individual assessment methods. Evidence: Programmatic assessment is a systematic assessment approach that is gaining momentum across health professions education. Programmatic assessment, and in particular assessment for learning, is well suited to address the challenges of clinical reasoning assessment. Several key principles of programmatic assessment are particularly well aligned with developing a system to assess clinical reasoning: longitudinality, triangulation, use of a mix of assessment methods, proportionality, implementation of intermediate evaluations/reviews with faculty coaches, use of assessment for feedback, and increase in learners' agency. Repeated exposure and measurement are critical to developing a clinical reasoning assessment narrative, so the assessment approach should optimally be longitudinal, providing multiple opportunities for growth and development. Triangulation provides a lens to assess the multidimensionality and contextuality of clinical reasoning and of its different, yet related, components, using a mix of assessment methods. Proportionality ensures the richness of information on which conclusions are drawn is commensurate with the stakes of the decision. Coaching facilitates the development of a feedback culture and allows assessment of growth over time, while enhancing learners' agency.
Implications: A programmatic assessment model of clinical reasoning that is developmentally oriented, optimizes learning through feedback and coaching, uses multiple assessment methods, and provides opportunity for meaningful triangulation of data can help address some of the challenges of clinical reasoning assessment.
Affiliation(s)
- Dario Torre
- Department of Medical Education, University of Central Florida, Orlando, FL, USA
- Michelle Daniel
- Department of Emergency Medicine, University of California, San Diego, CA, USA
- Temple Ratcliffe
- Department of Medicine, The Joe R and Teresa Lozano Long School of Medicine at University of Texas Health, Texas, USA
- Steven J Durning
- Center for Health Profession Education, Uniformed Services University Center for Neuroscience and Regenerative Medicine, Bethesda, Maryland, USA
- Eric Holmboe
- Milestones Development and Evaluation, Accreditation Council for Graduate Medical Education, Chicago, IL, USA
7
Alavarce DC, de Medeiros ML, de Araújo Viana D, Abade F, Vieira JE, Machado JLM, Collares CF. The progress test as a structuring initiative for programmatic assessment. BMC Medical Education 2024; 24:555. PMID: 38773470. PMCID: PMC11110289. DOI: 10.1186/s12909-024-05537-5.
Abstract
BACKGROUND The Progress Test is an individual assessment applied to all students at the same time and on a regular basis. The test was introduced across the undergraduate medical programmes of a conglomerate of schools to structure a programmatic assessment integrated into teaching. This paper presents the results of four serial applications of the progress test and the method of feedback to students. METHODS The assessment comprises 120 items offered online by means of a personal password. Items are authored by faculty, peer-reviewed, and approved by a committee of experts. Items are classified by five major areas, by topics used by the National Board of Medical Examiners, and by medical specialties related to the national Unified Health System. Scoring uses Item Response Theory, analysed with the Rasch model, which accounts for item difficulty. RESULTS Student participation increased across the four editions of the test, relative to the number of enrollments. Median performance increased in comparisons between sequential years in all tests, except for test 1, the first test offered to the schools. Between subsequent years of education (2nd-1st, 4th-3rd, and 5th-4th), there was an increase in median scores from progress tests 2 through 4. The final undergraduate year showed a limited increase compared to the 5th year. There is a consistent increase in the median, although with fluctuations between the observed intervals. CONCLUSION The progress test promoted the establishment of regular feedback among students, teachers, and coordinators, and paved the road to the engagement much needed to construct an institutional programmatic assessment.
8
Braund H, Dalgarno N, O’Dell R, Taylor DR. Making assessment a team sport: a qualitative study of facilitated group feedback in internal medicine residency. Canadian Medical Education Journal 2024; 15:14-26. PMID: 38827914. PMCID: PMC11139793. DOI: 10.36834/cmej.75250.
Abstract
Purpose Competency-based medical education relies on feedback from workplace-based assessment (WBA) to direct learning. Unfortunately, WBAs often lack rich narrative feedback and show bias towards Medical Expert aspects of care. Building on research examining interactive assessment approaches, the Queen's University Internal Medicine residency program introduced a facilitated, team-based assessment initiative ("Feedback Fridays") in July 2017, aimed at improving holistic assessment of resident performance on the inpatient medicine teaching units. In this study, we aim to explore how Feedback Fridays contributed to the formative assessment of Internal Medicine residents within our current model of competency-based training. Method A total of 53 residents participated in facilitated, biweekly group assessment sessions during the 2017-2018 academic year. Each session was a 30-minute facilitated assessment discussion conducted with one inpatient team, which included medical students, residents, and their supervising attending. Feedback from the discussion was collected, summarized, and documented in narrative form in electronic WBA forms for the residents by the program's assessment officer. For research purposes, verbatim transcripts of feedback sessions were analyzed thematically. Results The researchers identified four major themes for feedback: communication, intra- and inter-personal awareness, leadership and teamwork, and learning opportunities. Although feedback related to a broad range of activities, it showed a strong emphasis on competencies within the intrinsic CanMEDS roles. A clear formative focus in the feedback was another important finding. Conclusions The introduction of facilitated team-based assessment in the Queen's Internal Medicine program filled an important gap in WBA by providing learners with detailed feedback across all CanMEDS roles and by providing constructive recommendations for identified areas for improvement.
Affiliation(s)
- Heather Braund
- Office of Professional Development and Educational Scholarship, Ontario, Canada
- Faculty of Education, Queen’s University, Ontario, Canada
- Nancy Dalgarno
- Office of Professional Development and Educational Scholarship, Ontario, Canada
- Department of Biomedical and Molecular Sciences, Faculty of Health Sciences, Queen’s University, Ontario, Canada
- Rachel O’Dell
- Department of Internal Medicine, Faculty of Health Sciences, Queen’s University, Ontario, Canada
- David R Taylor
- Academy for Teachers and Educators, Department of Medicine, Queen’s University, Ontario, Canada
9
Barbagallo C, Osborne K, Dempsey C. Implementation of a programmatic assessment model in radiation oncology medical physics training. J Appl Clin Med Phys 2024; 25:e14354. PMID: 38620004. PMCID: PMC11087179. DOI: 10.1002/acm2.14354. Open access.
Abstract
PURPOSE In 2019, a formal review and update of the current training program for medical physics residents/registrars in Australasia was conducted. The purpose was to ensure the program met current local clinical and technological requirements, to improve standardization of training across Australia and New Zealand, and to generate a dynamic curriculum and programmatic assessment model. METHODS A four-phase project was initiated, including a consultant desktop review of the current program and stakeholder consultation. Overarching program outcomes on which to base the training model were developed, with content experts used to update the scientific content. Finally, assessment specialists reviewed a range of assessment models to determine appropriate assessment methods for each learning outcome, creating a model of programmatic assessment. RESULTS The first phase identified a need for increased standardized assessment incorporating programmatic assessment. Seven clear program outcome statements were generated and used to guide and underpin the new curriculum framework. The curriculum was expanded from the previous version to include emerging technologies, while removing previous duplication. Finally, a range of proposed assessments for learning outcomes in the curriculum was incorporated into the programmatic assessment model. These new assessment methods were structured to incorporate rubric scoring to provide meaningful feedback. CONCLUSIONS An updated training program for Radiation Oncology Medical Physics registrars/residents was released in Australasia. Scientific content from the previous program was used as a foundation and revised for currency, with the ability to accommodate a dynamic curriculum model. A programmatic model of assessment was created after comprehensive review and consultation. This new model provides more structured, ongoing assessment throughout the training period. It contains allowances for local bespoke assessment, and guidance for supervisors through the provision of marking templates and rubrics.
Affiliation(s)
- Cathy Barbagallo
- Australasian College of Physical Scientists and Engineers in Medicine (ACPSEM), Sydney, New South Wales, Australia
- Department of Radiation Oncology, Alfred Health, Prahran, Victoria, Australia
- Kristy Osborne
- Australian Council for Educational Research, Education Research Policy and Development Division, Camberwell, Victoria, Australia
- Claire Dempsey
- Australasian College of Physical Scientists and Engineers in Medicine (ACPSEM), Sydney, New South Wales, Australia
- Department of Radiation Oncology, Calvary Mater Newcastle, Waratah, New South Wales, Australia
- Department of Radiation Oncology, University of Washington, Seattle, Washington, USA
- School of Health Sciences, University of Newcastle, Callaghan, New South Wales, Australia
10
Becker M, Shields RK, Sass KJ. Psychometric Analysis of an Integrated Clinical Education Tool for Physical Therapists. Journal of Physical Therapy Education 2024:00001416-990000000-00108. PMID: 38684094. DOI: 10.1097/jte.0000000000000341.
Abstract
INTRODUCTION Integrated clinical education (ICE) courses require opportunities for practice, assessment of performance, and specific feedback. The purposes of this study were to 1) analyze the internal consistency of a tool for evaluating students during ICE courses, 2) examine the responsiveness of the tool between midterm and final assessments, and 3) develop a model to predict the final score from midterm assessments and explore relationships among the 6 domains. REVIEW OF LITERATURE Several clinical education assessment tools have been developed for terminal clinical experiences, but few have focused on the needs of learners during ICE. SUBJECTS Eighty-five student assessments were collected from 2 consecutive cohorts of physical therapist students in a first full-time ICE course. METHODS The tool contained 29 items within 6 domains. Items were rated on a 5-point scale from dependent to indirect supervision. Cronbach's alpha was used to analyze the internal consistency of the tool, whereas responsiveness was examined with a paired t-test and Cohen's d. A best subsets regression model was used to determine the combination of midterm variables that best predicted the final total scores. Coefficients of determination (R2) were calculated to explore the relationships among domains. RESULTS The tool was found to have high internal consistency at midterm and final assessment (α = 0.97 and 0.98, respectively). Mean scores increased over time for each domain score and for the total score (P < .001; d = 1.5). Scores in 3 midterm domains predicted more than 57% of the variance in the final total score. DISCUSSION AND CONCLUSION Results support the use of this tool to measure student performance and growth in a first full-time ICE course. Targeted measurement of students' abilities in ICE courses assists with differentiating the formative and summative learning needed to achieve academic success.
Affiliation(s)
- Marcie Becker
- Marcie Becker is the clinical assistant professor/codirector of clinical education in the Department of Physical Therapy and Rehabilitation Science at the University of Iowa
- Richard K Shields
- Richard K. Shields is the chair/department executive officer in the Department of Physical Therapy and Rehabilitation Science, University of Iowa, 1-252 Medical Education Building, Iowa City, IA. Please address all correspondence to Richard K. Shields
- Kelly J Sass
- Kelly J. Sass is the clinical assistant professor/codirector of clinical education in the Department of Physical Therapy and Rehabilitation Science at the University of Iowa
11
Fuentes-Cimma J, Sluijsmans D, Riquelme A, Villagran I, Isbej L, Olivares-Labbe MT, Heeneman S. Designing feedback processes in the workplace-based learning of undergraduate health professions education: a scoping review. BMC Medical Education 2024; 24:440. PMID: 38654360. PMCID: PMC11036781. DOI: 10.1186/s12909-024-05439-6.
Abstract
BACKGROUND Feedback processes are crucial for learning, guiding improvement, and enhancing performance. In workplace-based learning settings, diverse teaching and assessment activities are advocated to be designed and implemented, generating feedback that students use, with proper guidance, to close the gap between current and desired performance levels. Since productive feedback processes rely on observed information regarding a student's performance, it is imperative to establish structured feedback activities within undergraduate workplace-based learning settings. However, these settings are characterized by their unpredictable nature, which can either promote learning or present challenges in offering structured learning opportunities for students. This scoping review maps the literature on how feedback processes are organised in undergraduate clinical workplace-based learning settings, providing insight into the design and use of feedback. METHODS A scoping review was conducted. Studies were identified from seven databases and ten relevant journals in medical education. The screening process was performed independently in duplicate with the support of the StArt program. Data were organized in a data chart and analyzed using thematic analysis. The feedback loop with a sociocultural perspective was used as a theoretical framework. RESULTS The search yielded 4,877 papers, and 61 were included in the review. Two themes were identified in the qualitative analysis: (1) the organization of feedback processes in workplace-based learning settings, and (2) sociocultural factors influencing the organization of feedback processes. The literature describes multiple teaching and assessment activities that generate feedback information. Most papers described experiences and perceptions of diverse teaching and assessment feedback activities; few described how feedback processes improve performance. Sociocultural factors such as establishing a feedback culture, enabling stable and trustworthy relationships, and enhancing student feedback agency are crucial for productive feedback processes. CONCLUSIONS This review identified concrete ideas regarding how feedback could be organized within the clinical workplace to promote feedback processes. The feedback encounter should be organized to allow follow-up of the feedback, i.e., working on the required learning and performance goals at the next occasion. Educational programs should design feedback processes by appropriately planning subsequent tasks and activities. More insight is needed into designing a full-loop feedback process, with specific attention to effective feedforward practices.
Affiliation(s)
- Javiera Fuentes-Cimma
- Department of Health Sciences, Faculty of Medicine, Pontificia Universidad Católica de Chile, Avenida Vicuña Mackenna 4860, Macul, Santiago, Chile
- School of Health Professions Education, Maastricht University, Maastricht, Netherlands
- Arnoldo Riquelme
- Centre for Medical and Health Profession Education, Department of Gastroenterology, Faculty of Medicine, Pontificia Universidad Católica de Chile, Santiago, Chile
- Ignacio Villagran
- Department of Health Sciences, Faculty of Medicine, Pontificia Universidad Católica de Chile, Avenida Vicuña Mackenna 4860, Macul, Santiago, Chile
- Lorena Isbej
- School of Health Professions Education, Maastricht University, Maastricht, Netherlands
- School of Dentistry, Faculty of Medicine, Pontificia Universidad Católica de Chile, Santiago, Chile
- Sylvia Heeneman
- Department of Pathology, Faculty of Health, Medicine and Health Sciences, Maastricht University, Maastricht, Netherlands
12
Senior A, Starchuk C, Gaudet-Amigo G, Green J, Patterson S, Perez A. A novel model for curriculum design: Preparation, planning, prototyping, and piloting. European Journal of Dental Education 2024. [PMID: 38520077 DOI: 10.1111/eje.13004] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 03/28/2023] [Revised: 01/18/2024] [Accepted: 02/16/2024] [Indexed: 03/25/2024]
Abstract
Dental education continuously strives to provide students with positive and meaningful learning experiences. Developing or improving a curriculum usually encompasses three main phases: design, implementation, and evaluation. Most research on curriculum development in dental education has focused on the last two phases. Our commentary addresses this gap by describing a new model for curriculum design that effectively guided the design phase of the complete overhaul of the four-year Doctor of Dental Surgery curriculum at the School of Dentistry, University of Alberta. Built on the strengths of pre-existing curriculum design models, the new model provided enough structure and rigour to support the complexity required during a complete curriculum redesign whilst still allowing sufficient consultation and flexibility to encourage stakeholder engagement. The steps of the new 4P's model (preparation, planning, prototyping, and piloting) and main actions within each step are described. Challenges observed in each step and strategies to address them are reported. Other institutions embarking on renewing or redesigning a curriculum at a program level may benefit from using a curriculum design process similar to the 4P's model. Recommendations are discussed including the inclusion of educational consultants in the curriculum renewal committee, the importance of a leadership that effectively supports curriculum reform, purposeful engagement of stakeholders during each step of the design phase and ensuring that project and change management occur concurrently.
Affiliation(s)
- Anthea Senior
- School of Dentistry, University of Alberta, Edmonton, Alberta, Canada
- Colleen Starchuk
- Faculty of Education, University of Alberta, Edmonton, Alberta, Canada
- Jacqueline Green
- School of Dentistry, University of Alberta, Edmonton, Alberta, Canada
- Steven Patterson
- School of Dentistry, University of Alberta, Edmonton, Alberta, Canada
- Arnaldo Perez
- School of Dentistry, University of Alberta, Edmonton, Alberta, Canada
13
Ginsburg S, Stroud L, Brydges R, Melvin L, Hatala R. Dual purposes by design: exploring alignment between residents' and academic advisors' documents in a longitudinal program. Advances in Health Sciences Education 2024. [PMID: 38438699 DOI: 10.1007/s10459-024-10318-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 10/27/2023] [Accepted: 02/04/2024] [Indexed: 03/06/2024]
Abstract
Longitudinal academic advising (AA) and coaching programs are increasingly implemented in competency based medical education (CBME) to help residents reflect and act on the voluminous assessment data they receive. Documents created by residents for purposes of reflection are often used for a second, summative purpose-to help competence committees make decisions-which may be problematic. Using inductive, thematic analysis we analyzed written comments generated by 21 resident-AA dyads in one large internal medicine program who met over a 2 year period to determine what residents write when asked to reflect, how this aligns with what the AAs report, and what changes occur over time (total 109 resident self-reflections and 105 AA reports). Residents commented more on their developing autonomy, progress and improvement than AAs, who commented far more on performance measures. Over time, residents' writing shifted away from intrinsic roles, patient care and improvement towards what AAs focused on, including getting EPAs (entrustable professional activities), studying and exams. For EPAs, the emphasis was on getting sufficient numbers rather than reflecting on what residents were learning. Our findings challenge the practice of dual-purposing documents, by questioning the blurring of formative and summative intent, the structure of forms and their multiple conflicting purposes, and assumptions about the advising relationship over time. Our study suggests a need to re-evaluate how reflective documents are used in CBME programs. Further research should explore whether and how documentation can best be used to support resident growth and development.
Affiliation(s)
- Shiphra Ginsburg
- Department of Medicine, Mount Sinai Hospital, Toronto, ON, Canada
- Wilson Centre for Research in Education, University Health Network, Toronto, ON, Canada
- Lynfa Stroud
- Department of Medicine, Sunnybrook Health Sciences Centre, Toronto, ON, Canada
- Temerty Faculty of Medicine, University of Toronto, Toronto, ON, Canada
- Ryan Brydges
- Wilson Centre for Research in Education, University Health Network, Toronto, ON, Canada
- Temerty Faculty of Medicine, University of Toronto, Toronto, ON, Canada
- Li Ka Shing Knowledge Institute, St. Michael's Hospital, Unity Health Toronto, Toronto, ON, Canada
- Lindsay Melvin
- Temerty Faculty of Medicine, University of Toronto, Toronto, ON, Canada
- Department of Medicine, University Health Network, Toronto, ON, Canada
- Rose Hatala
- Department of Medicine, University of British Columbia, Vancouver, BC, Canada
- Centre for Health Education Scholarship, University of British Columbia, Vancouver, BC, Canada
14
Klein L, Bentley M, Moad D, Fielding A, Tapley A, van Driel M, Davey A, Mundy B, FitzGerald K, Taylor J, Norris R, Holliday E, Magin P. Perceptions of the effectiveness of using patient encounter data as an education and reflection tool in general practice training. J Prim Health Care 2024; 16:12-20. [PMID: 38546767 DOI: 10.1071/hc22158] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 01/09/2023] [Accepted: 05/18/2023] [Indexed: 04/02/2024]
Abstract
Introduction Patient encounter tools provide feedback and potentially reflection on general practitioner (GP) registrars' in-practice learning and may contribute to the formative assessment of clinical competencies. However, little is known about the perceived utility of such tools. Aim To investigate the perceived utility of a patient encounter tool by GP registrars, their supervisors, and medical educators (MEs). Methods General practice registrars, supervisors and MEs from two Australian regional training organisations completed a cross-sectional questionnaire. Registrars rated how Registrar Clinical Encounters in Training (ReCEnT), a patient encounter tool, influenced their reflection on, and change in, clinical practice, learning and training. Supervisors' and MEs' perceptions provided contextual information about understanding their registrars' clinical practice, learning and training needs. Results Questionnaires were completed by 48% of registrars (n = 90), 22% of supervisors (n = 182), and 61% of MEs (n = 62). Most registrars agreed that ReCEnT helped them reflect on their clinical practice (79%), learning needs (69%) and training needs (72%). Many registrars reported changing their clinical practice (54%) and learning approaches (51%). Fewer (37%) agreed that ReCEnT influenced them to change their training plans. Most supervisors (68%) and MEs (82%) agreed ReCEnT reports helped them better understand their registrars' clinical practice. Similarly, most supervisors (63%) and MEs (68%) agreed ReCEnT reports helped them better understand their registrars' learning and training needs. Discussion ReCEnT can prompt self-reflection among registrars, leading to changes in clinical practice, learning approaches and training plans. Reaching its potential as an assessment for learning (as opposed to an assessment of learning) requires effective engagement between registrars, their supervisors and MEs.
Affiliation(s)
- Linda Klein
- School of Medicine and Public Health, Faculty of Health and Medicine, University of Newcastle, University Drive, Callaghan, NSW 2308, Australia; and GP Synergy, NSW and ACT Research and Evaluation Unit, Level 1, 20 McIntosh Drive, Mayfield West, NSW 2304, Australia
- Michael Bentley
- General Practice Training Tasmania, Level 3, RACT House, 179 Murray Street, Hobart, Tas. 7000, Australia
- Dominica Moad
- School of Medicine and Public Health, Faculty of Health and Medicine, University of Newcastle, University Drive, Callaghan, NSW 2308, Australia; and GP Synergy, NSW and ACT Research and Evaluation Unit, Level 1, 20 McIntosh Drive, Mayfield West, NSW 2304, Australia
- Alison Fielding
- School of Medicine and Public Health, Faculty of Health and Medicine, University of Newcastle, University Drive, Callaghan, NSW 2308, Australia; and GP Synergy, NSW and ACT Research and Evaluation Unit, Level 1, 20 McIntosh Drive, Mayfield West, NSW 2304, Australia
- Amanda Tapley
- School of Medicine and Public Health, Faculty of Health and Medicine, University of Newcastle, University Drive, Callaghan, NSW 2308, Australia; and GP Synergy, NSW and ACT Research and Evaluation Unit, Level 1, 20 McIntosh Drive, Mayfield West, NSW 2304, Australia
- Mieke van Driel
- General Practice Clinical Unit, Faculty of Medicine, The University of Queensland, 288 Herston Road, Brisbane, Qld 4006, Australia
- Andrew Davey
- School of Medicine and Public Health, Faculty of Health and Medicine, University of Newcastle, University Drive, Callaghan, NSW 2308, Australia; and GP Synergy, NSW and ACT Research and Evaluation Unit, Level 1, 20 McIntosh Drive, Mayfield West, NSW 2304, Australia
- Ben Mundy
- School of Medicine and Public Health, Faculty of Health and Medicine, University of Newcastle, University Drive, Callaghan, NSW 2308, Australia; and GP Synergy, NSW and ACT Research and Evaluation Unit, Level 1, 20 McIntosh Drive, Mayfield West, NSW 2304, Australia
- Kristen FitzGerald
- General Practice Training Tasmania, Level 3, RACT House, 179 Murray Street, Hobart, Tas. 7000, Australia
- Jennifer Taylor
- GP Synergy, NSW and ACT Research and Evaluation Unit, Level 1, 20 McIntosh Drive, Mayfield West, NSW 2304, Australia
- Racheal Norris
- School of Medicine and Public Health, Faculty of Health and Medicine, University of Newcastle, University Drive, Callaghan, NSW 2308, Australia; and GP Synergy, NSW and ACT Research and Evaluation Unit, Level 1, 20 McIntosh Drive, Mayfield West, NSW 2304, Australia
- Elizabeth Holliday
- School of Medicine and Public Health, Faculty of Health and Medicine, University of Newcastle, University Drive, Callaghan, NSW 2308, Australia
- Parker Magin
- School of Medicine and Public Health, Faculty of Health and Medicine, University of Newcastle, University Drive, Callaghan, NSW 2308, Australia; and GP Synergy, NSW and ACT Research and Evaluation Unit, Level 1, 20 McIntosh Drive, Mayfield West, NSW 2304, Australia
15
Oswald A, Dubois D, Snell L, Anderson R, Karpinski J, Hall AK, Frank JR, Cheung WJ. Implementing Competence Committees on a National Scale: Design and Lessons Learned. Perspectives on Medical Education 2024; 13:56-67. [PMID: 38343555 PMCID: PMC10854462 DOI: 10.5334/pme.961] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 03/08/2023] [Accepted: 07/03/2023] [Indexed: 02/15/2024]
Abstract
Competence committees (CCs) are a recent innovation to improve assessment decision-making in health professions education. CCs enable a group of trained, dedicated educators to review a portfolio of observations about a learner's progress toward competence and make systematic assessment decisions. CCs are aligned with competency based medical education (CBME) and programmatic assessment. While there is an emerging literature on CCs, little has been published on their system-wide implementation. National-scale implementation of CCs is complex, owing to the culture change that underlies this shift in assessment paradigm and the logistics and skills needed to enable it. We present the Royal College of Physicians and Surgeons of Canada's experience implementing a national CC model, the challenges the Royal College faced, and some strategies to address them. With large scale CC implementation, managing the tension between standardization and flexibility is a fundamental issue that needs to be anticipated and addressed, with careful consideration of individual program needs, resources, and engagement of invested groups. If implementation is to take place in a wide variety of contexts, an approach that uses multiple engagement and communication strategies to allow for local adaptations is needed. Large-scale implementation of CCs, like any transformative initiative, does not occur at a single point but is an evolutionary process requiring both upfront resources and ongoing support. As such, it is important to consider embedding a plan for program evaluation at the outset. We hope these shared lessons will be of value to other educators who are considering a large-scale CBME CC implementation.
Affiliation(s)
- Anna Oswald
- Division of Rheumatology, Department of Medicine, Faculty of Medicine and Dentistry, University of Alberta, Edmonton, AB, Canada
- Competency Based Medical Education, University of Alberta, Edmonton, AB, Canada
- Royal College of Physicians and Surgeons of Canada, Ottawa, ON, Canada
- 8-130 Clinical Sciences Building, 11350-83 Avenue, Edmonton, AB, Canada
- Daniel Dubois
- Royal College of Physicians and Surgeons of Canada, Ottawa, ON, Canada
- Department of Anesthesiology and Pain Medicine, University of Ottawa, Ottawa, ON, Canada
- Linda Snell
- Royal College of Physicians and Surgeons of Canada, Ottawa, ON, Canada
- Institute of Health Sciences Education and Department of Medicine, McGill University, Montreal, QC, Canada
- Robert Anderson
- Royal College of Physicians and Surgeons of Canada, Ottawa, ON, Canada
- Northern Ontario School of Medicine University, Sudbury, ON, Canada
- Jolanta Karpinski
- Royal College of Physicians and Surgeons of Canada, Ottawa, ON, Canada
- Department of Medicine, University of Ottawa, Ottawa, ON, Canada
- Andrew K. Hall
- Royal College of Physicians and Surgeons of Canada, Ottawa, ON, Canada
- Department of Emergency Medicine, University of Ottawa, Canada
- Jason R. Frank
- Centre for Innovation in Medical Education, Faculty of Medicine, University of Ottawa, Canada
- Warren J. Cheung
- Department of Emergency Medicine, University of Ottawa, Canada
- Royal College of Physicians and Surgeons of Canada, 1053 Carling Avenue, Rm F660, Ottawa, Canada
16
Cheung WJ, Bhanji F, Gofton W, Hall AK, Karpinski J, Richardson D, Frank JR, Dudek N. Design and Implementation of a National Program of Assessment Model - Integrating Entrustable Professional Activity Assessments in Canadian Specialist Postgraduate Medical Education. Perspectives on Medical Education 2024; 13:44-55. [PMID: 38343554 PMCID: PMC10854461 DOI: 10.5334/pme.956] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 03/08/2023] [Accepted: 09/04/2023] [Indexed: 02/15/2024]
Abstract
Traditional approaches to assessment in health professions education systems, which have generally focused on the summative function of assessment through the development and episodic use of individual high-stakes examinations, may no longer be appropriate in an era of competency based medical education. Contemporary assessment programs should not only ensure collection of high-quality performance data to support robust decision-making on learners' achievement and competence development but also facilitate the provision of meaningful feedback to learners to support reflective practice and performance improvement. Programmatic assessment is a specific approach to designing assessment systems through the intentional selection and combination of a variety of assessment methods and activities embedded within an educational framework to simultaneously optimize the decision-making and learning function of assessment. It is a core component of competency based medical education and is aligned with the goals of promoting assessment for learning and coaching learners to achieve predefined levels of competence. In Canada, postgraduate specialist medical education has undergone a transformative change to a competency based model centred around entrustable professional activities (EPAs). In this paper, we describe and reflect on the large scale, national implementation of a program of assessment model designed to guide learning and ensure that robust data is collected to support defensible decisions about EPA achievement and progress through training. Reflecting on the design and implications of this assessment system may help others who want to incorporate a competency based approach in their own country.
Affiliation(s)
- Warren J. Cheung
- Department of Emergency Medicine, University of Ottawa, Ottawa, ON, CA
- Royal College of Physicians and Surgeons of Canada, 1053 Carling Avenue, Rm F660, Ottawa, ON K1Y 4E9, CA
- Farhan Bhanji
- Department of Pediatrics (Critical Care), Faculty of Medicine and Health Sciences, McGill University, Montreal, QC, CA
- Royal College of Physicians and Surgeons of Canada, Ottawa, ON, CA
- Wade Gofton
- Royal College of Physicians and Surgeons of Canada, Ottawa, ON, CA
- Department of Surgery, Division of Orthopaedic Surgery, University of Ottawa, Ottawa, ON, CA
- Andrew K. Hall
- Department of Emergency Medicine, University of Ottawa, Ottawa, ON, CA
- Royal College of Physicians and Surgeons of Canada, Ottawa, ON, CA
- Jolanta Karpinski
- Royal College of Physicians and Surgeons of Canada, Ottawa, ON, CA
- Department of Medicine, University of Ottawa, Ottawa, ON, CA
- Denyse Richardson
- Royal College of Physicians and Surgeons of Canada, Ottawa, ON, CA
- Department of Physical Medicine and Rehabilitation, Queen’s University, Kingston, ON, CA
- Jason R. Frank
- Department of Emergency Medicine, Director, Centre for Innovation in Medical Education, Faculty of Medicine, University of Ottawa, Ottawa, ON, CA
- Nancy Dudek
- Royal College of Physicians and Surgeons of Canada, Ottawa, ON, CA
- Department of Medicine, Division of Physical Medicine and Rehabilitation, University of Ottawa, Ottawa, ON, CA
17
Busari JO, Diffey L, Hauer KE, Lomis KD, Amiel JM, Barone MA, Schultz K, Chen HC, Damodaran A, Turner DA, Jones B, Oandasan I, Chan MK. Advancing anti-oppression and social justice in healthcare through competency-based medical education (CBME). Medical Teacher 2024:1-8. [PMID: 38215046 DOI: 10.1080/0142159x.2023.2298763] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 11/13/2023] [Accepted: 12/20/2023] [Indexed: 01/14/2024]
Abstract
Competency-based medical education (CBME) focuses on preparing physicians to improve the health of patients and populations. In the context of ongoing health disparities worldwide, medical educators must implement CBME in ways that advance social justice and anti-oppression. In this article, authors describe how CBME can be implemented to promote equity pedagogy, an approach to education in which curricular design, teaching, assessment strategies, and learning environments support learners from diverse groups to be successful. The five core components of CBME programs - outcomes competency framework, progressive sequencing of competencies, learning experiences tailored to learners' needs, teaching focused on competencies, and programmatic assessment - enable individualization of learning experiences and teaching and encourage learners to partner with their teachers in driving their learning. These educational approaches appreciate each learner's background, experiences, and strengths. Using an exemplar case study, the authors illustrate how CBME can afford opportunities to enhance anti-oppression and social justice in medical education and promote each learner's success in meeting the expected outcomes of training. The authors provide recommendations for individuals and institutions implementing CBME to enact equity pedagogy.
Affiliation(s)
- Jamiu O Busari
- Department of Educational Development and Research, Faculty of Health, Medicine and Life Sciences, Maastricht University, Maastricht, The Netherlands
- Department of Pediatrics, Dr. Horacio Oduber Hospital, Oranjestad, Aruba
- Linda Diffey
- Community Health Sciences, Max Rady College of Medicine, University of Manitoba, Winnipeg, Canada
- Karen E Hauer
- University of California, San Francisco School of Medicine, San Francisco, CA, USA
- Jonathan M Amiel
- Office of Innovation in Health Professions Education and Department of Psychiatry, Columbia University Vagelos College of Physicians and Surgeons, New York, NY, USA
- Michael A Barone
- NBME, Philadelphia, PA, USA
- Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Karen Schultz
- PGME, Queen's University, Kingston, Canada
- Department of Family Medicine, Queen's University, Kingston, Canada
- H Carrie Chen
- Georgetown University School of Medicine, Washington, DC, USA
- Arvin Damodaran
- School of Clinical Medicine, Faculty of Medicine and Health, UNSW Sydney, Sydney, Australia
- David A Turner
- Department of Pediatrics, Division of Pediatric Critical Care, Duke Health System, Durham, NC, USA
- Competency-Based Medical Education, American Board of Pediatrics, Chapel Hill, NC, USA
- Benjamin Jones
- Health Systems Collaborative, Nuffield Department of Medicine, Oxford, UK
- Ivy Oandasan
- Toronto General Hospital Research Institute (TGHRI), Toronto, Canada
- Ming-Ka Chan
- Department of Pediatrics & Child Health, Office of Leadership Education, Rady Faculty of Health Sciences and Equity, Diversity, Inclusivity and Social Justice Lead, University of Manitoba and The Children's Hospital of Winnipeg, Winnipeg, Canada
18
Mitchell EC, Ott M, Ross D, Grant A. Development of a Tool to Assess Surgical Resident Competence On-Call: The Western University Call Assessment Tool (WUCAT). Journal of Surgical Education 2024; 81:106-114. [PMID: 38008642 DOI: 10.1016/j.jsurg.2023.10.001] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 07/10/2023] [Revised: 09/13/2023] [Accepted: 10/02/2023] [Indexed: 11/28/2023]
Abstract
BACKGROUND A central tenet of competency-based medical education is the formative assessment of trainees. There are currently no assessments designed to examine resident competence on-call, despite the on-call period being a significant component of residency, characterized by less direct supervision compared to daytime. The purpose of this study was to design a formative on-call assessment tool and collect valid evidence on its application. METHODS Nominal group technique was used to identify critical elements of surgical resident competence on-call to inform tool development. The tool was piloted over six months in the Division of Plastic & Reconstructive Surgery at our institution. Quantitative and qualitative evidence was collected to examine tool validity. RESULTS A ten-item tool was developed based on the consensus group results. Sixty-three assessments were completed by seven staff members on ten residents during the pilot. The tool had a reliability coefficient of 0.67 based on a generalizability study and internal item consistency was 0.92. Scores were significantly associated with years of training. We found the tool improved the quantity and structure of feedback given and that the tool was considered feasible and acceptable by both residents and staff members. CONCLUSIONS The Western University Call Assessment Tool (WUCAT) has multiple sources of evidence supporting its use in assessing resident competence on-call.
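The "internal item consistency" of 0.92 reported above is the kind of figure usually obtained with Cronbach's alpha. As an illustration only (the ratings below are invented; the study's item-level WUCAT scores are not reproduced here), alpha for a multi-item rating tool can be computed as:

```python
# Illustrative computation of Cronbach's alpha for a multi-item rating tool.
# The ratings are invented; they are NOT data from the WUCAT study above.

def cronbach_alpha(scores):
    """scores: list of rows, one per completed assessment, one score per item."""
    k = len(scores[0])  # number of items on the tool

    def variance(xs):  # population variance (scaling cancels in the ratio)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    item_vars = [variance([row[i] for row in scores]) for i in range(k)]
    total_var = variance([sum(row) for row in scores])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Three hypothetical residents rated on a four-item scale (1-5):
ratings = [
    [4, 4, 5, 4],
    [3, 3, 3, 2],
    [5, 4, 5, 5],
]
print(round(cronbach_alpha(ratings), 2))  # -> 0.95
```

Values near 0.9, like the 0.92 above, indicate the items rise and fall together across assessments, i.e. they appear to measure one underlying construct.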
Affiliation(s)
- Eric C Mitchell
- Department of Surgery, Western University, London, Ontario, Canada
- Michael Ott
- Department of Surgery, Western University, London, Ontario, Canada
- Douglas Ross
- Department of Surgery, Western University, London, Ontario, Canada
- Aaron Grant
- Department of Surgery, Western University, London, Ontario, Canada
19
Thayer T. Be prudent with resources. Br Dent J 2024; 236:79. [PMID: 38278880 DOI: 10.1038/s41415-024-6768-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 12/04/2023] [Accepted: 12/15/2023] [Indexed: 01/28/2024]
Affiliation(s)
- T Thayer
- Liverpool University Dental School and Hospital, Pembroke Place, Liverpool, L3 5PS, UK.
20
Black EP, Jones M, Jones M, Williams H, Julian E, Wilson DR. Validation of Longitudinal Progression Examinations for Prediction of APPE Readiness. American Journal of Pharmaceutical Education 2023; 87:100137. [PMID: 38097311 DOI: 10.1016/j.ajpe.2023.100137] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 08/24/2022] [Revised: 05/12/2023] [Accepted: 05/22/2023] [Indexed: 12/18/2023]
Abstract
OBJECTIVE To study curricular outcomes for the purpose of holistic improvement of the curriculum. METHODS A single-institution retrospective cohort study followed 3 cohorts of Doctor of Pharmacy students from entry into the program through their performance in Advanced Pharmacy Practice Experience (APPE) rotations. Assessment scores and pass/fail outcomes were collected from the 3 examinations to use as predictors, and the numbers of "needs improvement" (NI) and "unsatisfactory" (U) ratings from preceptors during the APPE rotations served as outcome measures. RESULTS Pharmacy mathematics competency and Milemarker 1 (MM1) examination first-time scores, but not those from Milemarker 2 (MM2), were significantly associated with NI or U scores on required APPE rotations. Significant correlations for all examinations (pharmacy mathematics competency, MM1, and MM2) were found for the Acute Care/Inpatient APPE rotation for each cohort and the combined cohorts. Significant correlations were also found between all examinations and the APPE rotation courses Advanced Hospital and Ambulatory Care, with the exception of the 2021 cohort. Performance in the Advanced Community rotation was not associated with any of the examinations. MM1 and MM2 are both reliable measures of competence in our didactic curriculum and predictive of scoring an NI or U rating in the APPE Acute Care/Inpatient rotation. CONCLUSION The longitudinal milestone examinations used in our institution provide a mechanism to identify students likely to struggle in required APPE rotations and target them for remediation activities.
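The associations reported above are bivariate correlations between examination scores and preceptor ratings. A minimal sketch of such a check, using invented numbers rather than the Kentucky cohort data (the scores and NI counts below are hypothetical):

```python
# Illustrative correlation between an earlier exam score and the number of
# "needs improvement" (NI) ratings later received on rotations.
# All numbers are invented; they are not data from the study above.
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

exam_scores = [62, 70, 75, 81, 88, 93]  # hypothetical first-time scores (%)
ni_counts   = [3, 2, 2, 1, 0, 0]        # hypothetical NI ratings on rotations

r = pearson_r(exam_scores, ni_counts)
print(round(r, 2))  # strongly negative: higher scores, fewer NI ratings
```

A strongly negative r is what would make an early examination useful for flagging students likely to accumulate NI or U ratings later.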
Affiliation(s)
- Esther P Black
- University of Kentucky, College of Pharmacy, Lexington, KY, USA
- Mandy Jones
- University of Kentucky, College of Pharmacy, Lexington, KY, USA
- Mikael Jones
- University of Kentucky, College of Pharmacy, Lexington, KY, USA
- Houston Williams
- University of Kentucky, College of Pharmacy, Lexington, KY, USA
21
Donker EM, Osmani H, Brinkman DJ, van Rosse F, Janssen B, Knol W, Dumont G, Jorens PG, Dupont A, Christiaens T, van Smeden J, de Waard-Siebinga I, Peeters LEJ, Goorden R, Hessel M, Lissenberg-Witte BI, Richir MC, van Agtmael MA, Kramers C, Tichelaar J. The impact of a summative national prescribing assessment and curriculum type on the development of the prescribing competence of junior doctors. Eur J Clin Pharmacol 2023; 79:1613-1621. [PMID: 37737911 PMCID: PMC10663181 DOI: 10.1007/s00228-023-03567-4] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Received: 07/31/2023] [Accepted: 09/13/2023] [Indexed: 09/23/2023]
Abstract
PURPOSE The primary aim of this study was to investigate the effect of including the Dutch National Pharmacotherapy Assessment (DNPA) in the medical curriculum on the level and development of prescribing knowledge and skills of junior doctors. The secondary aim was to evaluate the relationship between curriculum type and the prescribing competence of junior doctors. METHODS We re-analysed data from a longitudinal study conducted in 2016 involving recently graduated junior doctors from 11 medical schools across the Netherlands and Belgium. Participants completed three assessments during the first year after graduation (around graduation (±4 weeks), 6 months after graduation, and 1 year after graduation), each of which contained 35 multiple-choice questions (MCQs) assessing knowledge and three clinical case scenarios assessing skills. Only one medical school used the DNPA in its medical curriculum; the other medical schools used conventional means to assess prescribing knowledge and skills. Five medical schools were classified as providing solely theoretical clinical pharmacology and therapeutics (CPT) education; the others provided both theoretical and practical CPT education (mixed curriculum). RESULTS Of the 1584 invited junior doctors, 556 (35.1%) participated; 326 (58.6%) completed the MCQs and 325 (58.5%) the clinical case scenarios in all three assessments. Junior doctors whose medical curriculum included the DNPA had higher knowledge scores than the other junior doctors at all three assessments (76.7% [SD 12.5] vs. 67.8% [SD 12.6]; 81.8% [SD 11.1] vs. 76.1% [SD 11.1]; 77.0% [SD 12.1] vs. 70.6% [SD 14.0]; all p < 0.05). There was no difference in skills scores at the moment of graduation (p = 0.110), but at 6 and 12 months junior doctors whose medical curriculum included the DNPA had higher skills scores (both p < 0.001). Junior doctors educated with a mixed curriculum had significantly higher scores for both knowledge and skills than junior doctors educated with a solely theoretical curriculum (p < 0.05 in all assessments). CONCLUSION Our findings suggest that the inclusion of the knowledge-focused DNPA in the medical curriculum improves the prescribing knowledge, but not the skills, of junior doctors at the moment of graduation. However, at 6 and 12 months, both knowledge and skills were higher in the junior doctors whose medical curriculum included the DNPA. A curriculum that provides both theoretical and practical education appears to improve both prescribing knowledge and skills relative to a solely theoretical curriculum.
Affiliation(s)
- Erik M Donker
- Unit Pharmacotherapy, Department of Internal Medicine, Amsterdam UMC, Location VUmc, De Boelelaan 1117, 1081 HV, Amsterdam, The Netherlands
- Research and Expertise Centre in Pharmacotherapy Education (RECIPE), Amsterdam, The Netherlands
- Hayaudin Osmani
- Unit Pharmacotherapy, Department of Internal Medicine, Amsterdam UMC, Location VUmc, De Boelelaan 1117, 1081 HV, Amsterdam, The Netherlands
- David J Brinkman
- Unit Pharmacotherapy, Department of Internal Medicine, Amsterdam UMC, Location VUmc, De Boelelaan 1117, 1081 HV, Amsterdam, The Netherlands
- Research and Expertise Centre in Pharmacotherapy Education (RECIPE), Amsterdam, The Netherlands
- Floor van Rosse
- Department of Hospital Pharmacy, Erasmus MC, University Medical Center Rotterdam, Rotterdam, The Netherlands
- Ben Janssen
- Department of Pharmacology and Toxicology, Maastricht University, Maastricht, The Netherlands
- Wilma Knol
- Department of Geriatric Medicine, University Medical Center Utrecht, Utrecht University, Utrecht, The Netherlands
- Glenn Dumont
- Department of Hospital Pharmacy and Clinical Pharmacology, Amsterdam UMC, Location AMC, Amsterdam, The Netherlands
- Philippe G Jorens
- Department of Pharmacotherapy, Antwerp University Hospital, University of Antwerp, Antwerp, Belgium
- Alain Dupont
- Department of Clinical Pharmacology, Free University of Brussels (VUB), Brussels, Belgium
- Thierry Christiaens
- Clinical Pharmacology, Department of Basic and Applied Medical Sciences, Ghent University, Ghent, Belgium
- Jeroen van Smeden
- Department of Education, Centre for Human Drug Research, Leiden, The Netherlands
- Department of Clinical Pharmacy and Toxicology, Leiden University Medical Center, Leiden, The Netherlands
- Itte de Waard-Siebinga
- Department of Clinical Pharmacy and Pharmacology, University Medical Center Groningen, Groningen, The Netherlands
- Laura E J Peeters
- Department of Hospital Pharmacy, Erasmus MC, University Medical Center Rotterdam, Rotterdam, The Netherlands
- Department of Internal Medicine, Erasmus MC, University Medical Center Rotterdam, Rotterdam, The Netherlands
- Ronald Goorden
- Radboud University Medical Center, Nijmegen, The Netherlands
- Marleen Hessel
- Department of Clinical Pharmacy and Toxicology, Leiden University Medical Center, Leiden, The Netherlands
- Birgit I Lissenberg-Witte
- Department of Epidemiology and Data Science, Amsterdam UMC, Location VUmc, Amsterdam, The Netherlands
- Milan C Richir
- Unit Pharmacotherapy, Department of Internal Medicine, Amsterdam UMC, Location VUmc, De Boelelaan 1117, 1081 HV, Amsterdam, The Netherlands
- Research and Expertise Centre in Pharmacotherapy Education (RECIPE), Amsterdam, The Netherlands
- Department of Surgery, University Medical Center Utrecht, Utrecht, The Netherlands
- Michiel A van Agtmael
- Unit Pharmacotherapy, Department of Internal Medicine, Amsterdam UMC, Location VUmc, De Boelelaan 1117, 1081 HV, Amsterdam, The Netherlands
- Research and Expertise Centre in Pharmacotherapy Education (RECIPE), Amsterdam, The Netherlands
- Cornelis Kramers
- Pharmacology-Toxicology and Internal Medicine, Radboud University Medical Center, Nijmegen, The Netherlands
- Jelle Tichelaar
- Unit Pharmacotherapy, Department of Internal Medicine, Amsterdam UMC, Location VUmc, De Boelelaan 1117, 1081 HV, Amsterdam, The Netherlands
- Research and Expertise Centre in Pharmacotherapy Education (RECIPE), Amsterdam, The Netherlands
22
Shimizu I, Kasai H, Shikino K, Araki N, Takahashi Z, Onodera M, Kimura Y, Tsukamoto T, Yamauchi K, Asahina M, Ito S, Kawakami E. Developing Medical Education Curriculum Reform Strategies to Address the Impact of Generative AI: Qualitative Study. JMIR Medical Education 2023; 9:e53466. [PMID: 38032695] [PMCID: PMC10722362] [DOI: 10.2196/53466]
Abstract
BACKGROUND Generative artificial intelligence (GAI), exemplified by large language models, has the potential to transform health care and medical education. In particular, GAI's impact on higher education may change both students' learning experience and faculty's teaching. However, concerns have been raised about ethical considerations and the decreased reliability of existing examinations. Furthermore, in medical education, curriculum reform is required to adapt to the revolutionary changes brought about by the integration of GAI into medical practice and research. OBJECTIVE This study analyzes the impact of GAI on medical education curricula and explores strategies for adaptation. METHODS The study was conducted in the context of faculty development at a medical school in Japan. A workshop involving faculty and students was organized, and participants were divided into groups to address two research questions: (1) How does GAI affect undergraduate medical education curricula? and (2) How should medical school curricula be reformed to address the impact of GAI? The strengths, weaknesses, opportunities, and threats (SWOT) framework was applied, and cross-SWOT matrix analysis was used to devise strategies. In addition, 4 researchers conducted content analysis on the data generated during the workshop discussions. RESULTS Data were collected from 8 groups comprising 55 participants. Five themes about the impact of GAI on medical education curricula emerged: improvement of teaching and learning, improved access to information, inhibition of existing learning processes, problems in GAI, and changes in physicians' professionalism. Positive impacts included enhanced teaching and learning efficiency and improved access to information, whereas negative impacts included concerns about reduced independent thinking and the adaptability of existing assessment methods. GAI was also perceived to change the nature of physicians' expertise. Three themes emerged from the cross-SWOT analysis for curriculum reform: (1) learning about GAI, (2) learning with GAI, and (3) learning aside from GAI. Participants recommended incorporating GAI literacy, ethical considerations, and compliance into the curriculum. Learning with GAI involved improving learning efficiency, supporting information gathering and dissemination, and facilitating patient involvement. Learning aside from GAI emphasized maintaining GAI-free learning processes, fostering higher cognitive domains of learning, and introducing more communication exercises. CONCLUSIONS This study highlights the profound impact of GAI on medical education curricula and provides insights into curriculum reform strategies. Participants recognized the need for GAI literacy, ethical education, and adaptive learning. GAI was also recognized as a tool that can enhance efficiency and involve patients in education. The study further suggests that medical education should focus on competencies that GAI can hardly replace, such as clinical experience and communication. Notably, involving both faculty and students in curriculum reform discussions fosters a sense of ownership and ensures that broader perspectives are encompassed.
Affiliation(s)
- Ikuo Shimizu
- Department of Medical Education, Graduate School of Medicine, Chiba University, Chiba, Japan
- Hajime Kasai
- Department of Medical Education, Graduate School of Medicine, Chiba University, Chiba, Japan
- Kiyoshi Shikino
- Health Professional Development Center, Chiba University Hospital, Chiba, Japan
- Department of Community-Oriented Medical Education, Graduate School of Medicine, Chiba University, Chiba, Japan
- Nobuyuki Araki
- Department of Medical Education, Graduate School of Medicine, Chiba University, Chiba, Japan
- Zaiya Takahashi
- Department of Medical Education, Graduate School of Medicine, Chiba University, Chiba, Japan
- Misaki Onodera
- Department of Medical Education, Graduate School of Medicine, Chiba University, Chiba, Japan
- Yasuhiko Kimura
- Health Professional Development Center, Chiba University Hospital, Chiba, Japan
- Tomoko Tsukamoto
- Department of Medical Education, Graduate School of Medicine, Chiba University, Chiba, Japan
- Kazuyo Yamauchi
- Health Professional Development Center, Chiba University Hospital, Chiba, Japan
- Department of Community-Oriented Medical Education, Graduate School of Medicine, Chiba University, Chiba, Japan
- Mayumi Asahina
- Health Professional Development Center, Chiba University Hospital, Chiba, Japan
- Shoichi Ito
- Department of Medical Education, Graduate School of Medicine, Chiba University, Chiba, Japan
- Health Professional Development Center, Chiba University Hospital, Chiba, Japan
- Eiryo Kawakami
- Department of Artificial Intelligence Medicine, Graduate School of Medicine, Chiba University, Chiba, Japan
23
Rath A. Back to basics: reflective take on role of MCQs in undergraduate Malaysian dental professional qualifying exams. Front Med (Lausanne) 2023; 10:1287924. [PMID: 38098841] [PMCID: PMC10719850] [DOI: 10.3389/fmed.2023.1287924]
Affiliation(s)
- Avita Rath
- Faculty of Dentistry, SEGi University, Petaling Jaya, Selangor, Malaysia
- Edinburgh Medical School- Clinical Education, University of Edinburgh, Edinburgh, United Kingdom
24
Wood TJ, Daniels VJ, Pugh D, Touchie C, Halman S, Humphrey-Murto S. Implicit versus explicit first impressions in performance-based assessment: will raters overcome their first impressions when learner performance changes? Advances in Health Sciences Education: Theory and Practice 2023. [PMID: 38010576] [DOI: 10.1007/s10459-023-10302-2]
Abstract
First impressions can influence rater-based judgments, but their contribution to rater bias is unclear. Research suggests raters can overcome first impressions in experimental exam contexts with explicit first impressions, but these findings may not generalize to a workplace context with implicitly formed first impressions. The study had two aims: first, to assess whether first impressions affect raters' judgments when workplace performance changes; second, to determine whether explicitly stating these impressions affects subsequent ratings compared with implicitly formed first impressions. Physician raters viewed six videos in which learner performance either changed (Strong to Weak or Weak to Strong) or remained consistent. Raters were assigned to two groups. Group one (n = 23, Explicit) made a first impression global rating (FIGR), then scored learners using the Mini-CEX. Group two (n = 22, Implicit) scored learners at the end of the video solely with the Mini-CEX. For the Explicit group, in the Strong to Weak condition, the FIGR (M = 5.94) was higher than the Mini-CEX global rating (GR) (M = 3.02, p < .001). In the Weak to Strong condition, the FIGR (M = 2.44) was lower than the Mini-CEX GR (M = 3.96, p < .001). There was no difference between the FIGR and the Mini-CEX GR in the consistent condition (M = 6.61 and M = 6.65, respectively, p = .84). There were no statistically significant differences in any of the conditions when comparing the two groups' Mini-CEX GRs. Therefore, raters adjusted their judgments based on the learners' performances. Furthermore, raters who made their first impressions explicit showed similar rater bias to raters who followed a more naturalistic process.
Affiliation(s)
- Timothy J Wood
- Faculty of Medicine, University of Ottawa, 850 Peter Morand Crescent, Ottawa, ON, K1G 5Z3, Canada
- Vijay J Daniels
- Department of Medicine, Faculty of Medicine and Dentistry, University of Alberta, Edmonton, Canada
- Debra Pugh
- Faculty of Medicine, University of Ottawa, 850 Peter Morand Crescent, Ottawa, ON, K1G 5Z3, Canada
- Department of Medicine, The Ottawa Hospital, Ottawa, Canada
- Medical Council of Canada, Ottawa, Canada
- Claire Touchie
- Faculty of Medicine, University of Ottawa, 850 Peter Morand Crescent, Ottawa, ON, K1G 5Z3, Canada
- Department of Medicine, The Ottawa Hospital, Ottawa, Canada
- Samantha Halman
- Faculty of Medicine, University of Ottawa, 850 Peter Morand Crescent, Ottawa, ON, K1G 5Z3, Canada
- Department of Medicine, The Ottawa Hospital, Ottawa, Canada
- Susan Humphrey-Murto
- Faculty of Medicine, University of Ottawa, 850 Peter Morand Crescent, Ottawa, ON, K1G 5Z3, Canada
- Department of Medicine, The Ottawa Hospital, Ottawa, Canada
25
Adelman MH, Deshwal H, Pradhan D. Critical Care Ultrasound Competency of Fellows and Faculty in Pulmonary and Critical Care Medicine: A Nationwide Survey. POCUS Journal 2023; 8:202-211. [PMID: 38099164] [PMCID: PMC10721306] [DOI: 10.24908/pocus.v8i2.16640]
Abstract
Purpose: Competency assessment standards for critical care ultrasonography (CCUS) for Graduate Medical Education (GME) trainees in pulmonary/critical care medicine (PCCM) fellowship programs are lacking. We sought to answer the following research questions: How are PCCM fellows and teaching faculty assessed for CCUS competency? Which CCUS teaching methods are perceived as most effective by program directors (PDs) and fellows? Methods: Cross-sectional, nationwide, electronic survey of PCCM PDs and fellows in accredited GME training programs. Results: PDs and fellows both reported the highest rates of fellow competence in using CCUS for invasive procedural guidance, but lower rates for assessment of deep vein thrombosis and of the abdominal organs. 54% of PDs reported never assessing fellows for CCUS competency, and 90% reported never assessing teaching faculty. PDs and fellows perceived hands-on workshops and directly supervised CCUS exams as more effective learning methods than unsupervised CCUS archival with subsequent review and self-directed learning. Conclusions: There is substantial variation in CCUS competency assessment among PCCM fellows and teaching faculty nationwide. The majority of training programs do not formally assess fellows or teaching faculty for CCUS competence. Guidelines are needed to formulate standardized competency assessment tools for PCCM fellowship programs.
Affiliation(s)
- Mark H Adelman
- Division of Pulmonary, Critical Care & Sleep Medicine, New York University Grossman School of Medicine, New York, NY, USA
- Himanshu Deshwal
- Division of Pulmonary, Critical Care, and Sleep Medicine, West Virginia University Health Sciences Center, Morgantown, WV, USA
- Deepak Pradhan
- Division of Pulmonary, Critical Care, and Sleep Medicine, West Virginia University Health Sciences Center, Morgantown, WV, USA
26
Malau-Aduli BS, Hays RB, D'Souza K, Saad SL, Rienits H, Celenza A, Murphy R. Twelve tips for improving the quality of assessor judgements in senior medical student clinical assessments. Medical Teacher 2023; 45:1228-1232. [PMID: 37232165] [DOI: 10.1080/0142159x.2023.2216364]
Abstract
Assessment of senior medical students is usually calibrated at the level of achieving expected learning outcomes for graduation. Recent research reveals that clinical assessors often balance two slightly different perspectives on this benchmark. The first is the formal learning outcomes at graduation, ideally as part of a systematic, program-wide assessment approach that measures learning achievement, while the second is consideration of the candidate's contribution to safe care and readiness for practice as a junior doctor. The second is more intuitive to the workplace, based on experience working with junior doctors. This perspective may enhance authenticity in assessment decisions made in OSCEs and work-based assessments to better align judgements and feedback with professional expectations that will guide senior medical students and junior doctors' future career development. Modern assessment practices should include consideration of qualitative as well as quantitative information, overtly including perspectives of patients, employers, and regulators. This article presents 12 tips for how medical education faculty might support clinical assessors by capturing workplace expectations of first year medical graduates and develop graduate assessments based on a shared heuristic of 'work-readiness'. Peer-to-peer assessor interaction should be facilitated to achieve correct calibration that 'merges' the differing perspectives to produce a shared construct of an acceptable candidate.
Affiliation(s)
- Bunmi S Malau-Aduli
- School of Medicine and Public Health, University of Newcastle, Newcastle, Australia
- College of Medicine and Dentistry, James Cook University, Townsville, Australia
- Richard B Hays
- College of Medicine and Dentistry, James Cook University, Townsville, Australia
- Karen D'Souza
- School of Medicine, Deakin University, Geelong, Australia
- Helen Rienits
- Graduate School of Medicine, University of Wollongong, Wollongong, Australia
- Antonio Celenza
- School of Medicine, University of Western Australia, Perth, Australia
- Rinki Murphy
- Medical Program, University of Auckland, Auckland, New Zealand
27
Maxson IN, Su E, Brown KA, Tcharmtchi MH, Ginsburg S, Bhargava V, Wenger J, Centers GI, Alade KH, Leung SK, Gowda SH, Flores S, Riley A, Thammasitboon S. A Program of Assessment Model for Point-of-Care Ultrasound Training for Pediatric Critical Care Providers: A Comprehensive Approach to Enhance Competency-Based Point-of-Care Ultrasound Training. Pediatr Crit Care Med 2023; 24:e511-e519. [PMID: 37260313] [DOI: 10.1097/pcc.0000000000003288]
Abstract
Point-of-care ultrasound (POCUS) is increasingly accepted in pediatric critical care medicine as a tool for guiding the evaluation and treatment of patients. POCUS is a complex skill that requires user competency to ensure accuracy, reliability, and patient safety. A robust competency-based medical education (CBME) program ensures user competency and mitigates patient safety concerns. A programmatic assessment model provides a longitudinal, holistic, and multimodal approach to teaching, assessing, and evaluating learners. The authors propose a fit-for-purpose and modifiable CBME model that is adaptable for different institutions' resources and needs for any intended competency level. This educational model drives and supports learning, ensures competency attainment, and creates a clear pathway for POCUS education while enhancing patient care and safety.
Affiliation(s)
- Ivanna Natasha Maxson
- Department of Pediatrics, Division of Critical Care Medicine, Baylor College of Medicine, Texas Children's Hospital, Houston, TX
- Erik Su
- Department of Pediatrics, Division of Critical Care Medicine, Baylor College of Medicine, Texas Children's Hospital, Houston, TX
- Kyle A Brown
- Department of Pediatrics, Texas Christian University School of Medicine, Cook Children's Medical Center, Fort Worth, TX
- M Hossein Tcharmtchi
- Department of Pediatrics, Division of Critical Care Medicine, Baylor College of Medicine, Texas Children's Hospital, Houston, TX
- Sarah Ginsburg
- Department of Pediatrics, Division of Critical Care Medicine, UT Southwestern Medical Center, Dallas, TX
- Vidit Bhargava
- Department of Pediatrics, Division of Critical Care Medicine, University of Alabama, Children's Hospital of Alabama, Birmingham, AL
- Jesse Wenger
- Department of Pediatrics, Division of Critical Care Medicine, University of Washington, Seattle Children's Hospital, Seattle, WA
- Gabriela I Centers
- Department of Pediatrics, Division of Critical Care Medicine, Indiana University, Riley Children's Hospital, Indianapolis, IN
- Kiyetta H Alade
- Department of Pediatrics, Division of Emergency Medicine, Baylor College of Medicine, Texas Children's Hospital, Houston, TX
- Stephanie K Leung
- Department of Pediatrics, Division of Emergency Medicine, Baylor College of Medicine, Texas Children's Hospital, Houston, TX
- Sharada H Gowda
- Department of Pediatrics, Division of Neonatology, Baylor College of Medicine, Texas Children's Hospital, Houston, TX
- Saul Flores
- Department of Pediatrics, Division of Critical Care Medicine and Cardiology, Baylor College of Medicine, Texas Children's Hospital, Houston, TX
- Alan Riley
- Department of Pediatrics, Division of Cardiology, Baylor College of Medicine, Texas Children's Hospital, Houston, TX
- Satid Thammasitboon
- Department of Pediatrics, Division of Critical Care Medicine, Baylor College of Medicine, Texas Children's Hospital, Houston, TX
- Department of Pediatrics, Center for Research, Innovation, and Scholarship in Medical Education, Baylor College of Medicine, Texas Children's Hospital, Houston, TX
28
Norcini J. On Purpose: The Case for Alignment in Assessment. Academic Medicine 2023; 98:1240-1242. [PMID: 37556812] [DOI: 10.1097/acm.0000000000005430]
Abstract
In this issue, Ryan and colleagues underscore the need for criterion-based assessments in the context of competency-based curricula in undergraduate medical education (UME). They also point out that the same scores are often interpreted from a norm-referenced perspective to support the admissions process for residency training. This problem is not unique to UME because in graduate medical education (GME), the same assessments are often used for both decision making and providing feedback. Unfortunately, an assessment with 2 purposes is neither optimal nor efficient for either purpose and may be accompanied by significant side effects. One approach to addressing these challenges is to develop a system of assessment that addresses both purposes but where each component is focused on a single purpose. This leads to alignment and transparency from purpose to test content and from test content to score interpretation and/or feedback. It ensures that the test material is optimized for the task, that individual assessments are constructed to enhance the validity of their scores, and that undesirable side effects are limited.
Affiliation(s)
- John Norcini
- J. Norcini is research professor, Department of Psychiatry, SUNY Upstate Medical University, Syracuse, New York; ORCID: https://orcid.org/0000-0002-8464-4115
29
Liao KC, Ajjawi R, Peng CH, Jenq CC, Monrouxe LV. Striving to thrive or striving to survive: Professional identity constructions of medical trainees in clinical assessment activities. Medical Education 2023; 57:1102-1116. [PMID: 37394612] [DOI: 10.1111/medu.15152]
Abstract
CONTEXT Assessment plays a key role in competence development and in shaping future professionals. Despite its presumed positive impacts on learning, the unintended consequences of assessment have drawn increasing attention in the literature. Given that professional identities can be constructed dynamically through social interactions, as in assessment contexts, our study sought to understand how assessment influences the construction of professional identities in medical trainees. METHODS Working within social constructionism, we adopted a discursive, narrative approach to investigate the different positions trainees narrate for themselves and their assessors in clinical assessment contexts and the impact of these positions on their constructed identities. We purposively recruited 28 medical trainees (23 students and five postgraduate trainees), who took part in entry, follow-up, and exit interviews and submitted longitudinal audio/written diaries across nine months of their training programs. Thematic framework and positioning analyses (focusing on how characters are linguistically positioned in narratives) were applied using an interdisciplinary teamwork approach. RESULTS We identified two key narrative plotlines, striving to thrive and striving to survive, across trainees' assessment narratives from 60 interviews and 133 diaries. Elements of growth, development, and improvement were identified as trainees narrated striving to thrive in assessment. Narratives of neglect, oppression, and perfunctory engagement were elaborated as trainees narrated striving to survive assessment. Nine main character tropes adopted by trainees, along with six key assessor character tropes, were identified. Bringing these together, we present our analysis of two exemplary narratives with elaboration of their wider social implications. CONCLUSION Adopting a discursive approach enabled us to better understand not only what identities are constructed by trainees in assessment contexts but also how they are constructed in relation to broader medical education discourses. The findings can inform educators seeking to reflect on, rectify, and reconstruct assessment practices to better facilitate trainee identity construction.
Affiliation(s)
- Kuo-Chen Liao
- Division of Geriatrics and General Internal Medicine, Department of Internal Medicine, Chang Gung Memorial Hospital (CGMH), Linkou, Taiwan (ROC)
- Chang Gung Medical Education Research Centre, CGMH, Linkou, Taiwan (ROC)
- School of Medicine, College of Medicine, Chang Gung University, Taoyuan City, Taiwan (ROC)
- Rola Ajjawi
- Centre for Research in Assessment and Digital Learning, Deakin University, Melbourne, Victoria, Australia
- Chang-Hsuan Peng
- Chang Gung Medical Education Research Centre, CGMH, Linkou, Taiwan (ROC)
- School of Medicine, College of Medicine, Chang Gung University, Taoyuan City, Taiwan (ROC)
- Chang-Chyi Jenq
- Chang Gung Medical Education Research Centre, CGMH, Linkou, Taiwan (ROC)
- Department of Nephrology, CGMH, Linkou, Taiwan (ROC)
- Medical Humanities Center, CGMH, Linkou, Taiwan (ROC)
- Department of Medical Humanities and Social Sciences, School of Medicine, College of Medicine, Chang Gung University, Taoyuan City, Taiwan (ROC)
- Lynn V Monrouxe
- Faculty of Medicine and Health, The University of Sydney, Sydney, New South Wales, Australia
30
Burr SA, Gale T, Kisielewska J, Millin P, Pêgo JM, Pinter G, Robinson IM, Zahra D. A narrative review of adaptive testing and its application to medical education. MedEdPublish 2023; 13:221. [PMID: 38028657] [PMCID: PMC10680016] [DOI: 10.12688/mep.19844.1]
Abstract
Adaptive testing has a long but largely unrecognized history. The advent of computer-based testing has created new opportunities to incorporate adaptive testing into conventional programmes of study. Relatively recently, software has been developed that can automate the delivery of summative assessments that adapt by difficulty or by content. Both types of adaptive testing require a large item bank that has been suitably quality assured. Adaptive testing by difficulty enables more reliable evaluation of individual candidate performance, although at the expense of transparency in decision making and of requiring unidirectional navigation. Adaptive testing by content enables reduction in compensation and targeted individual support, providing assurance of performance across all required outcomes, although at the expense of discovery learning. With both types of adaptive testing, candidates are presented with a different set of items from each other, and there is the potential for that to be perceived as unfair. However, when candidates of different abilities receive the same items, they may receive too many items they can answer with ease, or too many that are too difficult to answer. Both situations may be considered unfair, as neither provides the opportunity for candidates to demonstrate what they know; adapting by difficulty addresses this. Similarly, when everyone is presented with the same items but answers different items incorrectly, withholding individualized support, and the opportunity to demonstrate performance in all required outcomes by revisiting content previously answered incorrectly, could also be considered unfair; adapting by content addresses this. We review the educational rationale behind the evolution of adaptive testing and consider its inherent strengths and limitations. We explore the continuous pursuit of improvement of examination methodology and how software can facilitate personalized assessment, and we highlight how this can serve as a catalyst for learning and the refinement of curricula, fostering engagement of learner and educator alike.
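The adapt-by-difficulty mechanism described above can be made concrete with a minimal sketch (illustrative only, not taken from the review; the item names, the fixed step size, and the closest-difficulty selection rule are assumptions for the example — production systems typically use item response theory to estimate candidate ability instead of a simple staircase):

```python
# A toy staircase rule for adaptive testing by difficulty:
# a correct answer moves the difficulty target up, an incorrect
# answer moves it down, and the nearest unseen item is served next.

def next_item(bank, current_difficulty, answered_correctly, used):
    """Pick the unseen item whose difficulty is closest to the new target.

    bank: list of (item_id, difficulty) tuples; used: set of seen item_ids.
    Returns (item, new_target); item is None when the bank is exhausted.
    """
    step = 0.5  # assumed fixed step; IRT-based engines adapt this
    target = current_difficulty + (step if answered_correctly else -step)
    candidates = [item for item in bank if item[0] not in used]
    if not candidates:
        return None, target
    chosen = min(candidates, key=lambda item: abs(item[1] - target))
    return chosen, target

# Hypothetical five-item bank with difficulties on an arbitrary scale.
bank = [("q1", -1.0), ("q2", -0.5), ("q3", 0.0), ("q4", 0.5), ("q5", 1.0)]
used = set()
difficulty = 0.0
for correct in [True, True, False]:  # simulated candidate responses
    item, difficulty = next_item(bank, difficulty, correct, used)
    if item is None:
        break
    used.add(item[0])
```

Because each candidate's response pattern steers the item selection, two candidates sitting the "same" test see different item sets — which is exactly the fairness trade-off the review discusses.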
Affiliation(s)
- Thomas Gale
- University of Plymouth, Plymouth, England, UK
- Paul Millin
- University of Plymouth, Plymouth, England, UK
31
Ahmed S, Shersad F, Ziganshina A, Shadan M, Elmardi A, El Tayeb Y. Operationalizing competency-based assessment: Contextualizing for cultural and gender divides. MedEdPublish 2023; 13:210. [PMID: 37842229] [PMCID: PMC10576182] [DOI: 10.12688/mep.19728.1]
Abstract
Following current trends, educational institutions often decide to use a competency framework as an overarching structure for their assessment systems. Despite a common understanding of how different examinations can contribute to the decision on attaining a particular competency, detailed mapping of the data points remains a challenging area to be explored. Faced with the newly emerged task of assessing the attainment of UAE medical students against the EmiratesMEDs competency framework, Dubai Medical College for Girls (DMCG) attempted to operationalize the designed concept in its assessment system while considering the cultural and gender divide. We believe that health professionals who attempt to implement contextualized competency-based assessment could benefit from being acquainted with our experience. The article offers a step-by-step guide to operationalizing contextualized competency assessment, describing building the team, working with consultants and faculty development, estimating institutional assessment capacity, mapping, and operationalizing the maps using both human resources and software. We also offer readers a list of enabling factors and introduce the scope of limitations in the process of developing a competency-based assessment system. We believe that following the present guide can allow educators to operationalize competency-based assessment in any context with respect for local culture and traditions.
Affiliation(s)
- Samar Ahmed
- Faculty of Medicine, Forensic Medicine and Toxicology department, Ain Shams University, Cairo, Cairo Governorate, 11488, Egypt
- Associate Dean Academic Affairs, Dubai Medical College for Girls, Dubai, United Arab Emirates
- Fouzia Shersad
- Medical Education, Dubai Medical College for Girls, Dubai, United Arab Emirates
- Arina Ziganshina
- Clinical Department, Dubai Medical College for Girls, Dubai, United Arab Emirates
- Mariam Shadan
- Biomedical Department, Dubai Medical College for Girls, Dubai, United Arab Emirates
- Abdelmoneim Elmardi
- Biomedical Department, Dubai Medical College for Girls, Dubai, United Arab Emirates
- Yousif El Tayeb
- Clinical Department, Dubai Medical College for Girls, Dubai, United Arab Emirates
32
|
Mink RB, Carraccio CL, Herman BE, Weiss P, Turner DA, Stafford DEJ, McGann KA, Kesselheim J, Hsu DC, High PC, Fussell JJ, Curran ML, Chess PR, Sauer C, Pitts S, Myers AL, Mahan JD, Dammann CEL, Aye T, Schwartz A. Relationship between EPA level of supervision with their associated subcompetency milestone levels in pediatric fellow assessment. BMC MEDICAL EDUCATION 2023; 23:720. [PMID: 37789289 PMCID: PMC10548580 DOI: 10.1186/s12909-023-04689-0] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/15/2023] [Accepted: 09/15/2023] [Indexed: 10/05/2023]
Abstract
BACKGROUND Entrustable Professional Activities (EPAs) and competencies represent components of a competency-based education framework. EPAs are assessed based on the level of supervision (LOS) necessary to perform the activity safely and effectively. The broad competencies, broken down into narrower subcompetencies, are assessed using milestones, observable behaviors of one's abilities along a developmental spectrum. Integrating the two methods, by mapping the most relevant subcompetencies to each EPA, may provide a cross-check between the two forms of assessment and uncover the subcompetencies that have the greatest influence on the EPA assessment. OBJECTIVES We hypothesized that 1) there would be a strong correlation between EPA LOS ratings and the milestone levels for the subcompetencies mapped to the EPA; 2) some subcompetencies would be more critical in determining entrustment decisions than others; and 3) the correlation would be weaker if the analysis included only milestones reported to the Accreditation Council for Graduate Medical Education (ACGME). METHODS In fall 2014 and spring 2015, the Subspecialty Pediatrics Investigator Network asked Clinical Competency Committees to assign milestone levels to each trainee enrolled in a pediatric fellowship for all subcompetencies mapped to 6 Common Pediatric Subspecialty EPAs, as well as to provide a rating for each EPA on a 5-point LOS scale. RESULTS One thousand forty fellows were assessed in fall and 1,048 in spring, representing about 27% of all fellows. For each EPA and in both periods, the average milestone level was highly correlated with LOS (rho range 0.59-0.74; p < 0.001). Correlations were similar when using a weighted versus unweighted milestone score or using only the ACGME-reported milestones (p > 0.05).
CONCLUSIONS We found a strong relationship between milestone level and EPA LOS rating but no difference if the subcompetencies were weighted, or if only milestones reported to the ACGME were used. Our results suggest that representative behaviors needed to effectively perform the EPA, such as key subcompetencies and milestones, allow for future language adaptations while still supporting the current model of assessment. In addition, these data provide additional validity evidence for using these complementary tools in building a program of assessment.
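The study's core analysis relates each fellow's 5-point LOS rating to the average milestone level of the subcompetencies mapped to the EPA, using Spearman's rank correlation. A minimal sketch of that computation follows; the data values and variable names are invented for illustration and are not taken from the study:

```python
# Sketch of the correlation described above: mean mapped-subcompetency
# milestone level vs. EPA level-of-supervision (LOS) rating, compared with
# Spearman's rho. All data values here are hypothetical.

def average_ranks(values):
    """Rank values from 1..n, assigning tied values their average rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(values):
        j = i
        while j + 1 < len(values) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg_rank = (i + j) / 2 + 1  # ranks are 1-based
        for k in range(i, j + 1):
            ranks[order[k]] = avg_rank
        i = j + 1
    return ranks

def spearman_rho(x, y):
    """Spearman correlation = Pearson correlation of the rank vectors."""
    rx, ry = average_ranks(x), average_ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# One (mean milestone level, LOS rating) pair per fellow -- invented data.
milestone_means = [2.0, 2.5, 3.0, 3.5, 3.5, 4.0, 4.5, 5.0]
los_ratings     = [1,   2,   2,   3,   4,   4,   5,   5]

rho = spearman_rho(milestone_means, los_ratings)
print(f"Spearman rho = {rho:.2f}")
```

For toy data like this, rho comes out strongly positive, mirroring the direction (though not the values) of the reported rho range of 0.59-0.74.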
Affiliation(s)
- Richard B Mink
- Department of Pediatrics, David Geffen School of Medicine at UCLA and the Lundquist Institute for Biomedical Innovation at Harbor-UCLA Medical Center, 1124 West Carson Street, Torrance, CA, 90502, USA.
- Bruce E Herman
- University of Utah School of Medicine, Salt Lake City, UT, USA
- Pnina Weiss
- Department of Pediatrics, Yale School of Medicine, New Haven, CT, USA
- Diane E J Stafford
- Division of Endocrinology, Department of Pediatrics, Stanford University School of Medicine, Palo Alto, CA, USA
- Kathleen A McGann
- Department of Pediatrics, Duke University Medical Center, Durham, NC, USA
- Jennifer Kesselheim
- Dana-Farber/Boston Children's Cancer and Blood Disorders Center, Boston, MA, USA
- Pamela C High
- Alpert Medical School of Brown University, Providence, RI, USA
- Developmental-Behavioral Pediatrics, Hasbro Children's Hospital, Providence, RI, USA
- Jill J Fussell
- University of Arkansas for Medical Sciences and Arkansas Children's Hospital, Little Rock, AR, USA
- Megan L Curran
- Department of Pediatrics, University of Colorado School of Medicine, Aurora, CO, USA
- Cary Sauer
- Department of Pediatrics, Emory University School of Medicine and Children's Healthcare of Atlanta, Atlanta, GA, USA
- Sarah Pitts
- Division of Adolescent/Young Adult Medicine, Boston Children's Hospital, Boston, MA, USA
- Angela L Myers
- Center for Wellbeing, Children's Mercy Hospital and University of Missouri-Kansas City School of Medicine, Kansas City, MO, USA
- John D Mahan
- Department of Pediatrics, Nationwide Children's Hospital and The Ohio State University College of Medicine, Columbus, OH, USA
- Tandy Aye
- Division of Endocrinology, Department of Pediatrics, Stanford University School of Medicine, Palo Alto, CA, USA
- Alan Schwartz
- University of Illinois College of Medicine at Chicago, Chicago, IL, USA
33
|
van der Gulden R, Veen M, Thoonen BPA. A Philosophical Discussion of the Support of Self-Regulated Learning in Medical Education: The Treasure Hunt Approach Versus the (Dutch) "Dropping" Approach. TEACHING AND LEARNING IN MEDICINE 2023; 35:623-629. [PMID: 36939190 DOI: 10.1080/10401334.2023.2187810] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/01/2022] [Revised: 01/24/2023] [Accepted: 02/13/2023] [Indexed: 06/18/2023]
Abstract
Issue: Many current educational approaches are intended to cultivate learners' full (learning) potential by fostering self-regulated learning (SRL), as learners with a high degree of SRL are expected to learn more effectively than those with a low degree of SRL. However, these attempts to foster SRL are not always successful. Evidence: We considered complexities related to fostering self-regulated learning using an analogy based on two (Dutch) children's games: the treasure hunt (children find a "treasure" by following directions, completing assignments and/or answering questions) and the dropping (pre-teens are dropped in the woods at nighttime with the assignment to find their way back home). We formulated four interrelated philosophical questions, intended not to provide clear-cut answers but to evoke contemplation about the SRL concept. During this contemplation, the implications of definitional issues regarding SRL were discussed through the first question: What are the consequences of the difficulties of explicating what is (not) SRL? The second question (How does SRL relate to autonomy?) touched upon the intricate relationship between SRL and autonomy by discussing the role of social interaction and varying degrees of instruction when fostering SRL. A related topic was addressed by the third question: How much risk are we willing and able to take when fostering SRL? Finally, the importance of, and possibilities for, assessing SRL were discussed through the fourth question (Should SRL be assessed?). Implications: From our contemplations it has become clear that approaches to foster SRL are often insufficiently aligned with the experience and needs of learners. Instead, these approaches are commonly defined by contextual factors, such as misconceptions about SRL and a lack of leeway for learners. Consequently, we have used principles that apply to both treasure hunts and droppings to provide guidelines on aligning one's approach to fostering SRL with the educational context and the experience and needs of learners.
Affiliation(s)
- Rozemarijn van der Gulden
- Department of Primary and Community Care, Radboud university medical center, Nijmegen, The Netherlands
- Mario Veen
- Department of General Practice, Erasmus University Medical Center, Rotterdam, The Netherlands
- Bart P A Thoonen
- Department of Primary and Community Care, Radboud university medical center, Nijmegen, The Netherlands
34
|
Vennemeyer S, Kinnear B, Gao A, Zhu S, Nattam A, Knopp MI, Warm E, Wu DT. User-Centered Evaluation and Design Recommendations for an Internal Medicine Resident Competency Assessment Dashboard. Appl Clin Inform 2023; 14:996-1007. [PMID: 38122817 PMCID: PMC10733060 DOI: 10.1055/s-0043-1777103] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/03/2023] [Accepted: 10/25/2023] [Indexed: 12/23/2023] Open
Abstract
OBJECTIVES Clinical Competency Committee (CCC) members employ varied approaches to the review process, which makes it difficult to design a competency assessment dashboard that fits the needs of all members. This work details a user-centered evaluation of a dashboard currently utilized by the Internal Medicine Clinical Competency Committee (IM CCC) at the University of Cincinnati College of Medicine and generates design recommendations. METHODS Eleven members of the IM CCC participated in semistructured interviews with the research team. These interviews were recorded and transcribed for analysis. The three design research methods used in this study were process mapping (workflow diagrams), affinity diagramming, and a ranking experiment. RESULTS Through affinity diagramming, the research team identified and organized opportunities for improvement in the current system expressed by study participants. These areas include a time-consuming preprocessing step, lack of integration of data from multiple sources, and different workflows for each step in the review process. Finally, the research team categorized nine dashboard components based on rankings provided by the participants. CONCLUSION We successfully conducted a user-centered evaluation of an IM CCC dashboard and generated four recommendations: programs should integrate quantitative and qualitative feedback, create multiple views to display these data based on user roles, work with designers to create a usable, interpretable dashboard, and develop a strong informatics pipeline to manage the system. To our knowledge, this type of user-centered evaluation has rarely been attempted in the medical education domain; this study therefore provides best practices for other residency programs to evaluate current competency assessment tools and to develop new ones.
Affiliation(s)
- Scott Vennemeyer
- Department of Biomedical Informatics, College of Medicine, University of Cincinnati, Ohio, United States
- Benjamin Kinnear
- Department of Pediatrics, College of Medicine, University of Cincinnati, Ohio, United States
- Department of Internal Medicine, College of Medicine, University of Cincinnati, Ohio, United States
- Andy Gao
- Department of Biomedical Informatics, College of Medicine, University of Cincinnati, Ohio, United States
- Medical Sciences Baccalaureate Program, College of Medicine, University of Cincinnati, Ohio, United States
- Siyi Zhu
- Department of Biomedical Informatics, College of Medicine, University of Cincinnati, Ohio, United States
- School of Design, College of Design, Architecture, Art, and Planning (DAAP), University of Cincinnati, Ohio, United States
- Anunita Nattam
- Department of Biomedical Informatics, College of Medicine, University of Cincinnati, Ohio, United States
- Medical Sciences Baccalaureate Program, College of Medicine, University of Cincinnati, Ohio, United States
- Michelle I. Knopp
- Department of Internal Medicine, College of Medicine, University of Cincinnati, Ohio, United States
- Division of Hospital Medicine, Cincinnati Children's Hospital Medical Center, Department of Pediatrics, College of Medicine, University of Cincinnati, Ohio, United States
- Eric Warm
- Department of Internal Medicine, College of Medicine, University of Cincinnati, Ohio, United States
- Danny T.Y. Wu
- Department of Biomedical Informatics, College of Medicine, University of Cincinnati, Ohio, United States
- Department of Pediatrics, College of Medicine, University of Cincinnati, Ohio, United States
- Medical Sciences Baccalaureate Program, College of Medicine, University of Cincinnati, Ohio, United States
- School of Design, College of Design, Architecture, Art, and Planning (DAAP), University of Cincinnati, Ohio, United States
35
|
Zafar I, Chilton J, Edwards J, Watson H, Zahra D. Exploring basic science knowledge retention within a cohort of undergraduate medical students in the United Kingdom: A longitudinal study. CLINICAL TEACHER 2023; 20:e13633. [PMID: 37646408 DOI: 10.1111/tct.13633] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/29/2023] [Accepted: 08/01/2023] [Indexed: 09/01/2023]
Abstract
BACKGROUND Clinical reasoning relies on students having acquired a strong foundation in the basic sciences. However, there remains uncertainty about whether medical students maintain this knowledge over the span of their degrees. This project therefore aimed to assess long-term retention of basic science knowledge within a cohort of students from an undergraduate medical school in the United Kingdom (UK). METHODS This longitudinal study followed a cohort of students from their first to final year. In their final year, participants sat a bespoke formative basic science knowledge assessment comprising 46 single-best-answer questions. To examine for long-term attainment differences, these scores were compared with those achieved in first-year assessments. RESULTS Of the eligible students, 40% took part in the study (n = 22). Comparison of assessment scores revealed an improvement in overall basic science knowledge between first and final year (p < 0.01). Although most basic science domains remained unchanged between the two time points, anatomy and physiology scores increased (p = 0.03 and p = 0.02, respectively), whereas biochemistry scores were the only ones to decrease (p = 0.02). DISCUSSION This project provides insight into how well students retain the basic sciences during their studies. Underperforming science domains were identified, alongside pedagogical explanations for their individual shortcomings; for instance, students' perceived relevance of a domain is seen as a driver for its retention. A set of recommendations was derived to reinforce the most affected domains; one such suggestion is the inclusion of more questions on the underperforming sciences in clinically focussed assessments.
36
|
Staudenmann D, Waldner N, Lörwald A, Huwendiek S. Medical specialty certification exams studied according to the Ottawa Quality Criteria: a systematic review. BMC MEDICAL EDUCATION 2023; 23:619. [PMID: 37649019 PMCID: PMC10466740 DOI: 10.1186/s12909-023-04600-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/28/2023] [Accepted: 08/18/2023] [Indexed: 09/01/2023]
Abstract
BACKGROUND Medical specialty certification exams are high-stakes summative assessments used to determine which doctors have the necessary skills, knowledge, and attitudes to treat patients independently. Such exams are crucial for patient safety, candidates' career progression and accountability to the public, yet vary significantly among medical specialties and countries. It is therefore of paramount importance that the quality of specialty certification exams is studied in the scientific literature. METHODS In this systematic literature review we used the PICOS framework and searched for papers concerning medical specialty certification exams published in English between 2000 and 2020 in seven databases using a diverse set of search term variations. Papers were screened by two researchers independently and scored regarding their methodological quality and relevance to this review. Finally, they were categorized by country, medical specialty and the following seven Ottawa Criteria of good assessment: validity, reliability, equivalence, feasibility, acceptability, catalytic and educational effect. RESULTS After removal of duplicates, 2852 papers were screened for inclusion, of which 66 met all relevant criteria. Over 43 different exams and more than 28 different specialties from 18 jurisdictions were studied. Around 77% of all eligible papers were based in English-speaking countries, with 55% of publications centered on just the UK and USA. General Practice was the most frequently studied specialty among certification exams with the UK General Practice exam having been particularly broadly analyzed. Papers received an average of 4.2/6 points on the quality score. Eligible studies analyzed 2.1/7 Ottawa Criteria on average, with the most frequently studied criteria being reliability, validity, and acceptability. 
CONCLUSIONS The present systematic review shows a growing number of studies analyzing medical specialty certification exams over time, encompassing a wider range of medical specialties, countries, and Ottawa Criteria. Due to their reliance on multiple assessment methods and data-points, aspects of programmatic assessment suggest a promising way forward in the development of medical specialty certification exams which fulfill all seven Ottawa Criteria. Further research is needed to confirm these results, particularly analyses of examinations held outside the Anglosphere as well as studies analyzing entire certification exams or comparing multiple examination methods.
Affiliation(s)
- Noemi Waldner
- University of Bern, Institute for Medical Education, Bern, Switzerland
- Andrea Lörwald
- University of Bern, Institute for Medical Education, Bern, Switzerland
- Sören Huwendiek
- University of Bern, Institute for Medical Education, Bern, Switzerland
37
|
Holmboe ES, Osman NY, Murphy CM, Kogan JR. The Urgency of Now: Rethinking and Improving Assessment Practices in Medical Education Programs. ACADEMIC MEDICINE : JOURNAL OF THE ASSOCIATION OF AMERICAN MEDICAL COLLEGES 2023; 98:S37-S49. [PMID: 37071705 DOI: 10.1097/acm.0000000000005251] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/19/2023]
Abstract
Assessment is essential to professional development. Assessment provides the information needed to give feedback, support coaching and the creation of individualized learning plans, inform progress decisions, determine appropriate supervision levels, and, most importantly, help ensure patients and families receive high-quality, safe care in the training environment. While the introduction of competency-based medical education has catalyzed advances in assessment, much work remains to be done. First, becoming a physician (or other health professional) is primarily a developmental process, and assessment programs must be designed using a developmental and growth mindset. Second, medical education programs must have integrated programs of assessment that address the interconnected domains of implicit, explicit and structural bias. Third, improving programs of assessment will require a systems-thinking approach. In this paper, the authors first address these overarching issues as key principles that must be embraced so that training programs may optimize assessment to ensure all learners achieve desired medical education outcomes. The authors then explore specific needs in assessment and provide suggestions to improve assessment practices. This paper is by no means inclusive of all medical education assessment challenges or possible solutions. However, there is a wealth of current assessment research and practice that medical education programs can use to improve educational outcomes and help reduce the harmful effects of bias. The authors' goal is to help improve and guide innovation in assessment by catalyzing further conversations.
Affiliation(s)
- Eric S Holmboe
- E.S. Holmboe is chief, Research, Milestones Development and Evaluation, Accreditation Council for Graduate Medical Education, Chicago, Illinois; ORCID: https://orcid.org/0000-0003-0108-6021
- Nora Y Osman
- N.Y. Osman is associate professor of medicine, Harvard Medical School, and director of undergraduate medical education, Brigham and Women's Hospital Department of Medicine, Boston, Massachusetts; ORCID: https://orcid.org/0000-0003-3542-1262
- Christina M Murphy
- C.M. Murphy is a fourth-year medical student and president, Medical Student Government at Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania; ORCID: https://orcid.org/0000-0003-3966-5264
- Jennifer R Kogan
- J.R. Kogan is associate dean, Student Success and Professional Development, and professor of medicine, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania; ORCID: https://orcid.org/0000-0001-8426-9506
38
|
Hauer KE, Park YS, Bullock JL, Tekian A. "My Assessments Are Biased!" Measurement and Sociocultural Approaches to Achieve Fairness in Assessment in Medical Education. ACADEMIC MEDICINE : JOURNAL OF THE ASSOCIATION OF AMERICAN MEDICAL COLLEGES 2023; 98:S16-S27. [PMID: 37094278 DOI: 10.1097/acm.0000000000005245] [Citation(s) in RCA: 9] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/03/2023]
Abstract
Assessing learners is foundational to their training and developmental growth throughout the medical education continuum. However, growing evidence shows the prevalence and impact of harmful bias in assessments in medical education, accelerating the urgency to identify solutions. Assessment bias presents a critical problem for all stages of learning and the broader educational system. Bias poses significant challenges to learners, disrupts the learning environment, and threatens the pathway and transition of learners into health professionals. While the topic of assessment bias has been examined within the context of measurement literature, limited guidance and solutions exist for learners in medical education, particularly in the clinical environment. This article presents an overview of assessment bias, focusing on clinical learners. A definition of bias and its manifestations in assessments are presented. Consequences of assessment bias are discussed within the contexts of validity and fairness and their impact on learners, patients/caregivers, and the broader field of medicine. Messick's unified validity framework is used to contextualize assessment bias; in addition, perspectives from sociocultural contexts are incorporated into the discussion to elaborate the nuanced implications in the clinical training environment. Discussions of these topics are conceptualized within the literature and the interventions used to date. The article concludes with practical recommendations to overcome bias and to develop an ideal assessment system. Recommendations address articulating values to guide assessment, designing assessment to foster learning and outcomes, attending to assessment procedures, promoting continuous quality improvement of assessment, and fostering equitable learning and assessment environments.
Affiliation(s)
- Karen E Hauer
- K.E. Hauer is associate dean for competency assessment and professional standards, and professor, Department of Medicine, University of California, San Francisco School of Medicine, San Francisco, California; ORCID: http://orcid.org/0000-0002-8812-4045
- Yoon Soo Park
- Y.S. Park is associate professor and associate head, Department of Medical Education, University of Illinois at Chicago College of Medicine, Chicago, Illinois; ORCID: http://orcid.org/0000-0001-8583-4335
- Justin L Bullock
- J.L. Bullock is a fellow, Department of Medicine, Division of Nephrology, University of Washington School of Medicine, Seattle, Washington; ORCID: http://orcid.org/0000-0003-4240-9798
- Ara Tekian
- A. Tekian is professor and associate dean for international education, Department of Medical Education, University of Illinois at Chicago College of Medicine, Chicago, Illinois; ORCID: http://orcid.org/0000-0002-9252-1588
39
|
Onumah CM, Pincavage AT, Lai CJ, Levine DL, Ismail NJ, Alexandraki I, Osman NY. Strategies for Advancing Equity in Frontline Clinical Assessment. ACADEMIC MEDICINE : JOURNAL OF THE ASSOCIATION OF AMERICAN MEDICAL COLLEGES 2023; 98:S57-S63. [PMID: 37071692 DOI: 10.1097/acm.0000000000005246] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/19/2023]
Abstract
Educational equity in medicine cannot be achieved without addressing assessment bias. Assessment bias in health professions education is prevalent and has extensive implications for learners and, ultimately, the health care system. Medical schools and educators desire to minimize assessment bias, but there is no current consensus on effective approaches. Frontline teaching faculty have the opportunity to mitigate bias in clinical assessment in real time. Based on their experiences as educators, the authors created a case study about a student to illustrate ways bias affects learner assessment. In this paper, the authors use their case study to provide faculty with evidence-based approaches to mitigate bias and promote equity in clinical assessment. They focus on 3 components of equity in assessment: contextual equity, intrinsic equity, and instrumental equity. To address contextual equity, or the environment in which learners are assessed, the authors recommend building a learning environment that promotes equity and psychological safety, understanding the learners' contexts, and undertaking implicit bias training. Intrinsic equity, centered on the tools and practices used during assessment, can be promoted by using competency-based, structured assessment methods and employing frequent, direct observation to assess multiple domains. Instrumental equity, focused on communication and how assessments are used, includes specific, actionable feedback to support growth and use of competency-based narrative descriptors in assessments. Using these strategies, frontline clinical faculty members can actively promote equity in assessment and support the growth of a diverse health care workforce.
Affiliation(s)
- Chavon M Onumah
- C.M. Onumah is associate professor, Department of Medicine, George Washington School of Medicine and Health Sciences, Washington, DC
- Amber T Pincavage
- A.T. Pincavage is professor, Department of Medicine, University of Chicago Pritzker School of Medicine, Chicago, Illinois
- Cindy J Lai
- C.J. Lai is professor and director of medical student clinical education, Department of Medicine, University of California, San Francisco School of Medicine, San Francisco, California
- Diane L Levine
- D.L. Levine is professor and vice chair for education, Department of Internal Medicine, Wayne State University School of Medicine, Detroit, Michigan
- Nadia J Ismail
- N.J. Ismail is professor, Department of Medicine and Department of Education, Innovation and Technology, and vice dean, Baylor College of Medicine, Houston, Texas
- Irene Alexandraki
- I. Alexandraki is professor and senior associate dean, academic affairs, Office of Academic Affairs, University of Arizona College of Medicine-Phoenix, Phoenix, Arizona
- Nora Y Osman
- N.Y. Osman is associate professor, Harvard Medical School, and director of undergraduate medical education, Department of Medicine, Brigham and Women's Hospital, Boston, Massachusetts
40
|
Loosveld LM, Driessen EW, Theys M, Van Gerven PWM, Vanassche E. Combining Support and Assessment in Health Professions Education: Mentors' and Mentees' Experiences in a Programmatic Assessment Context. PERSPECTIVES ON MEDICAL EDUCATION 2023; 12:271-281. [PMID: 37426357 PMCID: PMC10327863 DOI: 10.5334/pme.1004] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 04/06/2023] [Accepted: 06/23/2023] [Indexed: 07/11/2023]
Abstract
Introduction Mentors in programmatic assessment support mentees with low-stakes feedback, which often also serves as input for high-stakes decision making. That process potentially causes tensions in the mentor-mentee relationship. This study explored how undergraduate mentors and mentees in health professions education experience combining developmental support and assessment, and what this means for their relationship. Methods The authors chose a pragmatic qualitative research approach and conducted semi-structured vignette-based interviews with 24 mentors and 11 mentees that included learners from medicine and the biomedical sciences. Data were analyzed thematically. Results How participants combined developmental support and assessment varied. In some mentor-mentee relationships it worked well, in others it caused tensions. Tensions were also created by unintended consequences of design decisions at the program level. Dimensions impacted by experienced tensions were: relationship quality, dependence, trust, and nature and focus of mentoring conversations. Mentors and mentees mentioned applying various strategies to alleviate tensions: transparency and expectation management, distinguishing between developmental support and assessment, and justifying assessment responsibility. Discussion Combining the responsibility for developmental support and assessment within an individual worked well in some mentor-mentee relationships, but caused tensions in others. On the program level, clear decisions should be made regarding the design of programmatic assessment: what is the program of assessment and how are responsibilities divided between all involved? If tensions arise, mentors and mentees can try to alleviate these, but continuous mutual calibration of expectations between mentors and mentees remains of key importance.
Affiliation(s)
- Lianne M. Loosveld
- Department of Educational Development & Research, School of Health Professions Education, Faculty of Health, Medicine and Life Sciences, Maastricht University, Universiteitssingel 60, 6229 ER Maastricht, the Netherlands
- Erik W. Driessen
- Department of Educational Development & Research, School of Health Professions Education, Faculty of Health, Medicine and Life Sciences, Maastricht University, Universiteitssingel 60, 6229 ER Maastricht, the Netherlands
- Mattias Theys
- Department of Educational Development & Research, School of Health Professions Education, Faculty of Health, Medicine and Life Sciences, Maastricht University, Universiteitssingel 60, 6229 ER Maastricht, the Netherlands
- Pascal W. M. Van Gerven
- Department of Educational Development & Research, School of Health Professions Education, Faculty of Health, Medicine and Life Sciences, Maastricht University, Universiteitssingel 60, 6229 ER Maastricht, the Netherlands
- Eline Vanassche
- Faculty of Psychology and Educational Sciences, KU Leuven Kulak, Etienne Sabbelaan 51, P.O. Box 7654, 8500 Kortrijk, Belgium
Collapse
|
41
|
Berger S, Stalmeijer RE, Marty AP, Berendonk C. Exploring the Impact of Entrustable Professional Activities on Feedback Culture: A Qualitative Study of Anesthesiology Residents and Attendings. ACADEMIC MEDICINE : JOURNAL OF THE ASSOCIATION OF AMERICAN MEDICAL COLLEGES 2023; 98:836-843. [PMID: 36812061 DOI: 10.1097/acm.0000000000005188] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/18/2023]
Abstract
PURPOSE Entrustable professional activities (EPAs) were introduced as a potential way to optimize workplace-based assessments. Yet, recent studies suggest that EPAs have not yet overcome all of the challenges to implementing meaningful feedback. The aim of this study was to explore the extent to which the introduction of EPAs via a mobile app impacts feedback culture as experienced by anesthesiology residents and attending physicians. METHOD Using a constructivist grounded theory approach, the authors interviewed a purposive and theoretical sample of residents (n = 11) and attendings (n = 11) at the Institute of Anaesthesiology, University Hospital of Zurich, where EPAs had recently been implemented. Interviews took place between February and December 2021. Data collection and analysis were conducted iteratively. The authors used open, axial, and selective coding to gain knowledge and understanding of the interplay between EPAs and feedback culture. RESULTS Participants reflected on a number of changes in their day-to-day experience of feedback culture with the implementation of EPAs. Three main mechanisms were instrumental in this process: lowering the feedback threshold, change in feedback focus, and gamification. Participants felt a lower threshold to seeking and giving feedback; feedback conversations became more frequent, shorter, and more focused on a specific topic, while feedback content concentrated more on technical skills and more attention was given to average performances. Residents indicated that the app-based approach fostered a game-like motivation to "climb levels," while attendings did not perceive a game-like experience. CONCLUSIONS EPAs may offer a solution to problems of infrequent occurrence of feedback and invite attention to average performances and technical competencies, but may come at the expense of feedback on nontechnical skills.
This study suggests that feedback culture and feedback instruments mutually influence each other.
Affiliation(s)
- Sabine Berger
- S. Berger is a third-year medical resident, Internal Medicine Training Program, St. Claraspital, Basel, Switzerland
- Renee E Stalmeijer
- R.E. Stalmeijer is associate professor, Department of Educational Development and Research, Faculty of Health, Medicine and Life Sciences, Maastricht University, Maastricht, The Netherlands
- Adrian P Marty
- A.P. Marty is currently senior attending physician and team lead for education, Institute of Anaesthesiology, Intensive Care and Pain Medicine, Orthopedic University Hospital Balgrist, Zurich, Switzerland. At the time of writing, he was attending physician, Institute of Anaesthesiology, University of Zurich, University Hospital of Zurich, Zurich, Switzerland
- Christoph Berendonk
- C. Berendonk is senior lecturer in medical education, Institute for Medical Education, University of Bern, Bern, Switzerland

42
Gauthier S, Braund H, Dalgarno N, Taylor D. Assessment-Seeking Strategies: Navigating the Decision to Initiate Workplace-Based Assessment. TEACHING AND LEARNING IN MEDICINE 2023:1-10. [PMID: 37384570 DOI: 10.1080/10401334.2023.2229803] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/29/2022] [Revised: 04/13/2023] [Accepted: 06/01/2023] [Indexed: 07/01/2023]
Abstract
Phenomenon: Competency-based medical education (CBME) relies on workplace-based assessment (WBA) to generate formative feedback (assessment for learning-AfL) and make inferences about competence (assessment of learning-AoL). When approaches to CBME rely on residents to initiate WBA, learners experience tension between seeking WBA for learning and for establishing competence. How learners resolve this tension may lead to unintended consequences for both AfL and AoL. We sought to explore the factors that impact both decisions to seek and not to seek WBA and use the findings to build a model of assessment-seeking strategy used by residents. In building this model we consider how the link between WBA and promotion or progression within a program impacts an individual's assessment-seeking strategy. Approach: We conducted 20 semi-structured interviews with internal medicine residents at Queen's University about the factors that influence their decision to seek or avoid WBA. Using grounded theory methodology, we applied a constant comparative analysis to collect data iteratively and identify themes. A conceptual model was developed to describe the interaction of factors impacting the decision to seek and initiate WBA. Findings: Participants identified two main motivations when deciding to seek assessments: the need to fulfill program requirements and the desire to receive feedback for learning. Analysis suggested that these motivations are often at odds with each other. Participants also described several moderating factors that impact the decision to initiate assessments, irrespective of the primary underlying motivation. These included resident performance, assessor factors, training program expectations, and clinical context. A conceptual framework was developed to describe the factors that lead to strategic assessment-seeking behaviors. 
Insights: Faced with the dual purpose of WBA in CBME, resident behavior in initiating assessment is guided by specific assessment-seeking strategies. These strategies reflect individual underlying motivations, influenced by four moderating factors. The findings have broad implications for programmatic assessment in a CBME context, including validity considerations for assessment data used in summative decision-making, such as decisions about readiness for unsupervised practice.
Affiliation(s)
- Stephen Gauthier
- Department of Medicine, Queen's University, Kingston, Ontario, Canada
- Heather Braund
- Office of Professional Development and Educational Scholarship, Faculty of Health Sciences, Queen's University, Kingston, Ontario, Canada
- Nancy Dalgarno
- Office of Professional Development and Educational Scholarship, Faculty of Health Sciences, Queen's University, Kingston, Ontario, Canada
- David Taylor
- Department of Medicine, Queen's University, Kingston, Ontario, Canada

43
Zhou Y, Wieringa TH, Brouwer J, Diemers AD, Bos NA. Challenges to acquire similar learning outcomes across four parallel thematic learning communities in a medical undergraduate curriculum. BMC MEDICAL EDUCATION 2023; 23:349. [PMID: 37202782 DOI: 10.1186/s12909-023-04341-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/27/2022] [Accepted: 05/09/2023] [Indexed: 05/20/2023]
Abstract
BACKGROUND To train physicians who are able to meet the evolving requirements of health care, the University Medical Center Groningen adopted a new curriculum, G2020, in 2014. This curriculum combines thematic learning communities with competency-based medical education and problem-based learning. In the learning community program, different learning tasks were used to train general competencies. The challenge of this program was whether students would acquire similar levels of learning outcomes within the different variations of the program. METHOD We used the assessment results of three cohorts for the first two bachelor years. We used progress tests and written tests to analyze knowledge development, and the assessment results of seven competencies to analyze competence development. Concerning knowledge, we used the cumulative deviation method to compare progress tests and the Kruskal-Wallis H test to compare written test scores between programs. Descriptive statistics were used to present all assessments of the students' competencies. RESULTS We observed similarly high passing rates for both competency and knowledge assessments in all programs. However, we did observe some differences. The two programs that focused more on competency development underperformed the other two programs on knowledge assessments but outperformed them on competency assessments. CONCLUSION This study indicates that it is possible to train students in different learning programs within one curriculum while achieving similar learning outcomes. There are, however, some differences in the levels obtained between the different programs. The new curriculum still needs to improve by balancing variations in the programs and the comparability of assessments across the programs.
Affiliation(s)
- Yan Zhou
- Key Laboratory of Intelligent Education Technology and Application of Zhejiang Province, Zhejiang Normal University, Jinhua, China
- Center for Education Development and Research in Health Professions (CEDAR), LEARN, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
- Thomas H Wieringa
- Department of Epidemiology, University of Groningen, University Medical Center Groningen, Groningen, The Netherlands
- Medical Decision Making, Department of Biomedical Data Sciences, Leiden University Medical Center, Leiden, The Netherlands
- Jasperina Brouwer
- Educational Sciences, Faculty Behavioural and Social Sciences, University of Groningen, Groningen, The Netherlands
- Agnes D Diemers
- Center for Education Development and Research in Health Professions (CEDAR), LEARN, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
- Nicolaas A Bos
- Center for Education Development and Research in Health Professions (CEDAR), LEARN, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
- Wenckebach Institute for Education and Training, University of Groningen, University Medical Center Groningen, Hanzeplein 1, P.O. Box 30.001, 9700 RB, Groningen, The Netherlands

44
Blanchette P, Poitras ME, St-Onge C. Assessing trainee's performance using reported observations: Perceptions of nurse meta-assessors. NURSE EDUCATION TODAY 2023; 126:105836. [PMID: 37167832 DOI: 10.1016/j.nedt.2023.105836] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/31/2022] [Revised: 04/12/2023] [Accepted: 04/30/2023] [Indexed: 05/13/2023]
Abstract
BACKGROUND Educational and health care organizations that prepare meta-assessors to fulfill their role in the assessment of trainees' performance based on reported observations have little literature to rely on. While the assessment of trainees' performance based on reported observations has been operationalized, we have yet to fully understand the elements that can affect its quality. Closing this gap in the literature will provide valuable insight that could inform the implementation and quality monitoring of the assessment of trainees' performance based on reported observations. OBJECTIVES The purpose of this study was to explore the elements to consider in the assessment of trainees' performance based on reported observations from the perspectives of meta-assessors. METHODS The authors adopted Sandelowski's qualitative descriptive approach to interview nurse meta-assessors from two nursing programs. A semi-structured interview guide was used to document the elements to consider in the assessment of nursing trainees' performance based on reported observations, and a survey was used to collect sociodemographic data. The authors conducted a thematic analysis of the interview transcripts. RESULTS Thirteen meta-assessors participated in the study. Three core themes were identified: (1) meta-assessors' appropriation of their perceived assessment roles and activities, (2) team climate of information sharing, and (3) challenges associated with the assessment of trainees' performance based on reported observations. Each theme comprises several subthemes. CONCLUSIONS To optimize the quality of the assessment of trainees' performance based on reported observations and ratings, HPE programs might consider how to better clarify the meta-assessor's roles and activities, as well as how interventions could be created to promote a climate of information sharing and to address the challenges identified.
This work will guide educational and health care organizations in better preparing and supporting meta-assessors and preceptors.
Affiliation(s)
- Marie-Eve Poitras
- Department of Family Medicine and Emergency Medicine, University of Sherbrooke, Sherbrooke, Quebec, Canada
- Christina St-Onge
- Department of Medicine, University of Sherbrooke, Sherbrooke, Quebec, Canada

45
Martin L, Blissett S, Johnston B, Tsang M, Gauthier S, Ahmed Z, Sibbald M. How workplace-based assessments guide learning in postgraduate education: A scoping review. MEDICAL EDUCATION 2023; 57:394-405. [PMID: 36286100 DOI: 10.1111/medu.14960] [Citation(s) in RCA: 6] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/30/2022] [Revised: 09/16/2022] [Accepted: 10/21/2022] [Indexed: 06/16/2023]
Abstract
INTRODUCTION Competency-based medical education (CBME) led to the widespread adoption of workplace-based assessment (WBA) with the promise of achieving assessment for learning. Despite this, studies have illustrated tensions between the summative and formative roles of WBA which undermine learning goals. Models of workplace-based learning (WBL) provide insight; however, these models excluded WBA. This scoping review synthesizes the primary literature addressing the role of WBA in guiding learning in postgraduate medical education, with the goal of identifying gaps to address in future studies. METHODS The search was applied to OVID Medline, Web of Science, ERIC and CINAHL databases; articles up to September 2020 were included. Titles and abstracts were screened by two reviewers, followed by a full-text review. Two members independently extracted and analysed quantitative and qualitative data using a descriptive-analytic technique rooted in Billett's four premises of WBL. Themes were synthesized and discussed until consensus. RESULTS All 33 papers focused on the perception of learning through WBA. The majority applied qualitative methodology (70%), and 12 studies (36%) made explicit reference to theory. Aligning with Billett's first premise, results reinforce that learning always occurs in the workplace. WBA helped guide learning goals and enhanced feedback frequency and specificity. Billett's remaining premises provided an important lens to understand how tensions that existed in WBL have been exacerbated with frequent WBA. As individuals engage in both work and WBA, they are slowly transforming the workplace. Culture and context frame individual experiences and the perceived authenticity of WBA. Finally, individuals will have different goals, and learn different things, from the same experience. CONCLUSION Analysing the WBA literature through the lens of WBL theory allows us to reframe previously described tensions.
We propose that future studies attend to learning theory, and demonstrate alignment with their philosophical position, to advance our understanding of assessment-for-learning in the workplace.
Affiliation(s)
- Leslie Martin
- Department of Medicine, McMaster University, Hamilton, Ontario, Canada
- Sarah Blissett
- Department of Medicine, Western University, London, Ontario, Canada
- Bronte Johnston
- McMaster Education Research, Innovation, and Theory Program, McMaster University, Hamilton, Ontario, Canada
- Michael Tsang
- Department of Medicine, McMaster University, Hamilton, Ontario, Canada
- Stephen Gauthier
- Department of Medicine, Queen's University, Kingston, Ontario, Canada
- Zeeshan Ahmed
- Department of Medicine, University of Ottawa, Ottawa, Ontario, Canada
- Matthew Sibbald
- Department of Medicine, McMaster University, Hamilton, Ontario, Canada

46
Wadi MM, Yusoff MSB, Taha MH, Shorbagi S, Nik Lah NAZ, Abdul Rahim AF. The framework of Systematic Assessment for Resilience (SAR): development and validation. BMC MEDICAL EDUCATION 2023; 23:213. [PMID: 37016407 PMCID: PMC10073620 DOI: 10.1186/s12909-023-04177-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 07/24/2022] [Accepted: 03/20/2023] [Indexed: 06/19/2023]
Abstract
BACKGROUND Burnout and depression among health professions education (HPE) students continue to rise, leading to unwanted effects that ultimately jeopardise optimal medical care and patient health. Promoting the resilience of medical students is one solution to this issue. Several interventions have been implemented to foster resilience, but they focus on aspects other than the primary cause: the assessment system. The purpose of this study is to develop a framework to promote resilience in assessment planning and practice. METHODS We followed the guidelines suggested by Whetten for constructing a theoretical model for framework development. There were four phases in the model development. In the first phase, different literature review methods were used, and additional students' perspectives were collected through focus group discussions. Then, using the data, we constructed the theoretical model in the second phase. In the third phase, we validated the newly developed model and its related guidelines. Finally, we performed response process validation of the model with a group of medical teachers. RESULTS The developed Systematic Assessment for Resilience (SAR) framework promotes four constructs: self-control, management, engagement, and growth, through five phases of assessment: assessment experience, assessment direction, assessment preparation, examiner focus, and student reflection. Each phase contains a number of practical guidelines to promote resilience. We rigorously triangulated each approach with its theoretical foundations and evaluated it on the basis of its content and process. The model showed high levels of content and face validity. CONCLUSIONS The SAR model offers a novel guideline for fostering resilience through assessment planning and practice. It includes a number of attainable and practical guidelines for enhancing resilience.
In addition, it opens a new horizon for HPE students' future use of this framework in the new normal condition (post-COVID-19).
Affiliation(s)
- Majed Mohammed Wadi
- Medical Education Department, College of Medicine, Qassim University, Buraydah, Saudi Arabia
- Muhamad Saiful Bahri Yusoff
- Medical Education Department, School of Medical Sciences, Universiti Sains Malaysia, Kota Bharu, Kelantan, Malaysia
- Mohamed Hassan Taha
- College of Medicine and Center of Medical Education, University of Sharjah, Sharjah, United Arab Emirates
- Sarra Shorbagi
- Department of Family and Community Medicine and Behavioral Science, College of Medicine, University of Sharjah, Sharjah, United Arab Emirates
- Nik Ahmad Zuky Nik Lah
- Obstetrics and Gynecology Department, School of Medical Sciences, Universiti Sains Malaysia, Kota Bharu, Kelantan, Malaysia
- Ahmad Fuad Abdul Rahim
- Medical Education Department, School of Medical Sciences, Universiti Sains Malaysia, Kota Bharu, Kelantan, Malaysia

47
Hu WCY, Dillon HCB, Wilkinson TJ. Educators as Judges: Applying Judicial Decision-Making Principles to High-Stakes Education Assessment Decisions. TEACHING AND LEARNING IN MEDICINE 2023; 35:168-179. [PMID: 35253558 DOI: 10.1080/10401334.2022.2038176] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/03/2021] [Accepted: 01/24/2022] [Indexed: 06/14/2023]
Abstract
Phenomenon: Programmatic assessment and competency-based education have highlighted the need to make robust high-stakes assessment decisions on learner performance from evidence of varying types and quality. Without guidance, lengthy deliberations by decision makers and competence committees can end inconclusively with unresolved concerns. These decisional dilemmas are heightened by their potential impacts. For learners, erroneous decisions may lead to an unjustified exit from a long-desired career, or premature promotion to clinical responsibilities. For educators, there is the risk of wrongful decision-making, leading to successful appeals and mistrust. For communities, ill-prepared graduates risk the quality and safety of care. Approaches such as psychometric analyses are limited when decision-makers are faced with seemingly contradictory qualitative and quantitative evidence about the same individual. Expertise in using such evidence to make fair and defensible decisions is well established in judicial practice but is yet to be practically applied to assessment decision-making. Approach: Through interdisciplinary exchange, we investigated medical education and judicial perspectives on decision-making to explore whether principles of decision-making in law could be applied to educational assessment decision-making. Using Dialogic Inquiry, an iterative process of scholarly and mutual critique, we contrasted assessment decision making in medical education with judicial practice to identify key principles in judicial decision-making relevant to educational assessment decisions. We developed vignettes about common but problematic high-stakes decision-making scenarios to test how these principles could apply. Findings: Over 14 sessions, we identified, described, and applied four principles for fair, reasonable, and transparent assessment decision-making. 
These were: (1) the person whose interests are affected has a right to know the case against them, and to be heard; (2) reasons for the decision should be given; (3) rules should be transparent and consistently applied; and (4) like cases should be treated alike and unlike cases treated differently. Reflecting our dialogic process, we report findings by separately presenting the medical educator and judicial perspectives, followed by a synthesis describing a preferred approach to decision-making in three vignettes. Insights: Judicial principles remind educators to consider both sides of arguments, to be consistent, and to demonstrate transparency when making assessment decisions. Dialogic Inquiry is a useful approach for generating interdisciplinary insights on challenges in medical education by critiquing difference (e.g., the meaning of objectivity) and achieving synthesis where possible (e.g., fairness is not equal treatment of all cases). Our principles and exemplars provide groundwork for promoting good practice and furthering assessment research toward fairer and more robust decisions that will assist learning.
Affiliation(s)
- Wendy C Y Hu
- Medical Education Unit, School of Medicine, Western Sydney University, Penrith South, New South Wales, Australia
- Hugh C B Dillon
- Faculty of Law, University of New South Wales, Sydney, Australia
- Tim J Wilkinson
- Education Unit, University of Otago, Christchurch, New Zealand

48
Bamber H. Evaluation of the Workplace-Based Assessment Anaesthesia-Clinical Evaluation Exercise (A-CEX) and Its Role in the Royal College of Anaesthetists 2021 Curriculum. Cureus 2023; 15:e37402. [PMID: 37181999 PMCID: PMC10171902 DOI: 10.7759/cureus.37402] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 04/09/2023] [Indexed: 05/16/2023] Open
Abstract
The workplace-based assessment (WPBA) Anaesthesia-Clinical Evaluation Exercise (A-CEX) is used in anaesthetic training in the Royal College of Anaesthetists 2021 curriculum. WPBAs are part of a multimodal approach to assessing competencies, but can be limited by their granularity. They are an essential component of assessment and are used in both a formative and summative capacity. The A-CEX is a form of WPBA which evaluates the knowledge, behaviours and skills of anaesthetists in training across a variety of 'real world' situations. An entrustment scale is assigned to the evaluation, which has implications for future practice and ongoing supervision requirements. Despite being a key component of the curriculum, the A-CEX has drawbacks. Its qualitative nature results in variation in the feedback provided amongst assessors, which may have ongoing implications for clinical practice. Furthermore, the completion of an A-CEX can be viewed as a 'tick box' exercise and does not guarantee that learning has taken place. Currently no direct evidence exists as to the benefit of the A-CEX in anaesthetic training, but extrapolated data from other studies may show validity. However, the assessment remains a key part of the 2021 curriculum. Future areas for consideration include education for those assessing trainees via the A-CEX, altering the matrix of assessment to a less granular approach, and a longitudinal study of the utility of the A-CEX in anaesthetic training.
49
Dart J, Rees C, Ash S, McCall L, Palermo C. Shifting the narrative and practice of assessing professionalism in dietetics education: An Australasian qualitative study. Nutr Diet 2023. [PMID: 36916155 DOI: 10.1111/1747-0080.12804] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/01/2022] [Revised: 01/10/2023] [Accepted: 02/01/2023] [Indexed: 03/16/2023]
Abstract
AIM We aimed to explore current approaches to assessing professionalism in dietetics education in Australia and New Zealand, asking what is working well and what needs to improve. METHOD We employed a qualitative interpretive approach and conducted interviews with academic and practitioner (workplace-based) educators (total sample n = 78) with a key stake in dietetics education across Australia and New Zealand. Data were analysed using team-based framework analysis. RESULTS Our findings suggest significant shifts in dietetics education in the area of professionalism assessment. Professionalism assessment is embedded in the formal curricula of dietetics programs and is occurring in both university and placement settings. In particular, advances have been demonstrated in those programs assessing professionalism as part of programmatic assessment. Progress has been enabled by philosophical and curricular shifts; clearer articulation and shared understandings of professionalism standards; enhanced learner agency and reduced power distance; early identification of, and intervention in, professionalism lapses; and the increased confidence and capabilities of educators. CONCLUSIONS These findings suggest there have been considerable advances in professionalism assessment in recent years, with shifts toward approaching professionalism through a more interpretivist lens, more holistically, and in a more student-centred way. Professionalism assessment in dietetics education is a shared responsibility and requires further development and transformation to more fully embed and strengthen curricular approaches across programs. Further work should investigate strategies for building safer learning cultures and capacity for professionalism conversations, and for strengthening approaches to remediation.
Affiliation(s)
- Janeane Dart
- Department of Nutrition, Dietetics and Food, Faculty of Medicine, Nursing and Health Sciences, Monash University, Melbourne, Victoria, Australia
- Charlotte Rees
- Head of School, School of Health Sciences, College of Health, Medicine and Wellbeing, University of Newcastle, Callaghan, New South Wales, Australia; Monash Centre for Scholarship in Health Education (MCSHE), Faculty of Medicine, Nursing and Health Sciences, Monash University, Melbourne, Victoria, Australia
- Susan Ash
- Department of Nutrition, Dietetics and Food, Faculty of Medicine, Nursing and Health Sciences, Monash University, Melbourne, Victoria, Australia
- Louise McCall
- Department of Nutrition, Dietetics and Food, Faculty of Medicine, Nursing and Health Sciences, Monash University, Melbourne, Victoria, Australia
- Claire Palermo
- Office of the Deputy Dean Education, Faculty of Medicine, Nursing and Health Sciences, Monash University, Melbourne, Victoria, Australia

50
Rietmeijer CBT, van Esch SCM, Blankenstein AH, van der Horst HE, Veen M, Scheele F, Teunissen PW. A phenomenology of direct observation in residency: Is Miller's 'does' level observable? MEDICAL EDUCATION 2023; 57:272-279. [PMID: 36515981 PMCID: PMC10107098 DOI: 10.1111/medu.15004] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Subscribe] [Scholar Register] [Received: 04/11/2022] [Revised: 12/08/2022] [Accepted: 12/10/2022] [Indexed: 06/17/2023]
Abstract
INTRODUCTION Guidelines on direct observation (DO) present DO as an assessment of Miller's 'does' level, that is, the learner's ability to function independently in clinical situations. The literature, however, indicates that residents may behave 'inauthentically' when observed. To minimise this 'observer effect', learners are encouraged to 'do what they would normally do' so that they can receive feedback on their actual work behaviour. Recent phenomenological research on patients' experiences with DO challenges this approach; patients needed, and caused, some participation of the observing supervisor. Although guidelines advise supervisors to minimise their presence, we are poorly informed on how some deliberate supervisor participation affects residents' experience in DO situations. Therefore, we investigated what residents essentially experienced in DO situations. METHODS We performed an interpretive phenomenological interview study with six general practice (GP) residents. We collected and analysed our data using the four phenomenological lenses of lived body, lived space, lived time and lived relationship. We grouped our open codes by interpreting what they revealed about common structures of residents' pre-reflective experiences. RESULTS Residents experienced the observing supervisor not just as an observer or assessor. They also experienced them both as a senior colleague and as the patient's familiar GP, which led to many additional interactions. When residents tried to act as if the supervisor was not there, they could feel insecure and handicapped, because the supervisor was there, changing the situation. DISCUSSION Our results indicate that the 'observer effect' is much more material than was previously understood.
Consequently, observing residents' 'authentic' behaviour at Miller's 'does' level, as if the supervisor was not there, seems impossible and a misleading concept: misleading because it may frustrate residents and cause supervisors to neglect patients' and residents' needs in DO situations. We suggest that one-way DO is better replaced by bi-directional DO in working-and-learning-together sessions.
Affiliation(s)
- Chris B. T. Rietmeijer
- Department of General Practice, Amsterdam UMC, location Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
- Suzanne C. M. van Esch
- Department of General Practice, Amsterdam UMC, location University of Amsterdam, Amsterdam, The Netherlands
- Annette H. Blankenstein
- Department of General Practice, Amsterdam UMC, location Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
- Henriëtte E. van der Horst
- Department of General Practice, Amsterdam UMC, location Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
- Mario Veen
- Department of General Practice, Erasmus Medical Center, Rotterdam, The Netherlands
- Fedde Scheele
- School of Medical Sciences, Athena Institute for Transdisciplinary Research, Amsterdam UMC, location Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
- Pim W. Teunissen
- School of Health Professions Education, Maastricht University, Maastricht, The Netherlands