1. Kinnear B, Schumacher DJ, Varpio L, Driessen EW, Konopasky A. Legitimation Without Argumentation: An Empirical Discourse Analysis of 'Validity as an Argument' in Assessment. Perspectives on Medical Education 2024; 13:469-480. PMID: 39372230; PMCID: PMC11451546; DOI: 10.5334/pme.1404.
Abstract
Introduction: Validity is frequently conceptualized in health professions education (HPE) assessment as an argument that supports the interpretation and uses of data. However, previous work has shown that many validity scholars believe argument and argumentation are relatively lacking in HPE. To better understand HPE's discourse around argument and argumentation with regard to assessment validity, the authors explored the discourses present in published HPE manuscripts.

Methods: The authors used a bricolage of critical discourse analysis approaches to understand how the language in influential peer reviewed manuscripts has shaped HPE's understanding of validity arguments and argumentation. The authors used multiple search strategies to develop a final corpus of 39 manuscripts that were seen as influential in how validity arguments are conceptualized within HPE. An analytic framework drawing on prior research on Argumentation Theory was used to code manuscripts before developing themes relevant to the research question.

Results: The authors found that the elaboration of argument and argumentation within HPE's validity discourse is scant, with few components of Argumentation Theory (such as intended audience) existing within the discourse. The validity as an argument discourse was legitimized via authorization (reference to authority), rationalization (reference to institutionalized action), and mythopoesis (narrative building). This legitimation has cemented the validity as an argument discourse in HPE despite minimal exploration of what argument and argumentation are.

Discussion: This study corroborates previous work showing the dearth of argument and argumentation present within HPE's validity discourse. An opportunity exists to use Argumentation Theory in HPE to better develop validation practices that support use of argument.
Affiliation(s)
- Benjamin Kinnear
  - Department of Pediatrics, University of Cincinnati College of Medicine, Cincinnati, OH, USA
- Daniel J. Schumacher
  - Department of Pediatrics, Cincinnati Children’s Hospital Medical Center/University of Cincinnati College of Medicine, Cincinnati, OH, USA
- Lara Varpio
  - Department of Pediatrics, Perelman School of Medicine, University of Pennsylvania, USA
  - Children’s Hospital of Philadelphia, Philadelphia, PA, USA
- Erik W. Driessen
  - School of Health Professions Education (SHE), Faculty of Health, Medicine and Life Sciences, Maastricht University, Maastricht, NL
- Abigail Konopasky
  - Geisel School of Medicine at Dartmouth, Hanover, New Hampshire, USA
2. Kelleher M, Kinnear B, Weber DE, Knopp MI, Schumacher D, Warm E. Point/counterpoint: Should we stop writing and reading letters of recommendation for residency selection? J Hosp Med 2024; 19:858-862. PMID: 38923809; DOI: 10.1002/jhm.13440.
Affiliation(s)
- Matthew Kelleher
  - Department of Pediatrics, Division of Hospital Medicine, Internal Medicine and Pediatrics Hospital Medicine, University of Cincinnati College of Medicine/Cincinnati Children's Hospital Medical Center, Cincinnati, Ohio, USA
- Benjamin Kinnear
  - Department of Pediatrics, Division of Hospital Medicine, University of Cincinnati College of Medicine/Cincinnati Children's Hospital Medical Center, Cincinnati, Ohio, USA
- Danielle E Weber
  - Department of Pediatrics, Division of Hospital Medicine, University of Cincinnati College of Medicine/Cincinnati Children's Hospital Medical Center, Cincinnati, Ohio, USA
- Michelle I Knopp
  - Department of Pediatrics, Division of Hospital Medicine, University of Cincinnati College of Medicine/Cincinnati Children's Hospital Medical Center, Cincinnati, Ohio, USA
- Daniel Schumacher
  - Department of Pediatrics, Division of Emergency Medicine, University of Cincinnati College of Medicine/Cincinnati Children's Hospital Medical Center, Cincinnati, Ohio, USA
- Eric Warm
  - Department of Internal Medicine and Division of General Internal Medicine, University of Cincinnati College of Medicine, Cincinnati, Ohio, USA
3. Bonvin R, Cerutti B. Shake-up in the world of assessment: Impressions from the Ottawa Conference on Assessment from Down Under. Swiss Med Wkly 2024; 154:3862. PMID: 39154250; DOI: 10.57187/s.3862.
Affiliation(s)
- Raphaël Bonvin
  - Medical Education Unit, University of Fribourg, Fribourg, Switzerland
- Bernard Cerutti
  - Faculty of Medicine, University of Geneva, Geneva, Switzerland
4. Blanchette P, Poitras ME, Lefebvre AA, St-Onge C. Making judgments based on reported observations of trainee performance: a scoping review in Health Professions Education. Canadian Medical Education Journal 2024; 15:63-75. PMID: 39310309; PMCID: PMC11415737; DOI: 10.36834/cmej.75522.
Abstract
Background: Educators now use reported observations when assessing trainees' performance. Unfortunately, they have little information about how to design and implement assessments based on reported observations.

Objective: The purpose of this scoping review was to map the literature on the use of reported observations in judging health professions education (HPE) trainees' performances.

Methods: Arksey and O'Malley's (2005) method was used with four databases (ERIC, CINAHL, MEDLINE, PsycINFO). Eligibility criteria for articles were: (1) documents in English or French including primary data on initial or professional training; (2) training in an HPE program; (3) workplace-based assessment; and (4) assessment based on reported observations. The inclusion/exclusion and data extraction steps were performed with an agreement rate above 90%. We developed a data extraction grid to chart the data. Descriptive analyses were used to summarize quantitative data, and the authors conducted thematic analysis for qualitative data.

Results: Based on 36 papers and 13 consultations, the team identified six steps characterizing trainee performance assessment based on reported observations in HPE: (1) making first contact, (2) observing and documenting the trainee performance, (3) collecting and completing assessment data, (4) aggregating assessment data, (5) inferring the level of competence, and (6) documenting and communicating the decision to the stakeholders.

Discussion: The design and implementation of assessment based on reported observations is a first step towards quality implementation, guiding educators and administrators responsible for graduating competent professionals. Future research might focus on understanding the context beyond assessor cognition to ensure the quality of meta-assessors' decisions.
5. Fatima SS, Sheikh NA, Osama A. Authentic assessment in medical education: exploring AI integration and student-as-partners collaboration. Postgrad Med J 2024:qgae088. PMID: 39041454; DOI: 10.1093/postmj/qgae088.
Abstract
BACKGROUND: Traditional assessments often lack flexibility, personalized feedback, real-world applicability, and the ability to measure skills beyond rote memorization. They may not adequately accommodate diverse learning styles and preferences, nor do they always foster critical thinking or creativity. The inclusion of Artificial Intelligence (AI), especially Generative Pre-trained Transformers, in medical education marks a significant shift, offering both exciting opportunities and notable challenges for authentic assessment practices. Various fields, including anatomy, physiology, pharmacy, dentistry, and pathology, are anticipated to increasingly employ the metaverse for authentic assessments. This innovative approach will likely enable students to engage in immersive, project-based learning experiences, facilitating interdisciplinary collaboration and providing a platform for real-world application of knowledge and skills.

METHODS: This commentary paper explores how AI, authentic assessment, and Student-as-Partners (SaP) methodologies can work together to reshape assessment practices in medical education.

RESULTS: The paper provides practical insights into effectively utilizing AI tools to create authentic assessments, offering educators actionable guidance to enhance their teaching practices. It also addresses the challenges and ethical considerations inherent in implementing AI-driven assessments, emphasizing the need for responsible and inclusive practices within medical education. Advocating for a collaborative approach between AI and SaP methodologies, the commentary proposes a robust plan to ensure ethical use while upholding academic integrity.

CONCLUSION: By navigating emerging assessment paradigms and promoting genuine evaluation of medical knowledge and proficiency, this collaborative effort aims to elevate the quality of medical education and better prepare learners for the complexities of clinical practice.
Affiliation(s)
- Syeda Sadia Fatima
  - Department of Biological and Biomedical Sciences, Aga Khan University, Karachi 74800, Pakistan
- Nabeel Ashfaque Sheikh
  - Medical Oncology, Shaukat Khanum Memorial Cancer Hospital and Research Center, Lahore 54000, Pakistan
- Athar Osama
  - INNOVentures Global (Pvt) Ltd., Karachi 75350, Pakistan
6. Ben Amor A, Farhat H, Alinier G, Ounallah A, Bouallegue O. Evaluation of the implementation of the objective structured clinical examination in health sciences education from a low-income context in Tunisia: A cross-sectional study. Health Sci Rep 2024; 7:e2116. PMID: 38742094; PMCID: PMC11089342; DOI: 10.1002/hsr2.2116.
Abstract
Background: The objective structured clinical examination (OSCE) is well established and designed to evaluate students' clinical competence and practical skills in a standardized and objective manner. While OSCEs are widespread in higher-income countries, their implementation in low-resource settings presents unique challenges that warrant further investigation.

Aim: This study aims to evaluate the perceptions of health sciences students and their educators regarding deploying OSCEs within the School of Health Sciences and Techniques of Sousse (SHSTS) in Tunisia, and the efficacy of OSCEs in healthcare education compared to traditional practical examination methods.

Methods: This cross-sectional study was conducted in June 2022, focusing on final-year health sciences students at the SHSTS in Tunisia. The study participants were students and their educators involved in the OSCEs from June 6th to June 11th, 2022. Anonymous paper-based 5-point Likert scale satisfaction surveys were distributed to the students and their educators, with a separate set of questions for each. Spearman, Mann-Whitney U, and Kruskal-Wallis tests were used to test differences in satisfaction with the OSCEs among the students and educators. The Wilcoxon rank test was used to examine differences in students' assessment scores between the OSCEs and the traditional practical examination methods.

Results: Satisfaction scores were high among health sciences educators and above average for students, with means of 3.82 ± 1.29 and 3.15 ± 0.56, respectively. The bivariate and multivariate analyses indicated a significant difference in satisfaction between the students' specialities. Further, a significant difference in the distribution of assessment scores between the practical examinations and the OSCEs was also demonstrated, with better performance in the OSCEs.

Conclusion: Our study provides evidence of a relatively high level of satisfaction with the OSCEs and better performance compared to the traditional practical examinations. These findings advocate for the efficacy of OSCEs in low-income countries and the need to sustain them.
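The group comparisons this abstract reports (Mann-Whitney U, Kruskal-Wallis, Wilcoxon) are all rank-based tests. As an illustrative sketch only (the authors' analysis code is not published, and the scores below are invented), the Mann-Whitney U statistic for two independent groups of Likert-style satisfaction scores can be computed in pure Python:

```python
def midranks(values):
    """Assign 1-based ranks, averaging the ranks of tied values."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(values):
        j = i
        # Extend j over the run of values tied with position i.
        while j + 1 < len(values) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j + 2) / 2  # mean of 1-based positions i+1 .. j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks


def mann_whitney_u(x, y):
    """Mann-Whitney U statistic (the smaller of U1 and U2) for two samples."""
    ranks = midranks(list(x) + list(y))
    r1 = sum(ranks[: len(x)])  # rank sum of the first sample
    u1 = r1 - len(x) * (len(x) + 1) / 2
    u2 = len(x) * len(y) - u1
    return min(u1, u2)


# Invented 5-point satisfaction scores for two hypothetical groups.
u = mann_whitney_u([5, 4, 4, 5, 3], [3, 3, 4, 2, 3])
```

The Kruskal-Wallis and Wilcoxon statistics the study also reports follow the same midrank logic, applied to more than two groups or to paired differences respectively.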
Affiliation(s)
- Asma Ben Amor
  - Faculty of Medicine “Ibn El Jazzar”, University of Sousse, Sousse, Tunisia
  - Higher School of Health Sciences and Techniques, University of Sousse, Sousse, Tunisia
- Hassan Farhat
  - Faculty of Medicine “Ibn El Jazzar”, University of Sousse, Sousse, Tunisia
  - Ambulance Service, Hamad Medical Corporation, Doha, Qatar
  - Faculty of Sciences, University of Sfax, Sfax, Tunisia
- Guillaume Alinier
  - Ambulance Service, Hamad Medical Corporation, Doha, Qatar
  - School of Health and Social Work, University of Hertfordshire, Hatfield, UK
  - Weill Cornell Medicine-Qatar, Doha, Qatar
  - Faculty of Health and Life Sciences, Northumbria University, Newcastle upon Tyne, UK
- Amina Ounallah
  - Faculty of Medicine “Ibn El Jazzar”, University of Sousse, Sousse, Tunisia
  - Department of Dermatology, Academic Hospital "Farhat Hached", Sousse, Tunisia
- Olfa Bouallegue
  - Faculty of Medicine “Ibn El Jazzar”, University of Sousse, Sousse, Tunisia
  - Microbiology Laboratory, Hygiene and Critical Care Departments, Academic Hospital of Sahloul, Sousse, Tunisia
7. Ellaway RH. Historicity and the impossible present. Advances in Health Sciences Education: Theory and Practice 2024; 29:361-365. PMID: 38683299; DOI: 10.1007/s10459-024-10330-6.
Abstract
In this editorial the editor considers issues of historicity (understanding things in their historical context) in health professions education and the sciences thereof, and argues for more attention to historical and other contextual factors in creating and appraising the research literature.
Affiliation(s)
- Rachel H Ellaway
  - Department of Community Health Sciences and Office of Health and Medical Education Scholarship, Cumming School of Medicine, University of Calgary, Calgary, AB, Canada
8. Miller DT, Michael S, Bell C, Brevik CH, Kaplan B, Svoboda E, Kendall J. Physical and biophysical markers of assessment in medical training: A scoping review of the literature. Medical Teacher 2024:1-9. PMID: 38688520; DOI: 10.1080/0142159x.2024.2345269.
Abstract
PURPOSE: Assessment in medical education has changed over time to measure the evolving skills required of current medical practice. Physical and biophysical markers of assessment attempt to use technology to gain insight into medical trainees' knowledge, skills, and attitudes. The authors conducted a scoping review to map the literature on the use of physical and biophysical markers of assessment in medical training.

MATERIALS AND METHODS: The authors searched seven databases on 1 August 2022 for publications that utilized physical or biophysical markers in the assessment of medical trainees (medical students, residents, fellows, and synonymous terms used in other countries). Physical or biophysical markers included: heart rate and heart rate variability, visual tracking and attention, pupillometry, hand motion analysis, skin conductivity, salivary cortisol, functional magnetic resonance imaging (fMRI), and functional near-infrared spectroscopy (fNIRS). The authors mapped the relevant literature using Bloom's taxonomy of knowledge, skills, and attitudes and extracted additional data, including study design, study environment, and novice vs. expert differentiation, from February to June 2023.

RESULTS: Of 6,069 unique articles, 443 met inclusion criteria. The majority of studies assessed trainees using heart rate variability (n = 160, 36%), followed by visual attention (n = 143, 32%), hand motion analysis (n = 67, 15%), salivary cortisol (n = 67, 15%), fMRI (n = 29, 7%), skin conductivity (n = 26, 6%), fNIRS (n = 19, 4%), and pupillometry (n = 16, 4%). The majority of studies (n = 167, 38%) analyzed non-technical skills, followed by studies that analyzed technical skills (n = 155, 35%), knowledge (n = 114, 26%), and attitudinal skills (n = 61, 14%). 169 studies (38%) attempted to use physical or biophysical markers to differentiate between novice and expert.

CONCLUSION: This review provides a comprehensive description of the current use of physical and biophysical markers in medical education training, including the current technology and skills assessed. Additionally, while physical and biophysical markers have the potential to augment current assessment in medical education, there remain significant gaps in research surrounding the reliability, validity, cost, practicality, and educational impact of implementing these markers of assessment.
Affiliation(s)
- Danielle T Miller
  - Department of Emergency Medicine, University of Colorado School of Medicine, Aurora, CO, USA
- Sarah Michael
  - Department of Emergency Medicine, University of Colorado School of Medicine, Aurora, CO, USA
- Colin Bell
  - Department of Emergency Medicine, University of Calgary, Calgary, Canada
- Cody H Brevik
  - Department of Emergency Medicine, University of Colorado School of Medicine, Aurora, CO, USA
- Bonnie Kaplan
  - Department of Emergency Medicine, University of Colorado School of Medicine, Aurora, CO, USA
- Ellie Svoboda
  - Education Informationist, Strauss Health Sciences Library, University of Colorado Anschutz Medical Campus, Aurora, CO, USA
- John Kendall
  - Department of Emergency Medicine, Stanford School of Medicine, Palo Alto, CA, USA
9. Schauber SK, Olsen AO, Werner EL, Magelssen M. Inconsistencies in rater-based assessments mainly affect borderline candidates: but using simple heuristics might improve pass-fail decisions. Advances in Health Sciences Education: Theory and Practice 2024. PMID: 38649529; DOI: 10.1007/s10459-024-10328-0.
Abstract
INTRODUCTION: Research in various areas indicates that expert judgment can be highly inconsistent. However, expert judgment is indispensable in many contexts. In medical education, experts often function as examiners in rater-based assessments. Here, disagreement between examiners can have far-reaching consequences. The literature suggests that inconsistencies in ratings depend on the level of performance a to-be-evaluated candidate shows. This possibility has not been addressed deliberately and with appropriate statistical methods. By adopting the theoretical lens of ecological rationality, we evaluate whether easily implementable strategies can enhance decision making in real-world assessment contexts.

METHODS: We address two objectives. First, we investigate the dependence of rater consistency on performance levels. We recorded videos of mock exams, had examiners (N = 10) evaluate four students' performances, and compared inconsistencies in performance ratings between examiner pairs using a bootstrapping procedure. Our second objective is to provide an approach that aids decision making by implementing simple heuristics.

RESULTS: We found that discrepancies were largely a function of the level of performance the candidates showed. Lower performances were rated more inconsistently than excellent performances. Furthermore, our analyses indicated that the use of simple heuristics might improve decisions in examiner pairs.

DISCUSSION: Inconsistencies in performance judgments continue to be a matter of concern, and we provide empirical evidence that they are related to candidate performance. We discuss implications for research and the advantages of adopting the perspective of ecological rationality. We point to directions both for further research and for development of assessment practices.
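The bootstrapping of between-examiner discrepancies described in the Methods can be approximated with a short, self-contained sketch. This is not the authors' code; the data layout (one score per examiner for a single performance) and the percentile confidence bounds are illustrative assumptions:

```python
import random
import statistics


def bootstrap_pair_discrepancy(ratings, n_boot=2000, seed=7):
    """Bootstrap the mean absolute score difference across examiner pairs.

    `ratings` maps an examiner id to the score that examiner gave one
    candidate performance. Returns the observed mean pairwise discrepancy
    and a 95% percentile bootstrap interval for it.
    """
    scores = list(ratings.values())
    # All unordered examiner pairs and their absolute score differences.
    diffs = [
        abs(a - b)
        for i, a in enumerate(scores)
        for b in scores[i + 1:]
    ]
    rng = random.Random(seed)
    boot_means = sorted(
        statistics.mean(rng.choices(diffs, k=len(diffs)))
        for _ in range(n_boot)
    )
    lo = boot_means[int(0.025 * n_boot)]
    hi = boot_means[int(0.975 * n_boot)]
    return statistics.mean(diffs), (lo, hi)


# Invented ratings: three examiners scoring the same mock-exam performance.
mean_diff, ci = bootstrap_pair_discrepancy({"ex1": 6, "ex2": 8, "ex3": 7})
```

Comparing such intervals across candidates at different performance levels is one way to see whether weaker performances attract wider examiner disagreement, as the study reports.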
Affiliation(s)
- Stefan K Schauber
  - Centre for Health Sciences Education, Faculty of Medicine, University of Oslo, Oslo, Norway
  - Centre for Educational Measurement (CEMO), Faculty of Educational Sciences, University of Oslo, Oslo, Norway
- Anne O Olsen
  - Department of Community Medicine and Global Health, Institute of Health and Society, University of Oslo, Oslo, Norway
- Erik L Werner
  - Department of General Practice, Institute of Health and Society, University of Oslo, Oslo, Norway
- Morten Magelssen
  - Centre for Medical Ethics, Institute of Health and Society, University of Oslo, Oslo, Norway
10. Garcia-Ros R, Ruescas-Nicolau MA, Cezón-Serrano N, Flor-Rufino C, Martin-Valenzuela CS, Sánchez-Sánchez ML. Improving assessment of procedural skills in health sciences education: a validation study of a rubrics system in neurophysiotherapy. BMC Psychol 2024; 12:147. PMID: 38486300; PMCID: PMC10941460; DOI: 10.1186/s40359-024-01643-7.
Abstract
BACKGROUND: The development of procedural skills is essential in health sciences education. Rubrics can be useful for learning and assessing these skills. To this end, a set of rubrics was developed for neurophysiotherapy maneuvers for undergraduates. Although students found the rubrics to be valid and useful in previous courses, the analysis of the practical exam results showed the need to change them in order to improve their validity and reliability, especially when used for summative purposes. After reviewing the rubrics, this paper analyzes their validity and reliability for promoting the learning of neurophysiotherapy maneuvers and assessing the acquisition of the procedural skills they involve.

METHODS: In this cross-sectional and psychometric study, six experts and 142 undergraduate students of a neurophysiotherapy subject from a Spanish university participated. The rubrics' validity (content and structural) and reliability (inter-rater and internal consistency) were analyzed. The students' scores in the subject practical exam derived from the application of the rubrics, as well as the difficulty and discrimination indices of the rubrics' criteria, were also determined.

RESULTS: The rubrics' content validity was found to be adequate (Content Validity Index > 0.90). The rubrics showed a unidimensional structure, an acceptable internal consistency (α = 0.71), and acceptable inter-rater reliability (Fleiss' κ = 0.44, ICC = 0.94). The scores of the subject practical exam practically covered the entire range of possible theoretical scores, with all criteria showing medium-low to medium difficulty indices, except for the one related to the physical therapist's position. All criteria exhibited adequate discrimination indices (rpbis > 0.39), as did the rubric as a whole (Ferguson's δ = 0.86). Students highlighted the rubrics' usefulness for learning the maneuvers, as well as their validity and reliability for formative and summative assessment.

CONCLUSIONS: The changed rubrics constitute a valid and reliable instrument for evaluating the execution quality of neurophysiotherapy maneuvers from a summative evaluation viewpoint. This study facilitates the development of rubrics aimed at promoting different practical skills in health sciences education.
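Fleiss' κ, reported above for inter-rater reliability, generalizes Cohen's κ to any fixed number of raters. A minimal pure-Python sketch (illustrative only, not the study's analysis code) computes it from a table of per-item category counts:

```python
def fleiss_kappa(counts):
    """Fleiss' kappa for `counts[i][j]`: the number of raters assigning
    item i to category j. Assumes every item has the same number of raters."""
    n_items = len(counts)
    n_raters = sum(counts[0])
    # Observed agreement: average per-item agreement P_i.
    p_i = [
        (sum(c * c for c in row) - n_raters) / (n_raters * (n_raters - 1))
        for row in counts
    ]
    p_bar = sum(p_i) / n_items
    # Chance agreement from the overall category proportions.
    total = n_items * n_raters
    p_j = [sum(row[j] for row in counts) / total for j in range(len(counts[0]))]
    p_e = sum(p * p for p in p_j)
    return (p_bar - p_e) / (1 - p_e)


# Invented table: 4 maneuver executions, each rated pass/fail by 6 raters.
kappa = fleiss_kappa([[6, 0], [5, 1], [2, 4], [0, 6]])
```

A κ of 1 indicates perfect agreement and 0 agreement no better than chance; the 0.44 reported above would fall in the commonly cited "moderate" band.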
Affiliation(s)
- Rafael Garcia-Ros
  - Department of Developmental and Educational Psychology, Faculty of Psychology, University of Valencia, Blasco Ibáñez Av. no. 21, Valencia, 46010, Spain
  - Neurophysiotherapy Teaching Innovation Group, Department of Physiotherapy, Faculty of Physiotherapy, University of Valencia, Gascó Oliag Street no. 5, Valencia, 46010, Spain
- Maria-Arantzazu Ruescas-Nicolau
  - Neurophysiotherapy Teaching Innovation Group, Department of Physiotherapy, Faculty of Physiotherapy, University of Valencia, Gascó Oliag Street no. 5, Valencia, 46010, Spain
  - Physiotherapy in Motion, Multispeciality Research Group (PTinMOTION), Department of Physiotherapy, Faculty of Physiotherapy, University of Valencia, Gascó Oliag Street no. 5, Valencia, 46010, Spain
- Natalia Cezón-Serrano
  - Neurophysiotherapy Teaching Innovation Group, Department of Physiotherapy, Faculty of Physiotherapy, University of Valencia, Gascó Oliag Street no. 5, Valencia, 46010, Spain
  - Physiotherapy in Motion, Multispeciality Research Group (PTinMOTION), Department of Physiotherapy, Faculty of Physiotherapy, University of Valencia, Gascó Oliag Street no. 5, Valencia, 46010, Spain
- Cristina Flor-Rufino
  - Neurophysiotherapy Teaching Innovation Group, Department of Physiotherapy, Faculty of Physiotherapy, University of Valencia, Gascó Oliag Street no. 5, Valencia, 46010, Spain
  - Department of Physiotherapy, Faculty of Physiotherapy, University of Valencia, Gascó Oliag Street no. 5, Valencia, 46010, Spain
- Constanza San Martin-Valenzuela
  - Neurophysiotherapy Teaching Innovation Group, Department of Physiotherapy, Faculty of Physiotherapy, University of Valencia, Gascó Oliag Street no. 5, Valencia, 46010, Spain
  - Research Unit in Clinical Biomechanics (UBIC), Department of Physiotherapy, Faculty of Physiotherapy, University of Valencia, Gascó Oliag Street no. 5, Valencia, 46010, Spain
- M Luz Sánchez-Sánchez
  - Neurophysiotherapy Teaching Innovation Group, Department of Physiotherapy, Faculty of Physiotherapy, University of Valencia, Gascó Oliag Street no. 5, Valencia, 46010, Spain
  - Physiotherapy in Motion, Multispeciality Research Group (PTinMOTION), Department of Physiotherapy, Faculty of Physiotherapy, University of Valencia, Gascó Oliag Street no. 5, Valencia, 46010, Spain
11. Caretta-Weyer HA, Smirnova A, Barone MA, Frank JR, Hernandez-Boussard T, Levinson D, Lombarts KMJMH, Lomis KD, Martini A, Schumacher DJ, Turner DA, Schuh A. The Next Era of Assessment: Building a Trustworthy Assessment System. Perspectives on Medical Education 2024; 13:12-23. PMID: 38274558; PMCID: PMC10809864; DOI: 10.5334/pme.1110.
Abstract
Assessment in medical education has evolved through a sequence of eras, each centering on distinct views and values. These eras include measurement (e.g., knowledge exams, objective structured clinical examinations), then judgments (e.g., workplace-based assessments, entrustable professional activities), and most recently systems or programmatic assessment, in which multiple types and sources of data are collected over time and combined by competency committees to ensure individual learners are ready to progress to the next stage in their training. Significantly less attention has been paid to the social context of assessment, which has led to an overall erosion of trust in assessment by a variety of stakeholders, including learners and frontline assessors. To meaningfully move forward, the authors assert that the reestablishment of trust should be foundational to the next era of assessment. In our actions and interventions, it is imperative that medical education leaders address and build trust in assessment at a systems level. To that end, the authors first review tenets on the social contextualization of assessment and its linkage to trust, and discuss the consequences should the current state of low trust continue. The authors then posit that trusting and trustworthy relationships can exist at individual as well as organizational and systems levels. Finally, the authors propose a framework to build trust at multiple levels in a future assessment system; one that invites and supports professional and human growth and has the potential to position assessment as a fundamental component of renegotiating the social contract between medical education and the health of the public.
Affiliation(s)
- Holly A. Caretta-Weyer
  - Department of Emergency Medicine, Stanford University School of Medicine, Palo Alto, California, USA
- Alina Smirnova
  - Department of Family Medicine, University of Calgary, Calgary, Alberta, Canada
  - Kern Institute for the Transformation of Medical Education, Medical College of Wisconsin, Milwaukee, Wisconsin, USA
- Michael A. Barone
  - NBME, Philadelphia, Pennsylvania, USA
  - Johns Hopkins University School of Medicine, Baltimore, Maryland, USA
- Jason R. Frank
  - Department of Emergency Medicine, University of Ottawa, Ottawa, Ontario, CA
- Dana Levinson
  - Josiah Macy Jr Foundation, Philadelphia, Pennsylvania, USA
- Kiki M. J. M. H. Lombarts
  - Department of Medical Psychology, Amsterdam University Medical Centers, University of Amsterdam, NL
  - Amsterdam Public Health research institute, Amsterdam, NL
- Kimberly D. Lomis
  - Undergraduate Medical Education Innovations, American Medical Association, Chicago, Illinois, USA
- Abigail Martini
  - Cincinnati Children’s Hospital Medical Center, Cincinnati, Ohio, USA
- Daniel J. Schumacher
  - Division of Emergency Medicine, Cincinnati Children’s Hospital Medical Center/University of Cincinnati College of Medicine, Cincinnati, Ohio, USA
- David A. Turner
  - American Board of Pediatrics, Chapel Hill, North Carolina, USA
- Abigail Schuh
  - Division of Emergency Medicine, Medical College of Wisconsin, Milwaukee, Wisconsin, USA
12. Li S, Qi X, Li H, Zhou W, Jiang Z, Qi J. Exploration of validity evidence for core residency entrustable professional activities in Chinese pediatric residency. Front Med (Lausanne) 2024; 10:1301356. PMID: 38259855; PMCID: PMC10801054; DOI: 10.3389/fmed.2023.1301356.
Abstract
Introduction: This study seeks to explore validity and reliability evidence for core residency entrustable professional activities (CR-EPAs) that were developed by Peking University First Hospital (PKUFH) in 2020.

Methods: A prospective cohort study was conducted in PKUFH. Trainers (raters) assessed pediatric residents on CR-EPAs biannually over one academic year. Critical components within a validity evidence framework were examined: response process (rater perceptions), internal structure (reliability and the contributions of different variance sources), and consequences (potential use of a cutoff score).

Results: In total, 37 residents were enrolled, and 111 and 99 trainers' ratings were collected in Fall 2020 and Spring 2021, respectively. For rater perceptions, all raters considered CR-EPAs highly operational and convenient. Across all ratings, individual EPA scores correlated with the total EPA score, with Spearman correlation coefficients ranging from 0.805 to 0.919. EPA 2 (select and interpret the auxiliary examinations), EPA 5 (prepare and complete medical documents), EPA 6 (provide an oral presentation of a case or a clinical encounter), and EPA 7 (identify and manage the general clinical conditions) were the EPAs that correlated significantly with the other EPAs. The generalizability theory results indicated that the variability due to residents was the highest (nearly 78.5%), yielding large reliability estimates. The matching results indicate that the lowest error is located at 5.933.

Conclusion: The ratings showed good validity, and were reliable based on G-theory. CR-EPAs have a sound internal structure and promising consequences. Our results indicate that CR-EPAs are a robust assessment tool for workplace-based training in a carefully designed setting.
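The generalizability-theory claim that nearly 78.5% of score variance is attributable to residents can be illustrated with a one-facet (residents crossed with raters) variance decomposition. This is a simplification of a full G-study and not the authors' code; the fully crossed design and the scores below are assumptions for demonstration:

```python
def variance_components(scores):
    """One-facet G-study sketch: scores[i][j] = rating of resident i by rater j.

    Estimates the person (resident) variance component and its share of total
    variance via a one-way random-effects ANOVA decomposition.
    """
    n_p = len(scores)       # number of residents (persons)
    n_r = len(scores[0])    # number of raters per resident
    grand = sum(sum(row) for row in scores) / (n_p * n_r)
    person_means = [sum(row) / n_r for row in scores]
    ss_between = n_r * sum((m - grand) ** 2 for m in person_means)
    ss_within = sum(
        (x - person_means[i]) ** 2 for i, row in enumerate(scores) for x in row
    )
    ms_between = ss_between / (n_p - 1)
    ms_within = ss_within / (n_p * (n_r - 1))
    # Expected-mean-squares solution, truncated at zero.
    var_person = max((ms_between - ms_within) / n_r, 0.0)
    total = var_person + ms_within
    share = var_person / total if total else 0.0
    return var_person, share


# Invented fully crossed data: 4 residents each scored by 3 raters.
vp, share = variance_components([[7, 8, 7], [5, 5, 6], [9, 9, 8], [4, 5, 4]])
```

A large `share` (as in the study's reported 78.5%) means most score variance tracks true differences between residents rather than rater disagreement, which is what drives high G-theory reliability coefficients.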
Affiliation(s)
- Shan Li
- Department of Paediatrics, Peking University First Hospital, Beijing, China
- Xin Qi
- Department of Plastic Surgery and Burns, Peking University First Hospital, Beijing, China
- Haichao Li
- Department of Respiratory and Critical Medicine, Peking University First Hospital, Beijing, China
- Wenjing Zhou
- School of Public Health, Peking University, Beijing, China
- Zhehan Jiang
- Institute of Medical Education and National Center for Health Professions Education Department, Peking University, Beijing, China
- Jianguang Qi
- Department of Paediatrics, Peking University First Hospital, Beijing, China

13
McDonald J, Hu W, Heeneman S. Struggles and Joys: A Mixed Methods Study of the Artefacts and Reflections in Medical Student Portfolios. PERSPECTIVES ON MEDICAL EDUCATION 2024; 13:1-11. [PMID: 38188594 PMCID: PMC10768569 DOI: 10.5334/pme.1029] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 04/24/2023] [Accepted: 12/08/2023] [Indexed: 01/09/2024]
Abstract
Introduction Portfolios scaffold reflection on experience so that students can plan their learning. To elicit reflection, the learning experiences documented in portfolios must be meaningful. To understand which experiences first- and second-year medical students find meaningful, we studied patterns in the artefacts chosen for portfolios and their associated written reflections. Methods This explanatory mixed methods study of a longitudinal dataset of 835 artefacts from 37 medical students' portfolios identified patterns in artefact types over time. Mixed model logistic regression analysis identified time, student, and curriculum factors associated with inclusion of the most common types of artefacts. Thematic analysis of participants' reflections about their artefacts provided insight into their choices. Interpretation of the integrated findings was informed by Transformative Learning (TL) theory. Results Artefact choices changed over time, influenced by curriculum changes and personal factors. In first year, the most common artefacts were Problem Based Learning mechanism diagrams and group photos representing classwork; in second year, they were written assignments and 'selfies' representing social and clinical activities. Themes in the written reflections were Landmarks and Progress, Struggles and Strategies, Connection and Collaboration, and Joyful Memories for Balance. Coursework artefacts and photographic self-portraits represented all levels of transformative learning from across the curriculum. Conclusions Medical students chose artefacts to represent challenging and/or landmark experiences, balanced by experiences that were joyful or fostered peer connection. Novelty influenced choice. To maximise learning, students should draw on all of these experiences in supported reflection with an advisor. Portfolio tasks should be timed to coincide with the introduction of new challenges.
Affiliation(s)
- Jenny McDonald
- Translational Health Research Institute, School of Medicine, Western Sydney University, South Penrith, Australia
- School of Health Profession Education, Maastricht University, the Netherlands
- Wendy Hu
- Translational Health Research Institute, School of Medicine, Western Sydney University, South Penrith, Australia
- Sylvia Heeneman
- School of Health Profession Education, Maastricht University, the Netherlands

14
Schumacher DJ, Kinnear B, Carraccio C, Holmboe E, Busari JO, van der Vleuten C, Lingard L. Competency-based medical education: The spark to ignite healthcare's escape fire. MEDICAL TEACHER 2024; 46:140-146. [PMID: 37463405 DOI: 10.1080/0142159x.2023.2232097] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 07/20/2023]
Abstract
High-value care is what patients deserve and what healthcare professionals should deliver. However, it is not what happens much of the time. Quality improvement master Dr. Don Berwick argued more than two decades ago that American healthcare needs an escape fire, which is a new way of seeing and acting in a crisis situation. While coined in the U.S. context, the analogy applies in other Western healthcare contexts as well. Therefore, in this paper, the authors revisit Berwick's analogy, arguing that medical education can, and should, provide the spark for such an escape fire across the globe. They assert that medical education can achieve this by fully embracing competency-based medical education (CBME) as a way to place medicine's focus on the patient. CBME targets training outcomes that prepare graduates to optimize patient care. The authors use the escape fire analogy to argue that medical educators must drop long-held approaches and tools; treat CBME implementation as an adaptive challenge rather than a technical fix; demand genuine, rich discussions and engagement about the path forward; and, above all, center the patient in all they do.
Affiliation(s)
- Daniel J Schumacher
- Pediatrics, Cincinnati Children's Hospital Medical Center and, University of Cincinnati College of Medicine, Cincinnati, Ohio, USA
- Benjamin Kinnear
- Pediatrics and Internal Medicine, Cincinnati Children's Hospital Medical Center and, University of Cincinnati College of Medicine, Cincinnati, Ohio, USA
- Carol Carraccio
- Vice President of Competency-Based Medical Education, American Board of Pediatrics, Chapel Hill, North Carolina, USA
- Eric Holmboe
- Milestones Development and Evaluation Officer, Accreditation Council for Graduate Medical Education, Chicago, Illinois, USA
- Jamiu O Busari
- Department of Educational Development and Research, Faculty of Health, Medicine, and Life Sciences, Maastricht University, Maastricht, The Netherlands
- Cees van der Vleuten
- Department of Educational Development and Research, Faculty of Health, Medicine, and Life Sciences, School of Health Professions Education, Maastricht University, Maastricht, The Netherlands
- Lorelei Lingard
- Department of Medicine, and Center for Education Research & Innovation, Schulich School of Medicine and Dentistry at Western University, London, Ontario, Canada

15
Tavares W, Kinnear B, Schumacher DJ, Forte M. "Rater training" re-imagined for work-based assessment in medical education. ADVANCES IN HEALTH SCIENCES EDUCATION : THEORY AND PRACTICE 2023; 28:1697-1709. [PMID: 37140661 DOI: 10.1007/s10459-023-10237-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/13/2023] [Accepted: 04/30/2023] [Indexed: 05/05/2023]
Abstract
In this perspective, the authors critically examine "rater training" as it has been conceptualized and used in medical education. By "rater training," they mean the educational events intended to improve rater performance and contributions during assessment events. Historically, rater training programs have focused on modifying faculty behaviours to achieve psychometric ideals (e.g., reliability, inter-rater reliability, accuracy). The authors argue these ideals may now be poorly aligned with contemporary research informing work-based assessment, introducing a compatibility threat, with no clear direction on how to proceed. To address this issue, the authors provide a brief historical review of "rater training" and provide an analysis of the literature examining the effectiveness of rater training programs. They focus mainly on what has served to define effectiveness or improvements. They then draw on philosophical and conceptual shifts in assessment to demonstrate why the function, effectiveness aims, and structure of rater training requires reimagining. These include shifting competencies for assessors, viewing assessment as a complex cognitive task enacted in a social context, evolving views on biases, and reprioritizing which validity evidence should be most sought in medical education. The authors aim to advance the discussion on rater training by challenging implicit incompatibility issues and stimulating ways to overcome them. They propose that "rater training" (a moniker they suggest be reserved for strong psychometric aims) be augmented with "assessor readiness" programs that link to contemporary assessment science and enact the principle of compatibility between that science and ways of engaging with advances in real-world faculty-learner contexts.
Affiliation(s)
- Walter Tavares
- Department of Health and Society, Wilson Centre, Temerty Faculty of Medicine, University of Toronto, Toronto, ON, Canada
- Benjamin Kinnear
- Department of Pediatrics, Cincinnati Children's Hospital Medical Center, University of Cincinnati College of Medicine, Cincinnati, OH, USA
- Daniel J Schumacher
- Department of Pediatrics, Cincinnati Children's Hospital Medical Center, University of Cincinnati College of Medicine, Cincinnati, OH, USA
- Milena Forte
- Department of Family and Community Medicine, Temerty Faculty of Medicine, Mount Sinai Hospital, University of Toronto, Toronto, ON, Canada

16
Ras T, Stander Jenkins L, Lazarus C, van Rensburg JJ, Cooke R, Senkubuge F, N Dlova A, Singaram V, Daitz E, Buch E, Green-Thompson L, Burch V. "We just don't have the resources": Supervisor perspectives on introducing workplace-based assessments into medical specialist training in South Africa. BMC MEDICAL EDUCATION 2023; 23:832. [PMID: 37932732 PMCID: PMC10629100 DOI: 10.1186/s12909-023-04840-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/30/2023] [Accepted: 11/02/2023] [Indexed: 11/08/2023]
Abstract
BACKGROUND South Africa (SA) is on the brink of implementing workplace-based assessment (WBA) in all medical specialist training programmes in the country. Although competency-based medical education (CBME) has been in place for about two decades, WBA offers new and interesting challenges: the literature indicates that it has resource, regulatory, educational, and social complexities. Implementing WBA therefore requires a careful approach to this complex challenge. To date, insufficient exploration of WBA practices, experiences, perceptions, and aspirations in healthcare has been undertaken in South Africa or Africa. The aim of this study was to identify factors that could affect WBA implementation from the perspective of medical specialist educators. The outcomes reported here are themes derived from potential barriers and enablers to WBA implementation in the SA context. METHODS This paper reports the qualitative data generated from a mixed methods study that employed a parallel convergent design, using a self-administered online questionnaire to collect data from participants. Data were analysed thematically and inductively. RESULTS The themes that emerged were: structural readiness for WBA; staff capacity to implement WBA; quality assurance; and the social dynamics of WBA. CONCLUSIONS Participants demonstrated impressive insight into their respective working environments, producing an extensive list of barriers and enablers. Despite significant structural and social barriers, this cohort perceives the impending implementation of WBA as a positive development in registrar training in South Africa. We make recommendations for future research and for medical specialist education leaders in SA.
Affiliation(s)
- Tasleem Ras
- University of Cape Town, Cape Town, South Africa
- Richard Cooke
- Witwatersrand University, Johannesburg, South Africa
- Emma Daitz
- University of Cape Town, Cape Town, South Africa
- Eric Buch
- Colleges of Medicine of South Africa, Johannesburg, South Africa
- Lionel Green-Thompson
- University of Cape Town & South African Committee Of Medical Deans, Cape Town, South Africa
- Vanessa Burch
- Colleges of Medicine of South Africa, Johannesburg, South Africa

17
Kibble J, Plochocki J. Comparing Machine Learning Models and Human Raters When Ranking Medical Student Performance Evaluations. J Grad Med Educ 2023; 15:488-493. [PMID: 37637337 PMCID: PMC10449343 DOI: 10.4300/jgme-d-22-00678.1] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 09/12/2022] [Revised: 05/10/2023] [Accepted: 06/01/2023] [Indexed: 08/29/2023] Open
Abstract
Background The Medical Student Performance Evaluation (MSPE), a narrative summary of each student's academic and professional performance in US medical school, is long, making it challenging for residency programs evaluating large numbers of applicants. Objective To create a rubric for assessing MSPE narratives and to compare the ability of 3 commercially available machine learning models (MLMs) to rank MSPEs in order of positivity. Methods Thirty of a possible 120 MSPEs from the University of Central Florida class of 2020 were de-identified and subjected to manual scoring and ranking by a pair of faculty members, using a new rubric based on the Accreditation Council for Graduate Medical Education competencies, and to global sentiment analysis by the MLMs. Correlation analysis was used to assess reliability and agreement between the student rank orders produced by faculty and by the MLMs. Results The intraclass correlation coefficient used to assess faculty interrater reliability was 0.864 (P<.001; 95% CI 0.715-0.935) for total rubric scores and ranged from 0.402 to 0.768 for isolated subscales; faculty rank orders were also highly correlated (rs=0.758; P<.001; 95% CI 0.539-0.881). The authors report good feasibility, as the rubric was easy to use and added minimal time to reading MSPEs. The MLMs correctly reported a positive sentiment for all 30 MSPE narratives, but their rank orders showed no significant correlations between different MLMs or with faculty rankings. Conclusions The rubric for manual grading provided reliable overall scoring and ranking of MSPEs. The MLMs accurately detected positive sentiment in the MSPEs but were unable to provide reliable rank ordering.
Affiliation(s)
- Jonathan Kibble
- Both authors are with University of Central Florida College of Medicine. Jonathan Kibble, PhD, is Professor of Medical Education; and

18
Valentine N, Durning SJ, Shanahan EM, Schuwirth L. Fairness in Assessment: Identifying a Complex Adaptive System. PERSPECTIVES ON MEDICAL EDUCATION 2023; 12:315-326. [PMID: 37520508 PMCID: PMC10377744 DOI: 10.5334/pme.993] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 03/31/2023] [Accepted: 07/02/2023] [Indexed: 08/01/2023]
Abstract
Introduction Assessment design in health professions education is continuously evolving, with an increasing desire to better embrace human judgement in assessment. It is therefore essential to understand what makes this judgement fair. This study builds on existing literature by examining how assessment leaders conceptualise the characteristics of fair judgement. Methods Sixteen assessment leaders from 15 medical schools in Australia and New Zealand participated in online focus groups. Data collection and analysis occurred concurrently and iteratively. We used the constant comparison method to identify themes and build on an existing conceptual model of fair judgement in assessment. Results Fairness is a multi-dimensional construct with components at the environment, system, and individual levels. Components influencing fairness include articulated and agreed learning outcomes relating to the needs of society, and a culture which allows for learner support, stakeholder agency, and learning (environmental level); the collection, interpretation, and combination of evidence, and procedural strategies (system level); and appropriate individual assessments and assessor expertise and agility (individual level). Discussion Within the data we observed a fractal, that is, an infinite pattern repeating at different scales, suggesting that fair judgement should be considered a complex adaptive system. Within complex adaptive systems, it is primarily the interactions between entities, not simply the components themselves, that influence the outcome. Viewing fairness in assessment through a lens of complexity, rather than as a linear, causal model, has significant implications for how we design assessment programs and seek to use human judgement in assessment.
Affiliation(s)
- Nyoli Valentine
- Prideaux Discipline of Clinical Education, Flinders University, Bedford Park, South Australia, Australia
- Steven J. Durning
- Department of Medicine, Director, Center for Health Professions Education, Uniformed Services University of the Health Sciences, Bethesda, MD, United States
- Lambert Schuwirth
- Prideaux Discipline of Clinical Education, Flinders University, Bedford Park, South Australia, Australia

19
Kozato A, Shikino K, Matsuyama Y, Hayashi M, Kondo S, Uchida S, Stanyon M, Ito S. A qualitative study examining the critical differences in the experience of and response to formative feedback by undergraduate medical students in Japan and the UK. BMC MEDICAL EDUCATION 2023; 23:408. [PMID: 37277728 PMCID: PMC10240445 DOI: 10.1186/s12909-023-04257-6] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/29/2022] [Accepted: 04/13/2023] [Indexed: 06/07/2023]
Abstract
BACKGROUND Formative feedback plays a critical role in guiding learners toward competence, serving as an opportunity for reflection and feedback on their learning progress and needs. Medical education in Japan has historically been dominated by a summative assessment paradigm, in contrast to countries such as the UK where there are greater opportunities for formative feedback. How this difference affects students' interaction with feedback has not been studied. We aimed to explore the difference in students' perceptions of feedback in Japan and the UK. METHODS The study was designed and analysed through a constructivist grounded theory lens. Medical students in Japan and the UK were interviewed about the formative assessment and feedback they received during clinical placements. We undertook purposeful sampling and concurrent data collection. Data analysis through open and axial coding, with iterative discussion among research group members, was conducted to develop a theoretical framework. RESULTS Japanese students perceived feedback as a model answer provided by tutors which they should not critically question, in contrast with the views of UK students. Japanese students viewed formative assessment as an opportunity to gauge whether they were achieving the pass mark, while UK students used the experience for reflective learning. CONCLUSIONS The Japanese students' experience of formative assessment and feedback supports the view that medical education and examination systems in Japan are focused on summative assessment, which operates alongside culturally derived social pressures, including the expectation to correct mistakes. These findings provide new insights for supporting students to learn from formative feedback in both Japanese and UK contexts.
Affiliation(s)
- An Kozato
- Postgraduate Education Centre, Ipswich Hospital NHS Trust, Ipswich, UK
- Ipswich Hospital, Heath Rd, IP4 5PD, Ipswich, UK
- Kiyoshi Shikino
- Health Professional Development Center, Chiba University Hospital, Chiba, Japan
- Department of General Medicine, Chiba University Hospital, Chiba, Japan
- Mikio Hayashi
- Center for Medical Education, Kansai Medical University, Osaka, Japan
- Master of Medical Sciences in Medical Education, Harvard Medical School, Boston, MA, USA
- Satoshi Kondo
- Department of Medical Education Studies, Graduate School of Medicine, International Research Center for Medical Education, The University of Tokyo, Tokyo, Japan
- Center for Medical Education and Career Development, Graduate School of Medicine, University of Toyama, Toyama, Japan
- Shun Uchida
- Health Professional Development Center, Chiba University Hospital, Chiba, Japan
- Maham Stanyon
- Center for Medical Education and Career Development, Fukushima Medical University, Fukushima, Japan
- Shoichi Ito
- Health Professional Development Center, Chiba University Hospital, Chiba, Japan
- Department of Medical Education, Graduate School of Medicine, Chiba University, Chiba, Japan

20
Sjoquist LK, Surowiec SM, Guy JW. A Pharmacy Drug Knowledge Assessment Pilot: Who Will Fly Farthest and What Downs the Plane? PHARMACY 2023; 11:pharmacy11030085. [PMID: 37218967 DOI: 10.3390/pharmacy11030085] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/03/2023] [Revised: 05/07/2023] [Accepted: 05/10/2023] [Indexed: 05/24/2023] Open
Abstract
OBJECTIVE To evaluate the effectiveness of a sequenced drug knowledge pilot for third professional year students in a capstone course. METHODS A three-phase drug knowledge pilot was conducted in spring 2022. Students completed thirteen assessments: nine low-stakes quizzes, three formative tests, and a final summative comprehensive exam. Results from the previous year's cohort (historical control), who completed only a summative comprehensive exam, were compared with the pilot (test group) results to assess effectiveness. Faculty spent over 300 hours developing content for the test group. RESULTS The pilot group had a mean score of 80.9% on the final competency exam, one percentage point lower than the control group, which had a less rigorous intervention. A sub-analysis that removed students who failed (<73%) the final competency exam found no significant difference in exam scores. One practice drug exam was moderately and significantly correlated with final knowledge exam performance in the control group (r = 0.62). The number of attempts on the low-stakes assessments had a low correlation with the final exam score in the test group (r = 0.24). CONCLUSION The results of this study suggest a need to further investigate best practices for knowledge-based drug characteristic assessments.
Affiliation(s)
- Laura K Sjoquist
- College of Pharmacy, The University of Findlay, Findlay, OH 45840, USA
- Jason W Guy
- College of Pharmacy, The University of Findlay, Findlay, OH 45840, USA

21
Blanchette P, Poitras ME, St-Onge C. Assessing trainee's performance using reported observations: Perceptions of nurse meta-assessors. NURSE EDUCATION TODAY 2023; 126:105836. [PMID: 37167832 DOI: 10.1016/j.nedt.2023.105836] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/31/2022] [Revised: 04/12/2023] [Accepted: 04/30/2023] [Indexed: 05/13/2023]
Abstract
BACKGROUND Educational and health care organizations that prepare meta-assessors for their role in assessing trainees' performance based on reported observations have little literature to rely on. While this form of assessment has been operationalized, we have yet to fully understand the elements that can affect its quality. Closing this gap in the literature will provide valuable insight to inform the implementation and quality monitoring of the assessment of trainees' performance based on reported observations. OBJECTIVES The purpose of this study was to explore the elements to consider in the assessment of trainees' performance based on reported observations, from the perspective of meta-assessors. METHODS The authors adopted Sandelowski's qualitative descriptive approach to interview nurse meta-assessors from two nursing programs. A semi-structured interview guide was used to document the elements to consider in assessing nursing trainees' performance based on reported observations, and a survey collected sociodemographic data. The authors conducted a thematic analysis of the interview transcripts. RESULTS Thirteen meta-assessors participated in the study. Three core themes were identified: (1) meta-assessors' appropriation of their perceived assessment roles and activities, (2) the team climate of information sharing, and (3) challenges associated with the assessment of trainees' performance based on reported observations. Each theme comprises several subthemes. CONCLUSIONS To optimize the quality of the assessment of trainees' performance based on reported observations and ratings, HPE programs might consider how to better clarify the meta-assessor's roles and activities, and how interventions could promote a climate of information sharing and address the challenges identified. This work will guide educational and health care organizations in better preparing and supporting meta-assessors and preceptors.
Affiliation(s)
- Marie-Eve Poitras
- Department of Family Medicine and Emergency Medicine, University of Sherbrooke, Sherbrooke, Quebec, Canada
- Christina St-Onge
- Department of Medicine, University of Sherbrooke, Sherbrooke, Quebec, Canada

22
Wadi MM, Yusoff MSB, Taha MH, Shorbagi S, Nik Lah NAZ, Abdul Rahim AF. The framework of Systematic Assessment for Resilience (SAR): development and validation. BMC MEDICAL EDUCATION 2023; 23:213. [PMID: 37016407 PMCID: PMC10073620 DOI: 10.1186/s12909-023-04177-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 07/24/2022] [Accepted: 03/20/2023] [Indexed: 06/19/2023]
Abstract
BACKGROUND Burnout and depression among health professions education (HPE) students continue to rise, with unwanted effects that ultimately jeopardise optimal medical care and patient health. Promoting the resilience of medical students is one solution to this issue. Several interventions have been implemented to foster resilience, but they focus on aspects other than a primary cause: the assessment system. The purpose of this study is to develop a framework to promote resilience in assessment planning and practice. METHODS We followed the guidelines suggested by Whetten for constructing a theoretical model. The model development had four phases. In the first phase, different literature review methods were used, and students' perspectives were additionally collected through focus group discussions. Using these data, we constructed the theoretical model in the second phase. In the third phase, we validated the newly developed model and its related guidelines. Finally, we performed response process validation of the model with a group of medical teachers. RESULTS The developed Systematic Assessment for Resilience (SAR) framework promotes four constructs: self-control, management, engagement, and growth, through five phases of assessment: assessment experience, assessment direction, assessment preparation, examiner focus, and student reflection. Each phase contains practical guidelines to promote resilience. We rigorously triangulated each approach with its theoretical foundations and evaluated it on the basis of its content and process. The model showed high content and face validity. CONCLUSIONS The SAR model offers a novel guideline for fostering resilience through assessment planning and practice, with a number of attainable and practical guidelines for enhancing resilience. It also opens a new horizon for HPE students' future use of this framework in the post-COVID-19 'new normal'.
Affiliation(s)
- Majed Mohammed Wadi
- Medical Education Department, College of Medicine, Qassim University, Buraydah, Saudi Arabia
- Muhamad Saiful Bahri Yusoff
- Medical Education Department, School of Medical Sciences, Universiti Sains Malaysia, Kota Bharu, Kelantan Malaysia
- Mohamed Hassan Taha
- College of Medicine and Center of Medical Education, University of Sharjah, Sharjah, United Arab Emirates
- Sarra Shorbagi
- Department of Family and Community Medicine and Behavioral Science, College of Medicine, University of Sharjah, Sharjah, United Arab Emirates
- Nik Ahmad Zuky Nik Lah
- Obstetrics and Gynecology Department, School of Medical Sciences, Universiti Sains Malaysia, Kota Bharu, Kelantan Malaysia
- Ahmad Fuad Abdul Rahim
- Medical Education Department, School of Medical Sciences, Universiti Sains Malaysia, Kota Bharu, Kelantan Malaysia

23
Renes J, van der Vleuten CPM, Collares CF. Utility of a multimodal computer-based assessment format for assessment with a higher degree of reliability and validity. MEDICAL TEACHER 2023; 45:433-441. [PMID: 36306368 DOI: 10.1080/0142159x.2022.2137011] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/16/2023]
Abstract
Multiple choice questions (MCQs) suffer from cueing, variable item quality, and an emphasis on testing factual knowledge. This study presents a novel multimodal test containing alternative item types in a computer-based assessment (CBA) format, designated the Proxy-CBA. The Proxy-CBA was compared with a standard MCQ-CBA regarding validity, reliability, standard error of measurement (SEM), and cognitive load, using a quasi-experimental crossover design. Biomedical students were randomized into two groups to sit a 65-item formative exam starting with the MCQ-CBA followed by the Proxy-CBA (group 1, n = 38), or the reverse (group 2, n = 35). Subsequently, a questionnaire on perceived cognitive load was administered and answered by 71 participants. Both CBA formats were analyzed according to parameters of Classical Test Theory and the Rasch model. Compared to the MCQ-CBA, the Proxy-CBA had lower raw scores (p < 0.001, η2 = 0.276), higher reliability estimates (p < 0.001, η2 = 0.498), lower SEM estimates (p < 0.001, η2 = 0.807), and lower theta ability scores (p < 0.001, η2 = 0.288). The questionnaire revealed no significant differences between the two CBA formats in perceived cognitive load. Compared to the MCQ-CBA, the Proxy-CBA showed increased reliability and a higher degree of validity with similar cognitive load, suggesting its utility as an alternative assessment format.
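The Classical Test Theory reliability estimates mentioned above can be sketched with Cronbach's alpha, a standard CTT coefficient. This is a minimal illustrative sketch, not the study's analysis: the 0/1 response matrix and the function name `cronbach_alpha` are invented assumptions.

```python
# Hypothetical sketch of a Classical Test Theory reliability estimate
# (Cronbach's alpha). The response matrix below is invented, not study data.

def cronbach_alpha(responses):
    """responses: one row per examinee, one column per item (scores)."""
    n_items = len(responses[0])

    def var(xs):
        # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    item_vars = [var([row[i] for row in responses]) for i in range(n_items)]
    total_var = var([sum(row) for row in responses])
    return n_items / (n_items - 1) * (1 - sum(item_vars) / total_var)

# Invented responses: a perfect Guttman pattern, so alpha comes out high.
responses = [
    [1, 1, 1, 1],
    [1, 1, 1, 0],
    [1, 1, 0, 0],
    [1, 0, 0, 0],
    [0, 0, 0, 0],
]
alpha = cronbach_alpha(responses)  # 0.8 for this matrix
```

Higher alpha (closer to 1) corresponds to the "higher reliability estimates" reported for the Proxy-CBA relative to the MCQ-CBA.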
Affiliation(s)
- Johan Renes
- Department of Human Biology, Maastricht University, The Netherlands
- Cees P M van der Vleuten
- Department of Educational Research and Development, Faculty of Health, Medicine and Life Sciences, Maastricht University, Maastricht, The Netherlands
- Carlos F Collares
- Department of Educational Research and Development, Faculty of Health, Medicine and Life Sciences, Maastricht University, Maastricht, The Netherlands
- European Board of Medical Assessors, Edinburgh, UK
- Stichting Aphasia.help, Maastricht, The Netherlands

24
Bladt F, Khanal P, Prabhu AM, Hauke E, Kingsbury M, Saleh SN. Medical students' perception of changes in assessments implemented during the COVID-19 pandemic. BMC MEDICAL EDUCATION 2022; 22:844. [PMID: 36476483 PMCID: PMC9727955 DOI: 10.1186/s12909-022-03787-9] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/07/2021] [Accepted: 09/29/2022] [Indexed: 06/04/2023]
Abstract
BACKGROUND COVID-19 posed many challenges to medical education in the United Kingdom (UK), including implementing assessments during 4 months of national lockdowns within a 2-year period, when in-person education was prohibited. This study aimed to identify medical school assessment formats that emerged under COVID-19 restrictions, investigate medical students' perspectives on these, and identify influencing factors. METHODS The study consisted of two phases. The first was a questionnaire asking medical students about the assessment changes they experienced, their satisfaction with these changes, and their preferences among the different assessment formats that emerged. The second phase involved semi-structured interviews with medical students across the UK to provide a deeper, contextualized understanding of the complex factors influencing their perspectives. RESULTS In the questionnaire responses, open-book assessments had the highest satisfaction and were the preferred option. Furthermore, in the case of assessment cancellation, an increase in the weighting of future assessments was preferred over an increase in the weighting of past assessments. Students were also satisfied with formative or pass-fail assessments. Interview analyses indicated that although cancellation or replacement of summative assessments with formative assessments reduced anxiety heightened by additional COVID-19 stressors, students worried about possible future knowledge gaps resulting from reduced motivation for assessment-related study. Students' satisfaction was also affected by the timeliness of communication from universities regarding changes and by student involvement in the decision-making processes. Perceived fairness and standardisation of test-taking conditions were ranked as the most important factors influencing student satisfaction, followed closely by familiarity with the format. In contrast, technical issues, lack of transparency about changes, perceived unfairness around invigilation, and uncertainty around changes in assessment format and weighting contributed to dissatisfaction. CONCLUSIONS Online open-book assessments were seen as the ideal format by all participants, and students who experienced them were the most satisfied with their assessment change; they were perceived as the fairest and the most authentic to real-life medical training. We seek to inform educators about student perceptions of successful assessment strategies under COVID-19 restrictions and to provide evidence for debate on ongoing assessment reform and innovation. While this work looks specifically at assessment changes during COVID-19, understanding the factors affecting student perception of assessment is applicable to examinations beyond COVID-19.
Affiliation(s)
- Francesca Bladt
- Imperial College School of Medicine, Imperial College London, London, UK
- Prakriti Khanal
- Imperial College School of Medicine, Imperial College London, London, UK
- Elizabeth Hauke
- Centre for Languages, Culture, and Communication, Imperial College London, London, UK
- Martyn Kingsbury
- Centre for Higher Education Research, Imperial College London, London, UK
- Sohag Nafis Saleh
- Imperial College School of Medicine, Imperial College London, London, UK

25
Affiliation(s)
- Erin S. Barry
- Erin S. Barry, MS, is Assistant Professor, Department of Military & Emergency Medicine and Department of Anesthesiology, F. Edward Hébert School of Medicine, Uniformed Services University of the Health Sciences, and Doctoral Candidate, School of Health Professions Education, Maastricht University, Maastricht, the Netherlands
- Jerusalem Merkebu
- Jerusalem Merkebu, PhD, is Assistant Professor, Department of Medicine, F. Edward Hébert School of Medicine, Uniformed Services University of the Health Sciences
- Lara Varpio
- Lara Varpio, PhD, is Professor of Medicine, Department of Medicine, F. Edward Hébert School of Medicine, Uniformed Services University of the Health Sciences

26
Barry ES, Merkebu J, Varpio L. State-of-the-art literature review methodology: A six-step approach for knowledge synthesis. PERSPECTIVES ON MEDICAL EDUCATION 2022; 11:281-288. [PMID: 36063310 PMCID: PMC9582072 DOI: 10.1007/s40037-022-00725-9] [Citation(s) in RCA: 14] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/03/2021] [Revised: 07/25/2022] [Accepted: 07/27/2022] [Indexed: 06/01/2023]
Abstract
INTRODUCTION Researchers and practitioners rely on literature reviews to synthesize large bodies of knowledge. Many types of literature reviews have been developed, each targeting a specific purpose. However, these syntheses are hampered if the review type's paradigmatic roots, methods, and markers of rigor are only vaguely understood. One literature review type whose methodology has yet to be elucidated is the state-of-the-art (SotA) review. If medical educators are to harness SotA reviews to generate knowledge syntheses, we must understand and articulate the paradigmatic roots of, and methods for, conducting SotA reviews. METHODS We reviewed 940 articles published between 2014 and 2021 that were labeled as SotA reviews. We (a) identified all SotA methods-related resources, (b) examined the foundational principles and techniques underpinning the reviews, and (c) combined our findings to inductively analyze and articulate the philosophical foundations, process steps, and markers of rigor. RESULTS Of the 940 articles reviewed, nearly all (98%) lacked citations for how to conduct a SotA review. The term "state of the art" was used in four different ways. Analysis revealed that SotA articles are grounded in relativism and subjectivism. DISCUSSION This article provides a six-step approach for conducting SotA reviews. SotA reviews offer an interpretive synthesis that describes: This is where we are now. This is how we got here. This is where we could be going. This chronologically rooted narrative synthesis provides a methodology for reviewing large bodies of literature to explore why and how our current knowledge has developed and to offer new research directions.
Affiliation(s)
- Erin S Barry
- Department of Anesthesiology, F. Edward Hébert School of Medicine, Uniformed Services University, Bethesda, MD, USA
- School of Health Professions Education (SHE), Maastricht University, Maastricht, The Netherlands
- Jerusalem Merkebu
- Department of Medicine, F. Edward Hébert School of Medicine, Uniformed Services University, Bethesda, MD, USA
- Lara Varpio
- Department of Medicine, F. Edward Hébert School of Medicine, Uniformed Services University, Bethesda, MD, USA

27
Phinney LB, Fluet A, O'Brien BC, Seligman L, Hauer KE. Beyond Checking Boxes: Exploring Tensions With Use of a Workplace-Based Assessment Tool for Formative Assessment in Clerkships. ACADEMIC MEDICINE : JOURNAL OF THE ASSOCIATION OF AMERICAN MEDICAL COLLEGES 2022; 97:1511-1520. [PMID: 35703235 DOI: 10.1097/acm.0000000000004774] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/15/2023]
Abstract
PURPOSE To understand the role of a workplace-based assessment (WBA) tool in facilitating feedback for medical students, this study explored changes and tensions in a clerkship feedback activity system through the lens of cultural historical activity theory (CHAT) over 2 years of tool implementation. METHOD This qualitative study uses CHAT to explore WBA use in core clerkships by identifying feedback activity system elements (e.g., community, tools, rules, objects) and tensions among these elements. University of California, San Francisco core clerkship students were invited to participate in semistructured interviews eliciting experience with a WBA tool intended to enhance direct observation and feedback in year 1 (2019) and year 2 (2020) of implementation. In year 1, the WBA tool required supervisor completion in the school's evaluation system on a computer. In year 2, both students and supervisors had WBA completion abilities and could access the form via a smartphone separate from the school's evaluation system. RESULTS Thirty-five students participated in interviews. The authors identified tensions that shifted with time and tool iterations. Year 1 students described tensions related to cumbersome tool design, fear of burdening supervisors, confusion over WBA purpose, WBA as checking boxes, and WBA usefulness depending on clerkship context and culture. Students perceived dissatisfaction with the year 1 tool version among peers and supervisors. The year 2 mobile-based tool and student completion capabilities helped to reduce many of the tensions noted in year 1. Students expressed wider WBA acceptance among peers and supervisors in year 2 and reported understanding WBA to be for low-stakes feedback, thereby supporting formative assessment for learning. CONCLUSIONS Using CHAT to explore changes in a feedback activity system with WBA tool iterations revealed elements important to WBA implementation, including designing technology for tool efficiency and affording students autonomy to document feedback with WBAs.
Affiliation(s)
- Lauren B Phinney
- L.B. Phinney is a first-year internal medicine resident, Department of Medicine, University of California, San Francisco School of Medicine, San Francisco, California
- Angelina Fluet
- A. Fluet is a fourth-year medical student, University of California, San Francisco School of Medicine, San Francisco, California
- Bridget C O'Brien
- B.C. O'Brien is professor of medicine and education scientist, Department of Medicine and Center for Faculty Educators, University of California, San Francisco School of Medicine, San Francisco, California
- Lee Seligman
- L. Seligman is a second-year internal medicine resident, Department of Medicine, New York-Presbyterian Hospital, Columbia University Irving Medical Center, New York, New York
- Karen E Hauer
- K.E. Hauer is associate dean for competency assessment and professional standards and professor, Department of Medicine, University of California, San Francisco School of Medicine, San Francisco, California

28
Cain J, Medina M, Romanelli F, Persky A. Deficiencies of Traditional Grading Systems and Recommendations for the Future. AMERICAN JOURNAL OF PHARMACEUTICAL EDUCATION 2022; 86:8850. [PMID: 34815216 PMCID: PMC10159463 DOI: 10.5688/ajpe8850] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/17/2021] [Accepted: 11/15/2021] [Indexed: 05/06/2023]
Abstract
Objective. To review issues surrounding the use of grades in the educational process and provide evidence-based recommendations for how to redesign grading practices for optimal value. Findings. Traditional tiered grading systems (ie, A, B, C, etc) have historically been a major component of the formal educational process. The way grades are used and interpreted is typically based on some commonly held assumptions, including that they are accurate measures of learning, that they motivate students to learn, and that they provide feedback to learners. However, much of the research regarding grades indicates that these assumptions are flawed. Grades may not always accurately measure learning, they can have adverse effects on student motivation, and they are not a good form of feedback. Summary. The Academy should consider the evidence regarding the purpose, effects, and interpretation of grades in the educational process. Despite barriers and potential pushback, pharmacy educators should revise grading practices to be more accurate, interpretable, and beneficial to learner development.
Affiliation(s)
- Jeff Cain
- University of Kentucky, College of Pharmacy, Lexington, Kentucky
- Melissa Medina
- University of Oklahoma, College of Pharmacy, Oklahoma City, Oklahoma
- Associate Editor, American Journal of Pharmaceutical Education, Arlington, Virginia
- Frank Romanelli
- University of Kentucky, College of Pharmacy, Lexington, Kentucky
- Executive Associate Editor, American Journal of Pharmaceutical Education, Arlington, Virginia
- Adam Persky
- University of North Carolina at Chapel Hill, Chapel Hill, North Carolina
- Associate Editor, American Journal of Pharmaceutical Education, Arlington, Virginia

29
Fawns T, Schaepkens S. A Matter of Trust: Online Proctored Exams and the Integration of Technologies of Assessment in Medical Education. TEACHING AND LEARNING IN MEDICINE 2022; 34:444-453. [PMID: 35466830 DOI: 10.1080/10401334.2022.2048832] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/18/2021] [Accepted: 02/14/2022] [Indexed: 06/14/2023]
Abstract
ISSUE Technology is pervasive in medicine, but we too rarely examine how it shapes assessment, learning, knowledge, and performance. Cultures of assessment also shape identities, social relations, and the knowledge and behavior recognized as legitimate by a profession. Therefore, the combination of technology and assessment within medical education is worthy of review. Online proctoring services have become more prevalent during the Covid-19 pandemic, as a means of continuing high-stakes invigilated examinations online. With criticisms about increased surveillance, discrimination, and the outsourcing of control to commercial vendors, is this simply "moving exams online", or are there more serious implications? What can this extreme example tell us about how our technologies of assessment influence relationships between trainees and medical education institutions? EVIDENCE We combine postdigital and postphenomenology approaches to analyze the written component of the 2020 online proctored United Kingdom Royal College of Physicians (MRCP) membership exam. We examine the scripts, norms, and trust relations produced through this example of online proctoring, and then locate them in historical and economic contexts. We find that the proctoring service projects a false objectivity that is undermined by the tight script with which examinees must comply in an intensified norm of surveillance, and by the interpretation of digital data by unseen human proctors. Nonetheless, such proctoring services are promoted by an image of data-driven innovation, a rhetoric of necessity in response to a growing problem of online cheating, and an aversion, within medical education institutions, to changing assessment formats (and thus the need to accept different forms of knowledge as legitimate). IMPLICATIONS The use of online proctoring technology by medical education institutions intensifies established norms, already present within examinations, of surveillance and distrust. Moreover, it exacerbates tensions between conflicting agendas of commercialization, accountability, and the education of trustworthy professionals. Our analysis provides an example of why it is important to stop and consider the holistic implications of introducing technological "solutions", and to interrogate the intersection of technology and assessment practices in relation to the wider goals of medical education.
Affiliation(s)
- Tim Fawns
- Edinburgh Medical School, University of Edinburgh, Edinburgh, United Kingdom
- Sven Schaepkens
- Department of General Practice, Erasmus University Medical Center, Rotterdam, The Netherlands

30
From Traditional to Programmatic Assessment in Three (Not So) Easy Steps. EDUCATION SCIENCES 2022. [DOI: 10.3390/educsci12070487] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
Programmatic assessment (PA) has strong theoretical and pedagogical underpinnings, but its practical implementation brings a number of challenges, particularly in traditional university settings involving large cohort sizes. This paper presents a detailed case report of an in-progress programmatic assessment implementation involving a decade of assessment innovation occurring in three significant and transformative steps. The starting position and subsequent changes represented in each step are reflected against the framework of established principles and implementation themes of PA. This case report emphasises the importance of ongoing innovation and evaluative research, the advantage of a dedicated team with a cohesive plan, and the fundamental necessity of electronic data collection. It also highlights the challenge of traditional university cultures, the potential advantage of a major pandemic disruption, and the necessity of curriculum renewal to support significant assessment change. Our PA implementation began with a plan to improve the learning potential of individual assessments and, over the subsequent decade, expanded to encompass a cohesive, course-wide assessment program involving meaningful aggregation of assessment data. In our context (large cohort sizes and university-wide assessment policy), regular progress review meetings and progress decisions based on aggregated qualitative and quantitative data (rather than assessment format) remain local challenges.
31
Norman G, Sherbino J, Varpio L. The scope of health professions education requires complementary and diverse approaches to knowledge synthesis. PERSPECTIVES ON MEDICAL EDUCATION 2022; 11:139-143. [PMID: 35389196 PMCID: PMC9240133 DOI: 10.1007/s40037-022-00706-y] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/21/2021] [Revised: 02/09/2022] [Accepted: 02/10/2022] [Indexed: 06/14/2023]
Affiliation(s)
- Geoffrey Norman
- McMaster Education Research, Innovation and Theory (MERIT) program, Hamilton, Ontario, Canada
- Jonathan Sherbino
- McMaster Education Research, Innovation and Theory (MERIT) program, Hamilton, Ontario, Canada
- Department of Medicine, McMaster University, Hamilton, Ontario, Canada
- Lara Varpio
- McMaster Education Research, Innovation and Theory (MERIT) program, Hamilton, Ontario, Canada
- Center for Health Professions Education, Uniformed Services University of the Health Sciences, Bethesda, MD, USA

32
Saiyad S, Bhagat P, Virk A, Mahajan R, Singh T. Changing Assessment Scenarios: Lessons for Changing Practice. Int J Appl Basic Med Res 2021; 11:206-213. [PMID: 34912682 PMCID: PMC8633695 DOI: 10.4103/ijabmr.ijabmr_334_21] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/23/2021] [Revised: 08/03/2021] [Accepted: 09/02/2021] [Indexed: 11/04/2022] Open
Abstract
Assessment is a process that includes ascertaining improvement in the performance of students over time, motivating students to study, evaluating teaching methods, and ranking student capabilities. It is an important component of the educational process that influences student learning. Although we have embarked on a new curricular model, assessment has remained largely ignored despite being the hallmark of competency-based education. In earlier stages, assessment was considered akin to "measurement," on the belief that competence is "generic, fixed and transferable across content," could be measured quantitatively, and could be expressed as a single score. Objective assessment was the norm, and subjective tools were considered unreliable and biased. It was soon realized that "competence is specific and nontransferable," mandating the use of multiple assessment tools across multiple content areas with multiple assessors. A paradigm change through "programmatic assessment" only occurred with the understanding that competence is "dynamic, incremental and contextual." Here, information about students' competence and progress is gathered continually over time, analysed, and supplemented with purposefully collected additional information when needed, using a carefully selected combination of tools and assessor expertise, leading to an authentic, observation-driven, institutional assessment system. In the conduct of any performance assessment, the assessor remains an important part of the process, making assessor training indispensable. In this paper, we look at the changing paradigms in our understanding of clinical competence and the corresponding global changes in assessment, and then make a case for adopting the prevailing trends in the assessment of clinical competence.
Affiliation(s)
- Shaista Saiyad
- Department of Physiology, Smt N H L Municipal Medical College, Ahmedabad, Gujarat, India
- Purvi Bhagat
- M and J Western Regional Institute of Ophthalmology, B. J. Medical College, Ahmedabad, Gujarat, India
- Amrit Virk
- Department of Community Medicine, Adesh Medical College and Hospital, Kurukshetra, Haryana, India
- Rajiv Mahajan
- Department of Pharmacology, Adesh Institute of Medical Sciences and Research, Bathinda, Punjab, India
- Tejinder Singh
- Department of Medical Education, Sri Guru Ram Das Institute of Medical Sciences and Research, Amritsar, Punjab, India

33
Roberts C, Khanna P, Lane AS, Reimann P, Schuwirth L. Exploring complexities in the reform of assessment practice: a critical realist perspective. ADVANCES IN HEALTH SCIENCES EDUCATION : THEORY AND PRACTICE 2021; 26:1641-1657. [PMID: 34431028 DOI: 10.1007/s10459-021-10065-8] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/02/2020] [Accepted: 08/08/2021] [Indexed: 06/13/2023]
Abstract
Although the principles behind assessment for and as learning are well established, reforming traditional assessment of learning into a program that encompasses assessment for and as learning can be a struggle. When introducing and reporting reforms, tensions may arise among faculty because of differing beliefs about the relationship between assessment and learning and about the rules for the validity of assessments. Traditional systems of assessment of learning privilege objective, structured quantification of learners' performances and are done to the students. Newer systems of assessment promote assessment for learning, emphasise subjectivity, collate data from multiple sources, emphasise narrative-rich feedback to promote learner agency, and are done with the students. This contrast has implications for implementation and evaluative research. Research on assessment that is done to students typically asks "what works?", whereas research on assessment that is done with students focuses on more complex questions such as "what works, for whom, in which context, and why?" We applied such a critical realist perspective, drawing on the interplay between structure and agency and a systems approach, to explore what theory says about introducing programmatic assessment in the context of pre-existing traditional approaches. Using a reflective technique, the internal conversation, we developed four factors that can assist educators considering major change to assessment practice in their own contexts. These include enabling positive learner agency and engagement; establishing argument-based validity frameworks; designing purposeful and eclectic evidence-based assessment tasks; and developing a shared narrative that promotes reflexivity in appreciating the complex relationships between assessment and learning.
Affiliation(s)
- Chris Roberts
- Faculty of Medicine and Health, Education Office, Sydney Medical School, The University of Sydney, Sydney, NSW, Australia
- Priya Khanna
- Faculty of Medicine and Health, Education Office, Sydney Medical School, The University of Sydney, Sydney, NSW, Australia
- Andrew Stuart Lane
- Faculty of Medicine and Health, Education Office, Sydney Medical School, The University of Sydney, Sydney, NSW, Australia
- Peter Reimann
- Centre for Research on Learning and Innovation (CRLI), The University of Sydney, Sydney, NSW, Australia
- Lambert Schuwirth
- Prideaux Discipline of Clinical Education, College of Medicine and Public Health, Flinders University, Adelaide, South Australia, Australia

34
Hamdy H, Sreedharan J, Rotgans JI, Zary N, Bahous SA, Venkatramana M, AbdelFattah Elzayat E, Lamba P, Sebastian SK, Momen NKA. Virtual Clinical Encounter Examination (VICEE): A novel approach for assessing medical students' non-psychomotor clinical competency. MEDICAL TEACHER 2021; 43:1203-1209. [PMID: 34130589 DOI: 10.1080/0142159x.2021.1935828] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/21/2023]
Abstract
INTRODUCTION The coronavirus disease 2019 (COVID-19) pandemic disrupted medical education across the world. Online teaching grew rapidly under lockdown, yet the online approach to assessment presents a number of challenges, particularly when evaluating clinical competencies. The aim of this study was to investigate the feasibility, acceptability, reliability, and validity of an online Virtual Clinical Encounter Examination (VICEE) for assessing the non-psychomotor competencies (i.e., not procedural or manual skills) of medical students. METHOD Sixty-one final-year medical students took the VICEE as part of the final summative examination. A panel of faculty experts developed the exam cases and competencies. The test was administered online via real-time interaction with artificial intelligence (AI)-based virtual patients, with faculty and IT support. RESULTS Student and faculty surveys demonstrated satisfaction with the experience. Confirmatory factor analysis supported the convergent validity of the VICEE with the Direct Observation Clinical Encounter Examination (DOCEE), a previously validated clinical examination. The observed sensitivity was 81.8%, specificity 64.1%, and likelihood ratio 12.6, supporting the ability of the VICEE to diagnose 'clinical incompetence' among students. CONCLUSION Our results suggest that online AI-based virtual-patient high-fidelity simulation may be used as an alternative tool to assess some aspects of non-psychomotor competency.
Affiliation(s)
- Hossam Hamdy
- College of Medicine, Gulf Medical University, Ajman, United Arab Emirates
- Jerome I Rotgans
- Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore
- Nabil Zary
- Institute for Excellence in Health Professions Education, Mohammed Bin Rashid University of Medicine and Health Sciences, Dubai, UAE
- Sola Aoun Bahous
- School of Medicine, Lebanese American University, Byblos, Lebanon
- Manda Venkatramana
- College of Medicine, Gulf Medical University, Ajman, United Arab Emirates
- Pankaj Lamba
- College of Medicine, Gulf Medical University, Ajman, United Arab Emirates
- Suraj K Sebastian
- College of Medicine, Gulf Medical University, Ajman, United Arab Emirates

35
Valentine N, Shanahan EM, Durning SJ, Schuwirth L. Making it fair: Learners' and assessors' perspectives of the attributes of fair judgement. MEDICAL EDUCATION 2021; 55:1056-1066. [PMID: 34060124 DOI: 10.1111/medu.14574] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/05/2021] [Revised: 05/19/2021] [Accepted: 05/26/2021] [Indexed: 06/12/2023]
Abstract
INTRODUCTION Optimising the use of subjective human judgement in assessment requires understanding what makes judgement fair. Whilst fairness cannot be simplistically defined, the underpinnings of fair judgement within the literature have previously been combined to create a theoretically constructed conceptual model. However, understanding assessors' and learners' perceptions of what constitutes fair human judgement is also necessary. The aim of this study is to explore assessors' and learners' perceptions of fair human judgement, and to compare these to the conceptual model. METHODS A thematic analysis approach was used. A purposive sample of twelve assessors and eight post-graduate trainees undertook semi-structured interviews using vignettes. Themes were identified using the process of constant comparison. Collection, analysis and coding of the data occurred simultaneously in an iterative manner until saturation was reached. RESULTS This study supported the literature-derived conceptual model, suggesting that fairness is a multi-dimensional construct with components at individual, system and environmental levels. At an individual level, contextual, longitudinally collected evidence, which is supported by narrative and falls within ill-defined boundaries, is essential for fair judgement. Assessor agility and expertise are needed to interpret and interrogate evidence, identify boundaries and provide narrative feedback to allow for improvement. At a system level, factors such as multiple opportunities to demonstrate competence and improvement, multiple assessors to allow different perspectives to be triangulated, and documentation are needed for fair judgement. These system features can be optimised through procedural fairness. Finally, appropriate learning and working environments, which consider patient needs and learners' personal circumstances, are needed for fair judgements. DISCUSSION This study builds on the theory-derived conceptual model, demonstrating that the components of fair judgement can be explicitly articulated whilst embracing the complexity and contextual nature of health-professions assessment. It thus provides a narrative to support dialogue between learner, assessor and institution about ensuring fair judgements in assessment.
Affiliation(s)
- Nyoli Valentine
- Prideaux Discipline of Clinical Education, Flinders University, SA, Australia
- Steven J Durning
- Center for Health Professions Education, Uniformed Services University of the Health Sciences, Bethesda, MD, USA
- Lambert Schuwirth
- Prideaux Discipline of Clinical Education, Flinders University, SA, Australia

36
Schumacher DJ, Turner DA. Entrustable Professional Activities: Reflecting on Where We Are to Define a Path for the Next Decade. ACADEMIC MEDICINE : JOURNAL OF THE ASSOCIATION OF AMERICAN MEDICAL COLLEGES 2021; 96:S1-S5. [PMID: 34183594 DOI: 10.1097/acm.0000000000004097] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Affiliation(s)
- Daniel J Schumacher
- D.J. Schumacher is associate professor of pediatrics, Cincinnati Children's Hospital Medical Center and University of Cincinnati College of Medicine, Cincinnati, Ohio
- David A Turner
- D.A. Turner is vice president for competency-based medical education, American Board of Pediatrics, Chapel Hill, North Carolina
37
Lentz A, Siy JO, Carraccio C. AI-ssessment: Towards Assessment As a Sociotechnical System for Learning. ACADEMIC MEDICINE : JOURNAL OF THE ASSOCIATION OF AMERICAN MEDICAL COLLEGES 2021; 96:S87-S88. [PMID: 34183608 DOI: 10.1097/acm.0000000000004104] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
Two decades ago, the advent of competency-based medical education (CBME) marked a paradigm shift in assessment. Now, medical education is on the cusp of another transformation, driven by advances in the field of artificial intelligence (AI). In this article, the authors explore the potential value of AI in advancing CBME and entrustable professional activities by shifting the focus of education from assessment of learning to assessment for learning. The thoughtful integration of AI technologies in observation is proposed to aid in restructuring the current system around the goal of assessment for learning by creating continuous, tight feedback loops that were not previously possible. The authors argue that this personalized and less judgmental relationship between learner and machine could shift today's dominant mindset on grades and performance to one of growth and mastery learning that leads to expertise. However, because AI is neither objective nor value free, the authors stress the need for continuous co-production and evaluation of the technology with geographically and culturally diverse stakeholders to define the desired behavior of the machine and assess its performance.
Affiliation(s)
- Alison Lentz
- A. Lentz is senior staff strategist, Google Research, Mountain View, California
- J Oliver Siy
- J.O. Siy is staff user experience researcher, Google Research, Mountain View, California
- Carol Carraccio
- C. Carraccio is a former pediatrician, clinician educator, program director, and researcher with a focus on medical education
38
Bandiera G, Hall AK. Capturing the forest and the trees: workplace-based assessment tools in emergency medicine. CAN J EMERG MED 2021; 23:265-266. [PMID: 33959929 DOI: 10.1007/s43678-021-00125-8] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/16/2021] [Accepted: 03/23/2021] [Indexed: 10/21/2022]
Affiliation(s)
- Glen Bandiera
- Temerty Faculty of Medicine, University of Toronto, Toronto, ON, Canada; Unity Health Toronto (St. Michael's), Toronto, ON, Canada
- Andrew K Hall
- Department of Emergency Medicine, Queen's University, Kingston, ON, Canada; Royal College of Physicians and Surgeons of Canada, Ottawa, ON, Canada
39
Evans DJR. Assessing the Wider Outcomes of Anatomy Education. ANATOMICAL SCIENCES EDUCATION 2021; 14:275-276. [PMID: 33768720 DOI: 10.1002/ase.2076] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/22/2021] [Accepted: 03/22/2021] [Indexed: 06/12/2023]
Affiliation(s)
- Darrell J R Evans
- School of Medicine and Public Health, The University of Newcastle, Callaghan, New South Wales, Australia
- Faculty of Medicine, Nursing and Health Sciences, Monash University, Clayton, Victoria, Australia
40
Wilby KJ, Paravattil B. Cognitive load theory: Implications for assessment in pharmacy education. Res Social Adm Pharm 2020; 17:1645-1649. [PMID: 33358136 DOI: 10.1016/j.sapharm.2020.12.009] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/15/2019] [Revised: 11/09/2020] [Accepted: 12/15/2020] [Indexed: 11/28/2022]
Abstract
The concept of mental workload is well studied from the learner's perspective but is less well understood from the perspective of the assessor. Mental workload is largely associated with cognitive load theory, which describes three types of load. Intrinsic load concerns the complexity of the task, extraneous load describes distractions from the task at hand, and germane load focuses on the development of schemas in working memory for future recall. Studies from medical education show that all three types of load are relevant when considering rater-based assessment (e.g., Objective Structured Clinical Examinations (OSCEs) or experiential training). Assessments with high intrinsic and extraneous load may interfere with assessors' attention and working memory and result in poorer-quality assessment. Reducing these loads within assessment tasks should therefore be a priority for pharmacy educators. This commentary aims to provide a theoretical overview of mental workload in assessment, outline research findings from the medical education context, and propose strategies for reducing mental workload in rater-based assessments relevant to pharmacy education. Suggestions for future research are also addressed.
Affiliation(s)
- Kyle John Wilby
- School of Pharmacy, University of Otago, PO Box 56, Dunedin, 9054, New Zealand.
41
Ellaway R, Tolsgaard M, Martimianakis MA. What divides us and what unites us? ADVANCES IN HEALTH SCIENCES EDUCATION : THEORY AND PRACTICE 2020; 25:1019-1023. [PMID: 33258050 DOI: 10.1007/s10459-020-10016-9] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Accepted: 11/04/2020] [Indexed: 06/12/2023]
Affiliation(s)
- Rachel Ellaway
- Department of Community Health Sciences, and Office of Health and Medical Education Scholarship, Cumming School of Medicine, University of Calgary, Calgary, AB, Canada.
- Martin Tolsgaard
- Copenhagen Academy for Medical Education and Simulation, and Centre for Fetal Medicine, Rigshospitalet, University of Copenhagen, Copenhagen, Denmark