1. Kinnear B, Schumacher DJ, Varpio L, Driessen EW, Konopasky A. Legitimation Without Argumentation: An Empirical Discourse Analysis of 'Validity as an Argument' in Assessment. Perspectives on Medical Education 2024; 13:469-480. [PMID: 39372230; PMCID: PMC11451546; DOI: 10.5334/pme.1404]
Abstract
Introduction Validity is frequently conceptualized in health professions education (HPE) assessment as an argument that supports the interpretation and uses of data. However, previous work has shown that many validity scholars believe argument and argumentation are relatively lacking in HPE. To better understand HPE's discourse around argument and argumentation with regard to assessment validity, the authors explored the discourses present in published HPE manuscripts. Methods The authors used a bricolage of critical discourse analysis approaches to understand how the language in influential peer reviewed manuscripts has shaped HPE's understanding of validity arguments and argumentation. The authors used multiple search strategies to develop a final corpus of 39 manuscripts that were seen as influential in how validity arguments are conceptualized within HPE. An analytic framework drawing on prior research on Argumentation Theory was used to code manuscripts before developing themes relevant to the research question. Results The authors found that the elaboration of argument and argumentation within HPE's validity discourse is scant, with few components of Argumentation Theory (such as intended audience) existing within the discourse. The validity as an argument discourse was legitimized via authorization (reference to authority), rationalization (reference to institutionalized action), and mythopoesis (narrative building). This legitimation has cemented the validity as an argument discourse in HPE despite minimal exploration of what argument and argumentation are. Discussion This study corroborates previous work showing the dearth of argument and argumentation present within HPE's validity discourse. An opportunity exists to use Argumentation Theory in HPE to better develop validation practices that support use of argument.
Affiliation(s)
- Benjamin Kinnear
- Department of Pediatrics at University of Cincinnati College of Medicine in Cincinnati, OH, USA
- Daniel J. Schumacher
- Department of Pediatrics at Cincinnati Children’s Hospital Medical Center/University of Cincinnati College of Medicine in Cincinnati, OH, USA
- Lara Varpio
- Department of Pediatrics at the Perelman School of Medicine, University of Pennsylvania, USA
- Children’s Hospital of Philadelphia in Philadelphia, PA, USA
- Erik W. Driessen
- School of Health Professions Education (SHE), Faculty of Health, Medicine and Life Sciences, Maastricht University, Maastricht, NL
- Abigail Konopasky
- Geisel School of Medicine at Dartmouth in Hanover, New Hampshire, USA
2. Kinnear B, St-Onge C, Schumacher DJ, Marceau M, Naidu T. Validity in the Next Era of Assessment: Consequences, Social Impact, and Equity. Perspectives on Medical Education 2024; 13:452-459. [PMID: 39280703; PMCID: PMC11396166; DOI: 10.5334/pme.1150]
Abstract
Validity has long held a venerated place in education, leading some authors to refer to it as the "sine qua non" or "cardinal virtue" of assessment. And yet, validity has not held a fixed meaning; rather, it has shifted in its definition and scope over time. In this Eye Opener, the authors explore if and how current conceptualizations of validity fit a next era of assessment that prioritizes patient care and learner equity. They posit that health professions education's conceptualization of validity will change in three related but distinct ways. First, consequences of assessment decisions will play a central role in validity arguments. Second, validity evidence regarding impacts of assessment on patients and society will be prioritized. Third, equity will be seen as part of validity rather than an unrelated concept. The authors argue that health professions education has the agency to change its ideology around validity, and to align with values that will predominate in the next era of assessment, such as high-quality care and equity for learners and patients.
Affiliation(s)
- Benjamin Kinnear
- Department of Pediatrics, University of Cincinnati College of Medicine, Cincinnati, Ohio, USA
- Christina St-Onge
- Department of Medicine, Researcher at the Center for Health Sciences Pedagogy, Université de Sherbrooke, Sherbrooke, Québec, Canada
- Daniel J Schumacher
- Department of Pediatrics, University of Cincinnati College of Medicine/Cincinnati Children's Hospital Medical Center, Cincinnati, Ohio, USA
- Mélanie Marceau
- School of Nursing, Université de Sherbrooke, Sherbrooke, Québec, Canada
- Thirusha Naidu
- Department of Innovation in Medical Education, Faculty of Medicine, University of Ottawa, Canada
- Department of Psychiatry, University of KwaZulu-Natal, South Africa
3. Wespi R, Schwendimann L, Neher A, Birrenbach T, Schauber SK, Manser T, Sauter TC, Kämmer JE. TEAMs go VR-validating the TEAM in a virtual reality (VR) medical team training. Adv Simul (Lond) 2024; 9:38. [PMID: 39261889; PMCID: PMC11389291; DOI: 10.1186/s41077-024-00309-z]
Abstract
BACKGROUND Inadequate collaboration in healthcare can lead to medical errors, highlighting the importance of interdisciplinary teamwork training. Virtual reality (VR) simulation-based training presents a promising, cost-effective approach. This study evaluates the effectiveness of the Team Emergency Assessment Measure (TEAM) for assessing healthcare student teams in VR environments to improve training methodologies. METHODS Forty-two medical and nursing students participated in a VR-based neurological emergency scenario as part of an interprofessional team training program. Their performances were assessed using a modified TEAM tool by two trained coders. Reliability, internal consistency, and concurrent validity of the tool were evaluated using intraclass correlation coefficients (ICC) and Cronbach's alpha. RESULTS Rater agreement on TEAM's leadership, teamwork, and task management domains was high, with ICC values between 0.75 and 0.90. Leadership demonstrated strong internal consistency (Cronbach's alpha = 0.90), while teamwork and task management showed moderate to acceptable consistency (alpha = 0.78 and 0.72, respectively). Overall, the TEAM tool exhibited high internal consistency (alpha = 0.89) and strong concurrent validity with significant correlations to global performance ratings. CONCLUSION The TEAM tool proved to be a reliable and valid instrument for evaluating team dynamics in VR-based training scenarios. This study highlights VR's potential in enhancing medical education, especially in remote or distanced learning contexts. It demonstrates a dependable approach for team performance assessment, adding value to VR-based medical training. These findings pave the way for more effective, accessible interdisciplinary team assessments, contributing significantly to the advancement of medical education.
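The two reliability indices named above can be reproduced from a simple teams-by-raters (or teams-by-items) score matrix. Below is a minimal Python sketch, not the authors' analysis code: it uses invented data, the Shrout and Fleiss two-way random-effects ICC(2,1), and Cronbach's alpha; the matrix sizes, item counts, and score scale are assumptions.

```python
# Hedged sketch: ICC(2,1) and Cronbach's alpha on hypothetical TEAM-style data.
import numpy as np

def icc_2_1(scores: np.ndarray) -> float:
    """Two-way random-effects ICC(2,1); rows = teams, columns = raters."""
    n, k = scores.shape
    grand = scores.mean()
    ms_rows = k * np.sum((scores.mean(axis=1) - grand) ** 2) / (n - 1)
    ms_cols = n * np.sum((scores.mean(axis=0) - grand) ** 2) / (k - 1)
    ss_err = (np.sum((scores - grand) ** 2)
              - ms_rows * (n - 1) - ms_cols * (k - 1))
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha; rows = teams, columns = items within one domain."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

rng = np.random.default_rng(0)
team_level = rng.normal(0.0, 0.6, size=(42, 1))               # shared team effect
ratings = 3.5 + team_level + rng.normal(0.0, 0.3, (42, 2))     # 42 teams x 2 coders
items = 3.5 + team_level + rng.normal(0.0, 0.3, (42, 4))       # 4 hypothetical items
print(f"ICC(2,1) = {icc_2_1(ratings):.2f}, alpha = {cronbach_alpha(items):.2f}")
```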
Affiliation(s)
- Rafael Wespi
- Department of Emergency Medicine, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Graduate School for Health Sciences, University of Bern, Bern, Switzerland
- Lukas Schwendimann
- Department of Emergency Medicine, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Andrea Neher
- Department of Emergency Medicine, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Graduate School for Health Sciences, University of Bern, Bern, Switzerland
- Tanja Birrenbach
- Department of Emergency Medicine, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Stefan K Schauber
- Centre for Educational Measurement (CEMO) & Unit for Health Sciences Education, University of Oslo, Oslo, Norway
- Tanja Manser
- FHNW School of Applied Psychology, University of Applied Sciences and Arts Northwestern Switzerland, Olten, Switzerland
- Division of Anesthesiology and Intensive Care, Department of Clinical Sciences, Intervention and Technology, Karolinska Institutet, Huddinge, Sweden
- Thomas C Sauter
- Department of Emergency Medicine, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Juliane E Kämmer
- Department of Emergency Medicine, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Department of Social and Communication Psychology, University of Göttingen, Göttingen, Germany
4. Marceau M, Young M, Gallagher F, St-Onge C. Eight ways to get a grip on validity as a social imperative. Canadian Medical Education Journal 2024; 15:100-103. [PMID: 39114780; PMCID: PMC11302757; DOI: 10.36834/cmej.77727]
Abstract
Validity as a social imperative foregrounds the social consequences of assessment and highlights the importance of building quality into the assessment development and monitoring processes. Validity as a social imperative is informed by current assessment trends such as programmatic-, longitudinal-, and rater-based assessment, and is one of the conceptualizations of validity currently at play in the Health Professions Education (HPE) literature. This Black Ice is intended to help readers to get a grip on how to embed principles of validity as a social imperative in the development and quality monitoring of an assessment. This piece draws on a program of work investigating validity as a social imperative, key HPE literature, and data generated through stakeholder interviews. We describe eight ways to implement validation practices that align with validity as a social imperative.
Affiliation(s)
- Mélanie Marceau
- Faculty of Medicine and Health Sciences, Université de Sherbrooke, Québec, Canada
- Meredith Young
- Institute of Health Sciences Education, Faculty of Medicine and Health Sciences, McGill University, Québec, Canada
- Frances Gallagher
- Faculty of Medicine and Health Sciences, Université de Sherbrooke, Québec, Canada
- Christina St-Onge
- Faculty of Medicine and Health Sciences, Université de Sherbrooke, Québec, Canada
5. Saberzadeh-Ardestani B, Sima AR, Khosravi B, Young M, Mortaz Hejri S. The impact of prior performance information on subsequent assessment: is there evidence of retaliation in an anonymous multisource assessment system? Advances in Health Sciences Education: Theory and Practice 2024; 29:531-550. [PMID: 37488326; DOI: 10.1007/s10459-023-10267-2]
Abstract
Few studies have engaged in data-driven investigations of the presence, or frequency, of what could be considered retaliatory assessor behaviour in Multi-source Feedback (MSF) systems. In this study, the authors explored how assessors scored others if, before assessing others, they received their own assessment score. The authors examined assessments from an established MSF system in which all clinical team members - medical students, interns, residents, fellows, and supervisors - anonymously assessed each other. The authors identified assessments in which an assessor (i.e., any team member providing a score to another) gave an aberrant score to another individual. An aberrant score was defined as one that was more than two standard deviations from the assessment receiver's average score. Assessors who gave aberrant scores were categorized according to whether their behaviour was preceded by (1) receiving a score (or not) from another individual in the MSF system, and (2) receiving an aberrant or non-aberrant score. The authors used a multivariable logistic regression model to investigate the association between the type of score received and the type of score given by that same individual. In total, 367 unique assessors provided 6091 scores on the performance of 484 unique individuals. Aberrant scores were identified in 250 forms (4.1%). The chances of giving an aberrant score were 2.3 times higher for those who had received a score, compared to those who had not (odds ratio 2.30, 95% CI: 1.54-3.44, P < 0.001). Individuals who had received an aberrant score were also 2.17 times more likely to give an aberrant score to others compared to those who had received a non-aberrant score (OR 2.17, 95% CI: 1.39-3.39, P < 0.005) after adjusting for all other variables. This study documents an association between receiving scores within an anonymous multi-source feedback (MSF) system and providing aberrant scores to team members. These findings suggest that care must be taken when designing MSF systems to protect against potential downstream consequences of providing and receiving anonymous feedback.
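The flag-and-model logic described here is straightforward to prototype. The sketch below uses simulated data (not the study dataset) to flag scores falling more than two standard deviations from a receiver's average and to estimate odds ratios with a logistic regression via statsmodels; the variable names, distributions, and effect sizes are all invented.

```python
# Hedged sketch with simulated data: flag "aberrant" scores (> 2 SD from the
# receiver's average) and model the odds of giving one, loosely mirroring the
# multivariable logistic regression described in the abstract.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 6091
df = pd.DataFrame({
    "received_any": rng.integers(0, 2, n),      # did the assessor get a score first?
    "score_given": rng.normal(8.0, 1.2, n),     # score the assessor gives someone else
    "receiver_mean": rng.normal(8.0, 0.4, n),   # the receiver's historical average
    "receiver_sd": np.full(n, 1.0),             # spread of the receiver's scores
})
df["received_aberrant"] = np.where(
    df["received_any"] == 1, rng.integers(0, 2, n), 0)  # was the score received aberrant?

# Aberrant = more than 2 SD away from the receiver's average score.
df["gave_aberrant"] = (
    (df["score_given"] - df["receiver_mean"]).abs() > 2 * df["receiver_sd"]
).astype(int)

fit = smf.logit("gave_aberrant ~ received_any + received_aberrant", data=df).fit(disp=False)
print(np.exp(fit.params))        # odds ratios
print(np.exp(fit.conf_int()))    # 95% confidence intervals on the OR scale
```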
Affiliation(s)
- Bahar Saberzadeh-Ardestani
- Digestive Disease Research Center, Digestive Disease Research Institute, Tehran University of Medical Sciences, Tehran, Iran
- Ali Reza Sima
- Digestive Disease Research Center, Digestive Disease Research Institute, Tehran University of Medical Sciences, Tehran, Iran
- Bardia Khosravi
- Digestive Disease Research Center, Digestive Disease Research Institute, Tehran University of Medical Sciences, Tehran, Iran
- Meredith Young
- Institute of Health Sciences Education, McGill University, Montreal, QC, Canada
- Sara Mortaz Hejri
- Department of Medical Education, School of Medicine, Tehran University of Medical Sciences, Tehran, Iran
6. Tavares W, Pearce J. Attending to Variable Interpretations of Assessment Science and Practice. Teaching and Learning in Medicine 2024; 36:244-252. [PMID: 37431929; DOI: 10.1080/10401334.2023.2231923]
Abstract
Issue: The way educators think about the nature of competence, the approaches one selects for the assessment of competence, what generated data implies, and what counts as good assessment now involve broader and more diverse interpretive processes. Broadening philosophical positions in assessment has educators applying different interpretations to similar assessment concepts. As a result, what is claimed through assessment, including what counts as quality, can be different for each of us despite using similar activities and language. This is leading to some uncertainty about how to proceed or, worse, provides opportunities for questioning the legitimacy of any assessment activity or outcome. While some debate in assessment is inevitable, most debate has occurred within philosophical positions (e.g., how best to minimize error), whereas newer debates are happening across philosophical positions (e.g., whether error is a useful concept). As new ways of approaching assessment have emerged, the interpretive nature of underlying philosophical positions has not been sufficiently attended to. Evidence: We illustrate interpretive processes of assessment in action by: (a) summarizing the current health professions assessment context from a philosophical perspective as a way of describing its evolution; (b) demonstrating implications in practice using two examples (i.e., analysis of assessment work and validity claims); and (c) examining pragmatism to demonstrate how even within specific philosophical positions opportunities for variable interpretations still exist. Implications: Our concern is not that assessment designers and users have different assumptions, but that, practically, educators may unknowingly (or insidiously) apply different assumptions and methodological and interpretive norms, and subsequently settle on different views of what serves as quality assessment even for the same assessment program or event. With the state of assessment in the health professions in flux, we conclude by calling for a philosophically explicit approach to assessment, and underscore assessment as, fundamentally, an interpretive process - one which demands the careful elucidation of philosophical assumptions to promote understanding and ultimately defensibility of assessment processes and outcomes.
Affiliation(s)
- Walter Tavares
- The Wilson Centre for Health Professions Education Research, and Post-Graduate Medical Education, Toronto, Canada
- Temerty Faculty of Medicine, University Health Network and University of Toronto, Toronto, Canada
- Department of Health and Society, University of Toronto, Toronto, Canada
- York Region Paramedic Services, Community Health Services, Regional Municipality of York, Newmarket, Canada
- Jacob Pearce
- Tertiary Education, Australian Council for Educational Research, Camberwell, Australia
7. Wilson AB, Brooks WS, Edwards DN, Deaver J, Surd JA, Pirlo OJ, Byrd WA, Meyer ER, Beresheim A, Cuskey SL, Tsintolas JG, Norrell ES, Fisher HC, Skaggs CW, Mysak D, Levin SR, Escutia Rosas CE, Cale AS, Karim MN, Pollock J, Kakos NJ, O'Brien MS, Lufler RS. Survey response rates in health sciences education research: A 10-year meta-analysis. Anatomical Sciences Education 2024; 17:11-23. [PMID: 37850629; DOI: 10.1002/ase.2345]
Abstract
Growth in the online survey market may be increasing response burden and possibly jeopardizing higher response rates. This meta-analysis evaluated survey trends over one decade (2011-2020) to determine: (1) changes in survey publication rates over time, (2) changes in response rates over time, (3) typical response rates within health sciences education research, (4) the factors influencing survey completion levels, and (5) common gaps in survey methods and outcomes reporting. Study I estimated survey publication trends between 2011 and 2020 using articles published in the top three health sciences education research journals. Study II searched the anatomical sciences education literature across six databases and extracted study/survey features and survey response rates. Time plots and a proportional meta-analysis were performed. Across 2926 research articles, the annual estimated proportion of studies with survey methodologies has remained constant, with no linear trend (p > 0.050) over time (Study I). Study II reported a pooled absolute response rate of 67% (95% CI = 63.9-69.0) across k = 360 studies, totaling 115,526 distributed surveys. Despite response rate oscillations over time, no significant linear trend (p = 0.995) was detected. Neither survey length, incentives, sponsorship, nor population type affected absolute response rates (p ≥ 0.070). Only 35% (120 of 339) of studies utilizing a Likert scale reported evidence of survey validity. Survey response rates and the prevalence of studies with survey methodologies have remained stable, with no linear trends over time. We recommend researchers strive for a typical absolute response rate of 67% or higher and clearly document evidence of survey validity for empirical studies.
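A proportional meta-analysis of this kind pools per-study response rates (responders over surveys distributed), typically on a transformed scale with a random-effects model. The sketch below, with invented study counts rather than the review's data, shows the general recipe using a logit transform and DerSimonian-Laird pooling; it is an illustration of the technique, not a reconstruction of the published analysis.

```python
# Hedged sketch: random-effects pooling of response-rate proportions
# (logit transform + DerSimonian-Laird), with hypothetical per-study counts.
import numpy as np

responders = np.array([120, 75, 340, 58, 410])    # invented responders per study
distributed = np.array([180, 110, 500, 90, 610])  # invented surveys distributed

y = np.log(responders / (distributed - responders))   # logit of the response rate
v = 1 / responders + 1 / (distributed - responders)   # within-study variance

w = 1 / v
fixed = np.sum(w * y) / np.sum(w)
q = np.sum(w * (y - fixed) ** 2)
c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (q - (len(y) - 1)) / c)               # DerSimonian-Laird tau^2

w_star = 1 / (v + tau2)
pooled = np.sum(w_star * y) / np.sum(w_star)
se = np.sqrt(1 / np.sum(w_star))
ci = pooled + np.array([-1.96, 1.96]) * se

expit = lambda x: 1 / (1 + np.exp(-x))                # back-transform to a proportion
print(f"pooled rate = {expit(pooled):.1%}, 95% CI {expit(ci[0]):.1%} to {expit(ci[1]):.1%}")
```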
Affiliation(s)
- Adam B Wilson
- Department of Anatomy and Cell Biology, Rush University, Chicago, Illinois, USA
- William S Brooks
- Department of Cell, Developmental, and Integrative Biology, University of Alabama at Birmingham, Heersink School of Medicine, Birmingham, Alabama, USA
- Danielle N Edwards
- Department of Cell, Developmental, and Integrative Biology, University of Alabama at Birmingham, Heersink School of Medicine, Birmingham, Alabama, USA
- Jill Deaver
- Lister Hill Library of the Health Sciences Clinical, Academic, & Research Engagement (CARE) Department, University of Alabama at Birmingham Libraries, Birmingham, Alabama, USA
- Jessica A Surd
- Department of Cell, Developmental, and Integrative Biology, University of Alabama at Birmingham, Heersink School of Medicine, Birmingham, Alabama, USA
- Obadiah J Pirlo
- School of Dentistry, University of Texas Health Science Center at San Antonio, San Antonio, Texas, USA
- William A Byrd
- Heersink School of Medicine, University of Alabama at Birmingham, Birmingham, Alabama, USA
- Edgar R Meyer
- Department of Advanced Biomedical Education, University of Mississippi Medical Center, Jackson, Mississippi, USA
- Amy Beresheim
- Department of Anatomy and Cell Biology, Rush University, Chicago, Illinois, USA
- Eric S Norrell
- Rush Medical College, Rush University, Chicago, Illinois, USA
- Dmytro Mysak
- Rush Medical College, Rush University, Chicago, Illinois, USA
- Andrew S Cale
- Department of Anatomy, Cell Biology, and Physiology, Indiana University School of Medicine, Indianapolis, Indiana, USA
- Md Nazmul Karim
- School of Public Health and Preventive Medicine, Monash University, Melbourne, Victoria, Australia
- Rebecca S Lufler
- Department of Medical Education, Tufts University School of Medicine, Boston, Massachusetts, USA
8. Carrillo-Avalos BA, Leenen I, Trejo-Mejía JA, Sánchez-Mendiola M. Bridging Validity Frameworks in Assessment: Beyond Traditional Approaches in Health Professions Education. Teaching and Learning in Medicine 2023:1-10. [PMID: 38108266; DOI: 10.1080/10401334.2023.2293871]
Abstract
Construct: High-stakes assessments measure several constructs, such as knowledge, competencies, and skills. In this case, validity evidence for test scores' uses and interpretations is of utmost importance, because of the consequences for everyone involved in their development and implementation. Background: Educational assessment requires an appropriate understanding and use of validity frameworks; however, health professions educators still struggle with the conceptual challenges of validity, and frequently validity analyses have a narrow focus. Important obstacles are the plurality of validity frameworks and the difficulty of grounding these abstract concepts in practice. Approach: We reviewed the validity frameworks literature to identify the main elements of frequently used models (Messick's and Kane's) and proposed linking frameworks, including Russell's recent overarching proposal. Examples are provided with commonly used assessment instruments in health professions education. Findings: Several elements in these frameworks can be integrated into a common approach, matching and aligning Messick's sources of validity with Kane's four inference types. Conclusions: This proposal to contribute evidence for assessment inferences may provide guidance for understanding the use of validity evidence in applied settings. The evolving field of validity research provides opportunities for its integration and practical use in health professions education.
Affiliation(s)
- Iwin Leenen
- Faculty of Psychology, National Autonomous University of Mexico (UNAM), Mexico City, Mexico
- Melchor Sánchez-Mendiola
- Faculty of Medicine, UNAM, Mexico City, Mexico
- Educational Innovation and Distance Education, UNAM, Coordination of Open University, Mexico City, Mexico
9. Choo EK, Woods R, Walker ME, O’Brien JM, Chan TM. The Quality of Assessment for Learning score for evaluating written feedback in anesthesiology postgraduate medical education: a generalizability and decision study. Canadian Medical Education Journal 2023; 14:78-85. [PMID: 38226296; PMCID: PMC10787859; DOI: 10.36834/cmej.75876]
Abstract
Background Competency-based residency programs depend on high-quality feedback from the assessment of entrustable professional activities (EPA). The Quality of Assessment for Learning (QuAL) score is a tool developed to rate the quality of narrative comments in workplace-based assessments; it has validity evidence for scoring the quality of narrative feedback provided to emergency medicine residents, but it is unknown whether the QuAL score is reliable in the assessment of narrative feedback in other postgraduate programs. Methods Fifty sets of EPA narratives from a single academic year at our competency-based medical education post-graduate anesthesia program were selected by stratified sampling within defined parameters [e.g. resident gender and stage of training, assessor gender, Competency By Design training level, and word count (≥17 or <17 words)]. Two competency committee members and two medical students rated the quality of narrative feedback using a utility score and QuAL score. We used Kendall's tau-b coefficient to compare the perceived utility of the written feedback to the quality assessed with the QuAL score. The authors used generalizability and decision studies to estimate the reliability and generalizability coefficients. Results Both the faculty's utility scores and QuAL scores (r = 0.646, p < 0.001) and the trainees' utility scores and QuAL scores (r = 0.667, p < 0.001) were moderately correlated. Results from the generalizability studies showed that utility scores were reliable with two raters for both faculty (Epsilon = 0.87, Phi = 0.86) and trainees (Epsilon = 0.88, Phi = 0.88). Conclusions The QuAL score is correlated with faculty- and trainee-rated utility of anesthesia EPA feedback. Both faculty and trainees can reliably apply the QuAL score to anesthesia EPA narrative feedback. This tool has the potential to be used for faculty development and program evaluation in competency-based medical education. Other programs could consider replicating our study in their specialty.
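The concurrent-validity piece of this design, correlating a holistic utility rating with the structured QuAL score, can be illustrated in a few lines. The sketch below uses invented ratings rather than the study data; scipy's kendalltau computes the tau-b variant by default, and the 0-5 score ranges are assumptions.

```python
# Hedged sketch: Kendall's tau-b between a global utility rating and the QuAL
# score for a set of EPA narratives; both score vectors are invented.
import numpy as np
from scipy.stats import kendalltau

rng = np.random.default_rng(2)
qual = rng.integers(0, 6, 50)                             # QuAL totals (0-5) for 50 narratives
utility = np.clip(qual + rng.integers(-1, 2, 50), 0, 5)   # utility rating loosely tracking QuAL

tau, p = kendalltau(utility, qual)                        # ties handled via the tau-b variant
print(f"Kendall's tau-b = {tau:.2f} (p = {p:.3g})")
```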
Affiliation(s)
- Eugene K Choo
- Department of Anesthesiology, College of Medicine, University of Saskatchewan, Saskatchewan, Canada
- Rob Woods
- Department of Emergency Medicine, College of Medicine, University of Saskatchewan, Saskatchewan, Canada
- Mary Ellen Walker
- Department of Anesthesiology, College of Medicine, University of Saskatchewan, Saskatchewan, Canada
- Jennifer M O’Brien
- Department of Anesthesiology, College of Medicine, University of Saskatchewan, Saskatchewan, Canada
- Teresa M Chan
- Department of Medicine (Division of Emergency Medicine; Division of Education & Innovation), Michael G. DeGroote School of Medicine, Faculty of Health Sciences, McMaster University and Office of Continuing Professional Development & McMaster Education Research, Innovation, and Theory (MERIT) Program, Faculty of Health Sciences, McMaster University, Ontario, Canada
10. Tavares W, Kinnear B, Schumacher DJ, Forte M. "Rater training" re-imagined for work-based assessment in medical education. Advances in Health Sciences Education: Theory and Practice 2023; 28:1697-1709. [PMID: 37140661; DOI: 10.1007/s10459-023-10237-8]
Abstract
In this perspective, the authors critically examine "rater training" as it has been conceptualized and used in medical education. By "rater training," they mean the educational events intended to improve rater performance and contributions during assessment events. Historically, rater training programs have focused on modifying faculty behaviours to achieve psychometric ideals (e.g., reliability, inter-rater reliability, accuracy). The authors argue these ideals may now be poorly aligned with contemporary research informing work-based assessment, introducing a compatibility threat, with no clear direction on how to proceed. To address this issue, the authors provide a brief historical review of "rater training" and provide an analysis of the literature examining the effectiveness of rater training programs. They focus mainly on what has served to define effectiveness or improvements. They then draw on philosophical and conceptual shifts in assessment to demonstrate why the function, effectiveness aims, and structure of rater training requires reimagining. These include shifting competencies for assessors, viewing assessment as a complex cognitive task enacted in a social context, evolving views on biases, and reprioritizing which validity evidence should be most sought in medical education. The authors aim to advance the discussion on rater training by challenging implicit incompatibility issues and stimulating ways to overcome them. They propose that "rater training" (a moniker they suggest be reserved for strong psychometric aims) be augmented with "assessor readiness" programs that link to contemporary assessment science and enact the principle of compatibility between that science and ways of engaging with advances in real-world faculty-learner contexts.
Affiliation(s)
- Walter Tavares
- Department of Health and Society, Wilson Centre, Temerty Faculty of Medicine, University of Toronto, Toronto, ON, Canada
- Benjamin Kinnear
- Department of Pediatrics, Cincinnati Children's Hospital Medical Center, University of Cincinnati College of Medicine, Cincinnati, OH, USA
- Daniel J Schumacher
- Department of Pediatrics, Cincinnati Children's Hospital Medical Center, University of Cincinnati College of Medicine, Cincinnati, OH, USA
- Milena Forte
- Department of Family and Community Medicine, Temerty Faculty of Medicine, Mount Sinai Hospital, University of Toronto, Toronto, ON, Canada
11. Josiah Macy Jr. Foundation Conference on Ensuring Fairness in Medical Education Assessment: Conference Recommendations Report. Academic Medicine: Journal of the Association of American Medical Colleges 2023; 98:S3-S15. [PMID: 37070828; DOI: 10.1097/acm.0000000000005243]
12. Burk-Rafel J, Sebok-Syer SS, Santen SA, Jiang J, Caretta-Weyer HA, Iturrate E, Kelleher M, Warm EJ, Schumacher DJ, Kinnear B. TRainee Attributable & Automatable Care Evaluations in Real-time (TRACERs): A Scalable Approach for Linking Education to Patient Care. Perspectives on Medical Education 2023; 12:149-159. [PMID: 37215538; PMCID: PMC10198229; DOI: 10.5334/pme.1013]
Abstract
Competency-based medical education (CBME) is an outcomes-based approach to education and assessment that focuses on what competencies trainees need to learn in order to provide effective patient care. Despite this goal of providing quality patient care, trainees rarely receive measures of their clinical performance. This is problematic because defining a trainee's learning progression requires measuring their clinical performance. Traditional clinical performance measures (CPMs) are often met with skepticism from trainees given their poor individual-level attribution. Resident-sensitive quality measures (RSQMs) are attributable to individuals, but lack the expeditiousness needed to deliver timely feedback and can be difficult to automate at scale across programs. In this eye opener, the authors present a conceptual framework for a new type of measure - TRainee Attributable & Automatable Care Evaluations in Real-time (TRACERs) - attuned to both automation and trainee attribution as the next evolutionary step in linking education to patient care. TRACERs have five defining characteristics: meaningful (for patient care and trainees), attributable (sufficiently to the trainee of interest), automatable (minimal human input once fully implemented), scalable (across electronic health records [EHRs] and training environments), and real-time (amenable to formative educational feedback loops). Ideally, TRACERs optimize all five characteristics to the greatest degree possible. TRACERs are uniquely focused on measures of clinical performance that are captured in the EHR, whether routinely collected or generated using sophisticated analytics, and are intended to complement (not replace) other sources of assessment data. TRACERs have the potential to contribute to a national system of high-density, trainee-attributable, patient-centered outcome measures.
Affiliation(s)
- Jesse Burk-Rafel
- Division of Hospital Medicine, NYU Langone Health, and assistant director of Precision Medical Education, Institute for Innovations in Medical Education, NYU Grossman School of Medicine, New York, USA
- Stefanie S. Sebok-Syer
- Department of Emergency Medicine, Stanford University School of Medicine, Stanford, California, USA
- Sally A. Santen
- University of Cincinnati College of Medicine, Cincinnati, Ohio, USA
- Joshua Jiang
- University of California Los Angeles, Los Angeles, California. At the time of this work he was a medical student, NYU Grossman School of Medicine, New York, USA
- Holly A. Caretta-Weyer
- Department of Emergency Medicine, Stanford University School of Medicine, Stanford, California, USA
- Matthew Kelleher
- Internal Medicine and Pediatrics, Department of Pediatrics, Cincinnati Children’s Hospital Medical Center, University of Cincinnati College of Medicine, Cincinnati, Ohio, USA
- Eric J. Warm
- University of Cincinnati College of Medicine, Cincinnati, Ohio, USA
- Daniel J. Schumacher
- Department of Pediatrics, director of Education Research Unit, Cincinnati Children’s Hospital Medical Center/University of Cincinnati College of Medicine, Cincinnati, Ohio, USA
- Benjamin Kinnear
- Internal Medicine and Pediatrics, Department of Pediatrics, Cincinnati Children’s Hospital Medical Center, University of Cincinnati College of Medicine, Cincinnati, Ohio, USA
13. Wagner M, Gomez-Garibello C, Seymour N, Okrainec A, Vassiliou M. An argument-based validation study of the fundamentals of laparoscopic surgery (FLS) program. Surg Endosc 2023. [PMID: 36997649; DOI: 10.1007/s00464-023-10020-1]
Abstract
BACKGROUND The Fundamentals of Laparoscopic Surgery (FLS) program was launched over 15 years ago. Since then, there has been an exponential rise in advancements of laparoscopy and its uses. In response, we conducted an argument-based validation study of FLS. The purpose of this paper is to exemplify this approach to validation for surgical education researchers, using FLS as an illustrative case. METHODS The argument-based approach to validation involves three key actions: (1) developing interpretation and use arguments; (2) research; and (3) building a validity argument. Drawing from the validation study of FLS, each step is exemplified. RESULTS Qualitative and quantitative data sources from the FLS validity examination study provided evidence that both supported claims and generated backing for rebuttals. Some of the key findings were synthesized in a validity argument to illustrate its structure. DISCUSSION The argument-based validation approach described here has numerous advantages over other validation approaches: (1) it is endorsed by the foundational documents in assessment and evaluation research; (2) its specific language of claims, inferences, warrants, assumptions, and rebuttals provides a systematic and unified way to communicate both the processes and outcomes of validation; and (3) the use of logical reasoning in building the validity argument clearly delineates the relationship between evidence and the inferences made to support desired uses and interpretations from assessments.
Affiliation(s)
- Maryam Wagner
- Faculty of Medicine and Health Sciences, Institute of Health Sciences Education, McGill University, Lady Meredith House, 1110 Pine Avenue West, Montreal, QC, H3A 1A3, Canada
- Carlos Gomez-Garibello
- Faculty of Medicine and Health Sciences, Institute of Health Sciences Education, McGill University, Lady Meredith House, 1110 Pine Avenue West, Montreal, QC, H3A 1A3, Canada
- Neal Seymour
- Department of Surgery, University of Massachusetts Chan Medical School-Baystate, Worcester, USA
- Allan Okrainec
- Department of Surgery, University of Toronto, Toronto, Canada
14. Roberge-Dao J, Maggio LA, Zaccagnini M, Rochette A, Shikako K, Boruff J, Thomas A. Challenges and future directions in the measurement of evidence-based practice: Qualitative analysis of umbrella review findings. J Eval Clin Pract 2023; 29:218-227. [PMID: 36440876; DOI: 10.1111/jep.13790]
Abstract
RATIONALE, AIMS AND OBJECTIVES: An important aspect of scholarly discussions about evidence-based practice (EBP) is how EBP is measured. Given the conceptual and empirical developments in the study of EBP over the last 3 decades, there is a need to better understand how to best measure EBP in educational and clinical contexts. The aim of this study was to identify and describe the main challenges, recommendations for practice, and areas of future research in the measurement of EBP across the health professions as reported by systematic reviews (SRs). METHODS We conducted a secondary analysis of qualitative data obtained in the context of a previously published umbrella review that aimed to compare SRs on EBP measures. Two reviewers independently extracted excerpts from the results and discussion/conclusion sections of the 10 included SRs that aligned with the three research aims. An iterative six-phase reflexive thematic analysis according to Braun and Clarke was conducted. RESULTS Our thematic analysis produced five themes describing the main challenges associated with measuring EBP, four themes outlining main recommendations for practice, and four themes representing areas of future research. Challenges include limited psychometric testing and validity evidence for existing EBP measures; limitations with the self-report format; lack of construct clarity of EBP measures; inability to capture the complexity of the EBP process and outcomes; and the context-specific nature of EBP measures. Reported recommendations for practice include acknowledging the multidimensionality of EBP; adapting EBP measures to the context and re-examining the validity argument; and considering the feasibility and acceptability of measures. Areas of future research included the development of comprehensive, multidimensional EBP measures and the need for expert consensus on the operationalization of EBP. CONCLUSIONS This study suggests that existing measures may be insufficient in capturing the multidimensional, contextual and dynamic nature of EBP. There is a need for a clear operationalization of EBP and an improved understanding and application of validity theory.
Affiliation(s)
- Jacqueline Roberge-Dao
- School of Physical and Occupational Therapy, McGill University, Montréal, Canada and Centre for Interdisciplinary Research in Rehabilitation of Greater Montréal, Montréal, Canada
- Lauren A Maggio
- Medicine and Health Professions Education, Uniformed Services University, Bethesda, Maryland, USA
- Marco Zaccagnini
- School of Physical and Occupational Therapy, McGill University, Montréal, Canada and Centre for Interdisciplinary Research in Rehabilitation of Greater Montréal, Montréal, Canada
- Annie Rochette
- School of Rehabilitation, Université de Montréal, Montréal, Canada and Centre for Interdisciplinary Research in Rehabilitation of Greater Montréal, Université de Montréal, Montréal, Canada
- Keiko Shikako
- School of Physical and Occupational Therapy, McGill University, Montréal, Canada and Centre for Interdisciplinary Research in Rehabilitation of Greater Montréal, Montréal, Canada
- Jill Boruff
- Schulich Library of Physical Sciences, Life Sciences, and Engineering, McGill University, Montréal, Canada
- Aliki Thomas
- School of Physical and Occupational Therapy and The Institute of Health Sciences Education, McGill University, Montréal, Canada and Centre for Interdisciplinary Research in Rehabilitation of Greater Montréal, Montréal, Canada
15. Toews H, Pearce J, Tavares W. Recasting Assessment in Continuing Professional Development as a Person-Focused Activity. The Journal of Continuing Education in the Health Professions 2023; 43:S35-S40. [PMID: 38054490; DOI: 10.1097/ceh.0000000000000538]
Abstract
ABSTRACT In this article, we examine assessment as conceptualized and enacted in continuing professional development (CPD). Assessment is pervasive throughout the life of an individual health professional, serving many different purposes compounded by varied and unique contexts, each with their own drivers and consequences, usually casting the person as the object of assessment. Assessment is often assumed to be an included part of how CPD is conceptualized and developed. Research on assessment in CPD is often focused on systems, utility, and quality instead of intentionally examining the link between assessment and the person. We present an alternative view of assessment in CPD as person-centered, practice-informed, situated and bound by capability, and enacted in social and material contexts. With this lens of assessment as an inherently personal experience, we introduce the concept of subjectification, as described by educationalist Gert Biesta. We propose that subjectification may be a fruitful way of examining assessment in a CPD context. While the CPD community, researchers, and educators consider this further, we offer some early implications of adopting a subjectification lens on the design and enactment of assessment in CPD.
Affiliation(s)
- Helen Toews
- Registered Dietitian, The Wilson Centre, University of Toronto, Toronto, Ontario, Canada
- Jacob Pearce
- Principal Research Fellow, Specialist and Professional Assessment, Tertiary Education, Australian Council for Educational Research, Camberwell, Australia
- Walter Tavares
- Scientist/Assistant Professor, Department of Health and Society, The Wilson Centre, Department of Medicine, University of Toronto, Scarborough, Ontario, Canada
16. Kim ME, Tretter J, Wilmot I, Hahn E, Redington A, McMahon CJ. Entrustable Professional Activities and Their Relevance to Pediatric Cardiology Training. Pediatr Cardiol 2022; 44:757-768. [PMID: 36576524; PMCID: PMC9795145; DOI: 10.1007/s00246-022-03067-9]
Abstract
Entrustable professional activities (EPAs) have become a popular framework for medical trainee assessment and a supplemental component for milestone and competency assessment. EPAs were developed to facilitate assessment of competencies and furthermore to facilitate translation into clinical practice. In this review, we explore the rationale for the introduction of EPAs, examine whether they fulfill the promise expected of them, and contemplate further developments in their application with specific reference to training in pediatric cardiology.
Affiliation(s)
- Michael E. Kim
- Department of Pediatrics, College of Medicine, Heart Institute, Cincinnati Children’s Hospital Medical Center, University of Cincinnati, Cincinnati, OH, USA
- Justin Tretter
- Department of Pediatric Cardiology, Pediatric Institute, Cleveland Clinic Children’s, and The Heart, Vascular, and Thoracic Institute, Cleveland Clinic, 9500 Euclid Avenue, M-41, Cleveland, OH 44195, USA
- Ivan Wilmot
- Department of Pediatrics, College of Medicine, Heart Institute, Cincinnati Children’s Hospital Medical Center, University of Cincinnati, Cincinnati, OH, USA
- Eunice Hahn
- Department of Pediatrics, College of Medicine, Heart Institute, Cincinnati Children’s Hospital Medical Center, University of Cincinnati, Cincinnati, OH, USA
- Andrew Redington
- Department of Pediatrics, College of Medicine, Heart Institute, Cincinnati Children’s Hospital Medical Center, University of Cincinnati, Cincinnati, OH, USA
- Colin J. McMahon
- Department of Paediatric Cardiology, Children’s Health Ireland at Crumlin, Crumlin, Dublin, Ireland
- School of Medicine, University College Dublin, Dublin 4, Belfield, Ireland
- School of Health Professions Education, Maastricht University, Maastricht, Netherlands
17. Bouzid D, Mullaert J, Ghazali A, Ferré VM, Mentré F, Lemogne C, Ruszniewski P, Faye A, Dinh AT, Mirault T. eOSCE stations live versus remote evaluation and scores variability. BMC Medical Education 2022; 22:861. [PMID: 36514011; PMCID: PMC9745699; DOI: 10.1186/s12909-022-03919-1]
Abstract
BACKGROUND Objective structured clinical examinations (OSCEs) are known to be a fair evaluation method. In recent years, the use of online OSCEs (eOSCEs) has spread. This study aimed to compare remote versus live evaluation and assess the factors associated with score variability during eOSCEs. METHODS We conducted large-scale eOSCEs at the medical school of the Université Paris Cité in June 2021 and recorded all the students' performances, allowing a second evaluation. To assess the agreement in our context of multiple raters and students, we fitted a linear mixed model with student and rater as random effects and the score as an explained variable. RESULTS One hundred seventy observations were analyzed for the first station after quality control. We retained 192 and 110 observations for the statistical analysis of the two other stations. The median score and interquartile range were 60 out of 100 (IQR 50-70), 60 out of 100 (IQR 54-70), and 53 out of 100 (IQR 45-62) for the three stations. The score variance proportions explained by the rater (ICC rater) were 23.0%, 16.8%, and 32.8%, respectively. Of the 31 raters, 18 (58%) were male. Scores did not differ significantly according to the gender of the rater (p = 0.96, 0.10, and 0.26, respectively). The two evaluations showed no systematic difference in scores (p = 0.92, 0.053, and 0.38, respectively). CONCLUSION Our study suggests that remote evaluation is as reliable as live evaluation for eOSCEs.
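The "ICC rater" quantities reported above are the share of total score variance attributable to the rater in a model with student and rater effects. The sketch below estimates that share on hypothetical, fully crossed data using a two-way ANOVA variance-components decomposition; it is a simplified stand-in for the linear mixed model the authors fitted, and every number in it is invented.

```python
# Hedged sketch: proportion of OSCE score variance explained by the rater,
# estimated from a balanced students x raters matrix of hypothetical scores.
import numpy as np

rng = np.random.default_rng(3)
n_students, n_raters = 60, 6
student_effect = rng.normal(0, 8, (n_students, 1))   # spread in student ability
rater_effect = rng.normal(0, 5, (1, n_raters))       # rater severity/leniency
scores = 60 + student_effect + rater_effect + rng.normal(0, 6, (n_students, n_raters))

grand = scores.mean()
ms_student = n_raters * np.sum((scores.mean(axis=1) - grand) ** 2) / (n_students - 1)
ms_rater = n_students * np.sum((scores.mean(axis=0) - grand) ** 2) / (n_raters - 1)
ss_err = (np.sum((scores - grand) ** 2)
          - ms_student * (n_students - 1) - ms_rater * (n_raters - 1))
ms_err = ss_err / ((n_students - 1) * (n_raters - 1))

var_student = (ms_student - ms_err) / n_raters        # student variance component
var_rater = (ms_rater - ms_err) / n_students          # rater variance component
icc_rater = var_rater / (var_rater + var_student + ms_err)
print(f"proportion of score variance explained by rater: {icc_rater:.1%}")
```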
Affiliation(s)
- Donia Bouzid
- Université Paris Cité and Université Sorbonne Paris Nord, Inserm IAME, F-75018, Paris, France
- Emergency Department, Bichat-Claude Bernard University Hospital AP-HP, Paris, France
- Jimmy Mullaert
- Université Paris Cité and Université Sorbonne Paris Nord, Inserm IAME, F-75018, Paris, France
- Aiham Ghazali
- Université Paris Cité and Université Sorbonne Paris Nord, Inserm IAME, F-75018, Paris, France
- Valentine Marie Ferré
- Université Paris Cité and Université Sorbonne Paris Nord, Inserm IAME, F-75018, Paris, France
- Virology laboratory, Bichat-Claude Bernard University Hospital AP-HP, Paris, France
- France Mentré
- Université Paris Cité and Université Sorbonne Paris Nord, Inserm IAME, F-75018, Paris, France
- Département d'Épidémiologie, Biostatistique et Recherche Clinique, Bichat-Claude Bernard University Hospital AP-HP, Paris, France
- UFR de Médecine, Université Paris Cité, Paris, France
- Cédric Lemogne
- UFR de Médecine, Université Paris Cité, Paris, France
- Université Paris Cité, INSERM U1266, Institut de Psychiatrie et Neuroscience de Paris, F-75014, Paris, France
- Service de Psychiatrie de l'adulte, AP-HP, Hôpital Hôtel-Dieu, F-75004, Paris, France
- Philippe Ruszniewski
- UFR de Médecine, Université Paris Cité, Paris, France
- Service de gastro-entérologie et pancréatologie, Hôpital Beaujon AP-HP, Paris, France
- Albert Faye
- UFR de Médecine, Université Paris Cité, Paris, France
- Service de Pédiatrie Générale, Hôpital Robert Debré AP-HP, Paris, France
- Alexy Tran Dinh
- UFR de Médecine, Université Paris Cité, Paris, France
- Département d'Anesthésie-Réanimation, Hôpital Bichat-Claude Bernard, AP-HP, Paris, France
- Tristan Mirault
- UFR de Médecine, Université Paris Cité, Paris, France
- Département de médecine vasculaire, Hôpital Européen Georges Pompidou AP-HP, Paris, France
- Université Paris Cité, PARCC team 5, INSERM U970, F-75015, Paris, France
18. Zaccagnini M, Bussières A, Mak S, Boruff J, West A, Thomas A. Scholarly practice in healthcare professions: findings from a scoping review. Advances in Health Sciences Education: Theory and Practice 2022. [PMID: 36456756; DOI: 10.1007/s10459-022-10180-0]
Abstract
Scholarly practitioners are broadly defined as healthcare professionals that address critical practice problems using theory, scientific evidence, and practice-based knowledge. Though scholarly practice is included in most competency frameworks, it is unclear what scholarly practice is, how it develops and how it is operationalized in clinical practice. The aim of this review was to determine what is known about scholarly practice in healthcare professionals. We conducted a scoping review and searched MEDLINE, EMBASE, CINAHL from inception to May 2020. We included papers that explored, described, or defined scholarly practice, scholar or scholarly practitioner, and/or related concepts in healthcare professionals. We included a total of 90 papers. Thirty percent of papers contained an explicit definition of scholarly practice. Conceptualizations of scholarly practice were organized into three themes: the interdependent relationship between scholarship and practice; advancing the profession's field; and core to being a healthcare practitioner. Attributes of scholarly practitioners clustered around five themes: commitment to excellence in practice; collaborative nature; presence of virtuous characteristics; effective communication skills; and adaptive change ethos. No single unified definition of scholarly practice exists within the literature. The variability in terms used to describe scholarly practice suggests that it is an overarching concept rather than a definable entity. There are similarities between scholarly practitioners and knowledge brokers regarding attributes and how scholarly practice is operationalized. Individuals engaged in the teaching, research and/or assessment of scholarly practice should make explicit their definitions and expectations for healthcare professionals.
Affiliation(s)
- Marco Zaccagnini
- School of Physical and Occupational Therapy, McGill University, 3654 Promenade Sir William Osler, Montréal, QC, H3G 1Y5, Canada
- Centre for Interdisciplinary Research in Rehabilitation of Greater Montréal, Montréal, QC, Canada
- André Bussières
- School of Physical and Occupational Therapy, McGill University, 3654 Promenade Sir William Osler, Montréal, QC, H3G 1Y5, Canada
- Centre for Interdisciplinary Research in Rehabilitation of Greater Montréal, Montréal, QC, Canada
- Département Chiropratique, Université du Québec à Trois-Rivières, Trois-Rivières, QC, Canada
- Susanne Mak
- School of Physical and Occupational Therapy, McGill University, 3654 Promenade Sir William Osler, Montréal, QC, H3G 1Y5, Canada
- Centre for Interdisciplinary Research in Rehabilitation of Greater Montréal, Montréal, QC, Canada
- Institute of Health Sciences Education, McGill University, Montréal, QC, Canada
- Jill Boruff
- Schulich Library of Physical Sciences, Life Sciences, and Engineering, McGill University, Montréal, QC, Canada
- Andrew West
- The Canadian Society of Respiratory Therapists, Saint John, NB, Canada
- Aliki Thomas
- School of Physical and Occupational Therapy, McGill University, 3654 Promenade Sir William Osler, Montréal, QC, H3G 1Y5, Canada
- Centre for Interdisciplinary Research in Rehabilitation of Greater Montréal, Montréal, QC, Canada
- Institute of Health Sciences Education, McGill University, Montréal, QC, Canada
19. Kinnear B, Schumacher DJ, Driessen EW, Varpio L. How argumentation theory can inform assessment validity: A critical review. Medical Education 2022; 56:1064-1075. [PMID: 35851965; PMCID: PMC9796688; DOI: 10.1111/medu.14882]
Abstract
INTRODUCTION Many health professions education (HPE) scholars frame assessment validity as a form of argumentation in which interpretations and uses of assessment scores must be supported by evidence. However, what are purported to be validity arguments are often merely clusters of evidence without a guiding framework to evaluate, prioritise, or debate their merits. Argumentation theory is a field of study dedicated to understanding the production, analysis, and evaluation of arguments (spoken or written). The aim of this study is to describe argumentation theory, articulating the unique insights it can offer to HPE assessment, and presenting how different argumentation orientations can help reconceptualize the nature of validity in generative ways. METHODS The authors followed a five-step critical review process consisting of iterative cycles of focusing, searching, appraising, sampling, and analysing the argumentation theory literature. The authors generated and synthesised a corpus of manuscripts on argumentation orientations deemed to be most applicable to HPE. RESULTS We selected two argumentation orientations that we considered particularly constructive for informing HPE assessment validity: New rhetoric and informal logic. In new rhetoric, the goal of argumentation is to persuade, with a focus on an audience's values and standards. Informal logic centres on identifying, structuring, and evaluating arguments in real-world settings, with a variety of normative standards used to evaluate argument validity. DISCUSSION Both new rhetoric and informal logic provide philosophical, theoretical, or practical groundings that can advance HPE validity argumentation. New rhetoric's foregrounding of audience aligns with HPE's social imperative to be accountable to specific stakeholders such as the public and learners. Informal logic provides tools for identifying and structuring validity arguments for analysis and evaluation.
Collapse
Affiliation(s)
- Benjamin Kinnear
- Department of Pediatrics, University of Cincinnati College of Medicine, Cincinnati, Ohio, USA
- School of Health Professions Education (SHE), Maastricht University, Maastricht, The Netherlands
| | - Daniel J. Schumacher
- Department of Pediatrics, University of Cincinnati College of Medicine, Cincinnati, Ohio, USA
| | - Erik W. Driessen
- School of Health Professions Education, Faculty of Health, Medicine and Life Sciences, Maastricht University, Maastricht, The Netherlands
| | - Lara Varpio
- Uniformed Services University of the Health Sciences, Bethesda, Maryland, USA
| |
Collapse
|
20
|
Fuller R, Goddard VCT, Nadarajah VD, Treasure-Jones T, Yeates P, Scott K, Webb A, Valter K, Pyorala E. Technology enhanced assessment: Ottawa consensus statement and recommendations. MEDICAL TEACHER 2022; 44:836-850. [PMID: 35771684 DOI: 10.1080/0142159x.2022.2083489] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/15/2023]
Abstract
INTRODUCTION In 2011, a consensus report was produced on technology-enhanced assessment (TEA), its good practices, and future perspectives. Since then, technological advances have enabled innovative practices and tools that have revolutionised how learners are assessed. In this updated consensus, we bring together the potential of technology and the ultimate goals of assessment on learner attainment, faculty development, and improved healthcare practices. METHODS As material for the report, we used scholarly publications on TEA in both HPE and general higher education, feedback from the 2020 Ottawa Conference workshops, and scholarly publications on assessment technology practices during the Covid-19 pandemic. RESULTS AND CONCLUSION The group identified areas of consensus that remained to be resolved and issues that arose in the evolution of TEA. We adopted a three-stage approach (readiness to adopt technology, application of assessment technology, and evaluation/dissemination). The application stage adopted an assessment 'lifecycle' approach and targeted five key foci: (1) Advancing authenticity of assessment, (2) Engaging learners with assessment, (3) Enhancing design and scheduling, (4) Optimising assessment delivery and recording learner achievement, and (5) Tracking learner progress and faculty activity and thereby supporting longitudinal learning and continuous assessment.
Collapse
Affiliation(s)
- Richard Fuller
- Christie Education, The Christie NHS Foundation Trust, Manchester, UK
| | | | | | | | - Peter Yeates
- School of Medicine, University of Keele, Keele, UK
| | - Karen Scott
- Faculty of Medicine and Health, University of Sydney, Sydney, Australia
| | - Alexandra Webb
- College of Health and Medicine, Australian National University, Canberra, Australia
| | - Krisztina Valter
- John Curtin School of Medical Research, Australian National University, Canberra, Australia
| | - Eeva Pyorala
- Center for University Teaching and Learning, University of Helsinki, Helsinki, Finland
| |
Collapse
|
21
|
St-Onge C, Boileau E, Langevin S, Nguyen LHP, Drescher O, Bergeron L, Thomas A. Stakeholders' perception on the implementation of Developmental Progress Assessment: using the Theoretical Domains Framework to document behavioral determinants. ADVANCES IN HEALTH SCIENCES EDUCATION : THEORY AND PRACTICE 2022; 27:735-759. [PMID: 35624332 DOI: 10.1007/s10459-022-10119-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/24/2021] [Accepted: 04/23/2022] [Indexed: 06/15/2023]
Abstract
BACKGROUND The widespread implementation of longitudinal assessment (LA) to document trainees' progression to independent practice rests on speculative rather than evidence-based benefits. We aimed to document stakeholders' knowledge of, and attitudes towards, LA, and to identify how supports and barriers can help or hinder the uptake and sustainable use of LA. METHODS We interviewed representatives from four stakeholder groups involved in LA. The interview protocols were based on the Theoretical Domains Framework (TDF), which contains a total of 14 behaviour change determinants. Two team members coded the interviews deductively to the TDF, with a third resolving differences in coding. The qualitative data analysis was completed with iterative consultations and discussions with team members until consensus was achieved. Saliency analysis was used to identify dominant domains. RESULTS Forty-one individuals participated in the study. Three dominant domains were identified. Participants perceive that LA has more positive than negative consequences and requires substantial resources. All the elements and characteristics of LA are present in our data, with differences between stakeholders. CONCLUSION Going forward, we could develop and implement tailored and theory-driven interventions to promote a shared understanding of LA, and maintain potential positive outcomes while reducing negative ones. Furthermore, resources to support LA implementation need to be addressed to facilitate its uptake.
Collapse
Affiliation(s)
- Christina St-Onge
- Université de Sherbrooke, Christina St-Onge, 3001 12e Avenue Nord, Sherbrooke, QC, J1H 5N4, Canada.
| | - Elisabeth Boileau
- Université de Sherbrooke, Christina St-Onge, 3001 12e Avenue Nord, Sherbrooke, QC, J1H 5N4, Canada
| | - Serge Langevin
- Université de Sherbrooke, Christina St-Onge, 3001 12e Avenue Nord, Sherbrooke, QC, J1H 5N4, Canada
| | | | | | - Linda Bergeron
- Université de Sherbrooke, Christina St-Onge, 3001 12e Avenue Nord, Sherbrooke, QC, J1H 5N4, Canada
| | | |
Collapse
|
22
|
Marceau M, St-Onge C, Gallagher F, Young M. Validity as a social imperative: users' and leaders' perceptions. CANADIAN MEDICAL EDUCATION JOURNAL 2022; 13:22-36. [PMID: 35875440 PMCID: PMC9297243 DOI: 10.36834/cmej.73518] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Indexed: 06/15/2023]
Abstract
INTRODUCTION Recently, validity as a social imperative was proposed as an emerging conceptualization of validity in the assessment literature in health professions education (HPE). To further develop our understanding, we explored the perceived acceptability and anticipated feasibility of validity as a social imperative with users and leaders engaged with assessment in HPE in Canada. METHODS We conducted a qualitative interpretive description study. Purposeful and snowball sampling were used to recruit participants for semi-structured individual interviews and focus groups. Each transcript was analyzed by two team members and discussed with the team until consensus was reached. RESULTS We conducted five focus groups and eleven interviews with two different stakeholder groups (users and leaders). Our findings suggest that the participants perceived the concept of validity as a social imperative as acceptable. Regardless of group, participants shared similar considerations regarding: the limits of traditional validity models, the concept's timeliness and relevance, the need to clarify some terms used to characterize the concept, the similarities with modern theories of validity, and the anticipated challenges in applying the concept in practice. In addition, participants discussed some limits with current approaches to validity in the context of workplace-based and programmatic assessment. CONCLUSION Validity as a social imperative can be interwoven throughout existing theories of validity and may represent how HPE is adapting traditional models of validity in order to respond to the complexity of assessment in HPE; however, challenges likely remain in operationalizing the concept prior to its implementation.
Collapse
Affiliation(s)
- Mélanie Marceau
- School of Nursing, Faculty of Medicine and Health Sciences, Université de Sherbrooke, Quebec, Canada
| | - Christina St-Onge
- Department of Medicine, Faculty of Medicine and Health Sciences, Université de Sherbrooke, Quebec, Canada
| | - Frances Gallagher
- School of Nursing, Faculty of Medicine and Health Sciences, Université de Sherbrooke, Quebec, Canada
| | - Meredith Young
- Institute of Health Sciences Education, Faculty of Medicine and Health Sciences, McGill University, Québec, Canada
| |
Collapse
|
23
|
Roberge-Dao J, Maggio LA, Zaccagnini M, Rochette A, Shikako-Thomas K, Boruff J, Thomas A. Quality, methods, and recommendations of systematic reviews on measures of evidence-based practice: an umbrella review. JBI Evid Synth 2022; 20:1004-1073. [PMID: 35220381 DOI: 10.11124/jbies-21-00118] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/31/2022]
Abstract
OBJECTIVES The objective of the review was to estimate the quality of systematic reviews on evidence-based practice measures across health care professions and identify differences between systematic reviews regarding approaches used to assess the adequacy of evidence-based practice measures and recommended measures. INTRODUCTION Systematic reviews on the psychometric properties of evidence-based practice measures guide researchers, clinical managers, and educators in selecting an appropriate measure for use. The lack of psychometric standards specific to evidence-based practice measures, in addition to recent findings suggesting the low methodological quality of psychometric systematic reviews, calls into question the quality and methods of systematic reviews examining evidence-based practice measures. INCLUSION CRITERIA We included systematic reviews that identified measures that assessed evidence-based practice as a whole or of constituent parts (eg, knowledge, attitudes, skills, behaviors), and described the psychometric evidence for any health care professional group irrespective of assessment context (education or clinical practice). METHODS We searched five databases (MEDLINE, Embase, CINAHL, PsycINFO, and ERIC) on January 18, 2021. Two independent reviewers conducted screening, data extraction, and quality appraisal following the JBI approach. A narrative synthesis was performed. RESULTS Ten systematic reviews, published between 2006 and 2020, were included and focused on the following groups: all health care professionals (n = 3), nurses (n = 2), occupational therapists (n = 2), physical therapists (n = 1), medical students (n = 1), and family medicine residents (n = 1). The overall quality of the systematic reviews was low: none of the reviews assessed the quality of primary studies or adhered to methodological guidelines, and only one registered a protocol. Reporting of psychometric evidence and measurement characteristics differed. While all the systematic reviews discussed internal consistency, feasibility was only addressed by three. Many approaches were used to assess the adequacy of measures, and five systematic reviews referenced tools. Criteria for the adequacy of individual properties and measures varied, but mainly followed standards for patient-reported outcome measures or the Standards of Educational and Psychological Testing. There were 204 unique measures identified across 10 reviews. One review explicitly recommended measures for occupational therapists, three reviews identified adequate measures for all health care professionals, and one review identified measures for medical students. The 27 measures deemed adequate by these five systematic reviews are described. CONCLUSIONS Our results suggest a need to improve the overall methodological quality and reporting of systematic reviews on evidence-based practice measures to increase the trustworthiness of recommendations and allow comprehensive interpretation by end users. Risk of bias is common to all the included systematic reviews, as the quality of primary studies was not assessed. The diversity of tools and approaches used to evaluate the adequacy of evidence-based practice measures reflects tensions regarding the conceptualization of validity, suggesting a need to reflect on the most appropriate application of validity theory to evidence-based practice measures. SYSTEMATIC REVIEW REGISTRATION NUMBER PROSPERO CRD42020160874.
Collapse
Affiliation(s)
- Jacqueline Roberge-Dao
- School of Physical and Occupational Therapy, McGill University, Montréal, QC, Canada
- Centre for Interdisciplinary Research in Rehabilitation of Greater Montréal, Montréal, QC, Canada
| | - Lauren A Maggio
- Medicine and Health Professions Education, Uniformed Services University of the Health Sciences, Bethesda, MD, USA
| | - Marco Zaccagnini
- School of Physical and Occupational Therapy, McGill University, Montréal, QC, Canada
- Centre for Interdisciplinary Research in Rehabilitation of Greater Montréal, Montréal, QC, Canada
| | - Annie Rochette
- Centre for Interdisciplinary Research in Rehabilitation of Greater Montréal, Montréal, QC, Canada
- School of Rehabilitation, Université de Montréal, Montréal, QC, Canada
| | - Keiko Shikako-Thomas
- School of Physical and Occupational Therapy, McGill University, Montréal, QC, Canada
- Centre for Interdisciplinary Research in Rehabilitation of Greater Montréal, Montréal, QC, Canada
| | - Jill Boruff
- Schulich Library of Physical Sciences, Life Sciences, and Engineering, McGill University, Montreal, QC, Canada
| | - Aliki Thomas
- School of Physical and Occupational Therapy, McGill University, Montréal, QC, Canada
- Centre for Interdisciplinary Research in Rehabilitation of Greater Montréal, Montréal, QC, Canada
| |
Collapse
|
24
|
Gordon D, Rencic JJ, Lang VJ, Thomas A, Young M, Durning SJ. Advancing the assessment of clinical reasoning across the health professions: Definitional and methodologic recommendations. PERSPECTIVES ON MEDICAL EDUCATION 2022; 11:108-114. [PMID: 35254653 PMCID: PMC8940991 DOI: 10.1007/s40037-022-00701-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/01/2021] [Revised: 01/24/2022] [Accepted: 02/04/2022] [Indexed: 06/14/2023]
Abstract
The importance of clinical reasoning in patient care is well-recognized across all health professions. Validity evidence supporting high quality clinical reasoning assessment is essential to ensure health professional schools are graduating learners competent in this domain. However, through the course of a large scoping review, we encountered inconsistent terminology for clinical reasoning and inconsistent reporting of methodology, reflecting a somewhat fractured body of literature on clinical reasoning assessment. These inconsistencies impeded our ability to synthesize across studies and appropriately compare assessment tools. More specifically, we encountered: 1) a wide array of clinical reasoning-like terms that were rarely defined or informed by a conceptual framework, 2) limited details of assessment methodology, and 3) inconsistent reporting of the steps taken to establish validity evidence for clinical reasoning assessments. Consolidating our experience in conducting this review, we provide recommendations on key definitional and methodologic elements to better support the development, description, study, and reporting of clinical reasoning assessments.
Collapse
Affiliation(s)
- David Gordon
- Division of Emergency Medicine, Duke University, Durham, NC, USA.
| | - Joseph J Rencic
- Department of Medicine, Boston University School of Medicine, Boston, MA, USA
| | - Valerie J Lang
- Division of Hospital Medicine, University of Rochester School of Medicine and Dentistry, Rochester, NY, USA
| | - Aliki Thomas
- School of Physical and Occupational Therapy, Institute of Health Sciences Education, McGill University, Montreal, QC, Canada
| | - Meredith Young
- Department of Medicine and Institute of Health Sciences Education, McGill University, Montreal, QC, Canada
| | - Steven J Durning
- Department of Medicine, Uniformed Services University of the Health Sciences, Bethesda, MD, USA
| |
Collapse
|
25
|
Roberts C, Khanna P, Lane AS, Reimann P, Schuwirth L. Exploring complexities in the reform of assessment practice: a critical realist perspective. ADVANCES IN HEALTH SCIENCES EDUCATION : THEORY AND PRACTICE 2021; 26:1641-1657. [PMID: 34431028 DOI: 10.1007/s10459-021-10065-8] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/02/2020] [Accepted: 08/08/2021] [Indexed: 06/13/2023]
Abstract
Although the principles behind assessment for and as learning are well-established, there can be a struggle when reforming traditional assessment of learning to a program which encompasses assessment for and as learning. When introducing and reporting reforms, tensions in faculty may arise because of differing beliefs about the relationship between assessment and learning and the rules for the validity of assessments. Traditional systems of assessment of learning privilege objective, structured quantification of learners' performances, and are done to the students. Newer systems of assessment promote assessment for learning, emphasise subjectivity, collate data from multiple sources, emphasise narrative-rich feedback to promote learner agency, and are done with the students. This contrast has implications for implementation and evaluative research. Research of assessment which is done to students typically asks, "what works", whereas assessment that is done with the students focuses on more complex questions such as "what works, for whom, in which context, and why?" We applied such a critical realist perspective drawing on the interplay between structure and agency, and a systems approach to explore what theory says about introducing programmatic assessment in the context of pre-existing traditional approaches. Using a reflective technique, the internal conversation, we developed four factors that can assist educators considering major change to assessment practice in their own contexts. These include enabling positive learner agency and engagement; establishing argument-based validity frameworks; designing purposeful and eclectic evidence-based assessment tasks; and developing a shared narrative that promotes reflexivity in appreciating the complex relationships between assessment and learning.
Collapse
Affiliation(s)
- Chris Roberts
- Faculty of Medicine and Health, Education Office, Sydney Medical School, The University of Sydney, Sydney, NSW, Australia.
| | - Priya Khanna
- Faculty of Medicine and Health, Education Office, Sydney Medical School, The University of Sydney, Sydney, NSW, Australia
| | - Andrew Stuart Lane
- Faculty of Medicine and Health, Education Office, Sydney Medical School, The University of Sydney, Sydney, NSW, Australia
| | - Peter Reimann
- Centre for Research on Learning and Innovation (CRLI), The University of Sydney, Sydney, NSW, Australia
| | - Lambert Schuwirth
- Prideaux Discipline of Clinical Education, College of Medicine and Public Health, Flinders University, Adelaide, South Australia, Australia
| |
Collapse
|
26
|
Kinnear B, Kelleher M, May B, Sall D, Schauer DP, Schumacher DJ, Warm EJ. Constructing a Validity Map for a Workplace-Based Assessment System: Cross-Walking Messick and Kane. ACADEMIC MEDICINE : JOURNAL OF THE ASSOCIATION OF AMERICAN MEDICAL COLLEGES 2021; 96:S64-S69. [PMID: 34183604 DOI: 10.1097/acm.0000000000004112] [Citation(s) in RCA: 16] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
PROBLEM Health professions education has shifted to a competency-based paradigm in which many programs rely heavily on workplace-based assessment (WBA) to produce data for summative decisions about learners. However, WBAs are complex and require validity evidence beyond psychometric analysis. Here, the authors describe their use of a rhetorical argumentation process to develop a map of validity evidence for summative decisions in an entrustment-based WBA system. APPROACH To organize evidence, the authors cross-walked 2 contemporary validity frameworks, one that emphasizes sources of evidence (Messick) and another that stresses inferences in an argument (Kane). They constructed a validity map using 4 steps: (1) Asking critical questions about the stated interpretation and use, (2) Seeking validity evidence as a response, (3) Categorizing evidence using both Messick's and Kane's frameworks, and (4) Building a visual representation of the collected and organized evidence. The authors used an iterative approach, adding new critical questions and evidence over time. OUTCOMES The first map draft produced 25 boxes of evidence that included all 5 sources of evidence detailed by Messick and spread across all 4 inferences described by Kane. The rhetorical question-response process allowed for structured critical appraisal of the WBA system, leading to the identification of evidentiary gaps. NEXT STEPS Future map iterations will integrate evidence quality indicators and allow for deeper dives into the evidence. The authors intend to share their map with graduate medical education stakeholders (e.g., accreditors, institutional leaders, learners, patients) to understand if it adds value for evaluating their WBA programs' validity arguments.
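A minimal sketch of one way such a Messick-by-Kane cross-walk grid could be represented, assuming a simple nested-dictionary layout; the evidence entries and gap count below are hypothetical illustrations, not items from the authors' actual map.

```python
# Minimal sketch of a Messick x Kane "validity map" grid (hypothetical entries).
# Rows: Kane's inferences; columns: Messick's evidence sources.

KANE_INFERENCES = ["scoring", "generalization", "extrapolation", "implications"]
MESSICK_SOURCES = ["content", "response process", "internal structure",
                   "relations to other variables", "consequences"]

# Each cell holds evidence items collected in response to a critical question.
validity_map = {inf: {src: [] for src in MESSICK_SOURCES} for inf in KANE_INFERENCES}

# Hypothetical example entries (not from the authors' actual map):
validity_map["scoring"]["content"].append(
    "Entrustment anchors mapped to curricular milestones")
validity_map["extrapolation"]["relations to other variables"].append(
    "Correlation of workplace ratings with in-training exam scores")

# Count empty cells to spot evidentiary gaps.
gaps = [(i, s) for i in KANE_INFERENCES for s in MESSICK_SOURCES
        if not validity_map[i][s]]
print(f"{len(gaps)} of {len(KANE_INFERENCES) * len(MESSICK_SOURCES)} cells lack evidence")
```

Laying the evidence out in a grid of this kind makes the evidentiary gaps the authors describe straightforward to enumerate.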
Collapse
Affiliation(s)
- Benjamin Kinnear
- B. Kinnear is associate professor of internal medicine and pediatrics, Department of Pediatrics, University of Cincinnati College of Medicine, Cincinnati, Ohio; ORCID: https://orcid.org/0000-0003-0052-4130
| | - Matthew Kelleher
- M. Kelleher is assistant professor of internal medicine and pediatrics, Department of Pediatrics, University of Cincinnati College of Medicine, Cincinnati, Ohio
| | - Brian May
- B. May is assistant professor of internal medicine and pediatrics, Department of Internal Medicine, University of Alabama Birmingham School of Medicine, Birmingham, Alabama
| | - Dana Sall
- D. Sall is program director, HonorHealth Internal Medicine Residency Program, Scottsdale, Arizona, and assistant professor of internal medicine, University of Arizona College of Medicine, Phoenix, Arizona
| | - Daniel P Schauer
- D.P. Schauer is associate professor of internal medicine and associate program director, Department of Internal Medicine, University of Cincinnati College of Medicine, Cincinnati, Ohio; ORCID: https://orcid.org/0000-0003-3264-8154
| | - Daniel J Schumacher
- D.J. Schumacher is associate professor of pediatrics at Cincinnati Children's Hospital Medical Center/University of Cincinnati College of Medicine, Cincinnati, Ohio; ORCID: https://orcid.org/0000-0001-5507-8452
| | - Eric J Warm
- E.J. Warm is professor of internal medicine and program director, Department of Internal Medicine, University of Cincinnati College of Medicine, Cincinnati, Ohio; ORCID: https://orcid.org/0000-0002-6088-2434
| |
Collapse
|
27
|
Touchie C, Kinnear B, Schumacher D, Caretta-Weyer H, Hamstra SJ, Hart D, Gruppen L, Ross S, Warm E, Ten Cate O. On the validity of summative entrustment decisions. MEDICAL TEACHER 2021; 43:780-787. [PMID: 34020576 DOI: 10.1080/0142159x.2021.1925642] [Citation(s) in RCA: 18] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
Health care revolves around trust. Patients are often in a position that gives them no other choice than to trust the people taking care of them. Educational programs thus have the responsibility to develop physicians who can be trusted to deliver safe and effective care, ultimately making a final decision to entrust trainees to graduate to unsupervised practice. Such entrustment decisions deserve to be scrutinized for their validity. This end-of-training entrustment decision is arguably the most important one, although earlier entrustment decisions, for smaller units of professional practice, should also be scrutinized for their validity. Validity of entrustment decisions implies a defensible argument that can be analyzed in components that together support the decision. According to Kane, building a validity argument is a process designed to support inferences of scoring, generalization across observations, extrapolation to new instances, and implications of the decision. A lack of validity can be caused by inadequate evidence in terms of, according to Messick, content, response process, internal structure (coherence) and relationship to other variables, and in misinterpreted consequences. These two leading frameworks (Kane and Messick) in educational and psychological testing can be well applied to summative entrustment decision-making. The authors elaborate the types of questions that need to be answered to arrive at defensible, well-argued summative decisions regarding performance to provide a grounding for high-quality safe patient care.
Collapse
Affiliation(s)
- Claire Touchie
- Medical Council of Canada, Ottawa, Canada
- The University of Ottawa, Ottawa, Canada
| | - Benjamin Kinnear
- Internal Medicine and Pediatrics, University of Cincinnati College of Medicine/Cincinnati Children's Hospital Medical Center, Cincinnati, OH, USA
| | - Daniel Schumacher
- Pediatrics, Cincinnati Children's Hospital Medical Center/University of Cincinnati College of Medicine, Cincinnati, OH, USA
| | - Holly Caretta-Weyer
- Emergency Medicine, Stanford University School of Medicine, Palo Alto, CA, USA
| | - Stanley J Hamstra
- University of Toronto, Toronto, Ontario, Canada
- Accreditation Council for Graduate Medical Education, Chicago, IL, USA
| | - Danielle Hart
- Emergency Medicine, Hennepin Healthcare and the University of Minnesota, Minneapolis, MN, USA
| | - Larry Gruppen
- Learning Health Sciences, University of Michigan Medical School, Ann Arbor, MI, USA
| | - Shelley Ross
- Department of Family Medicine, University of Alberta, Edmonton, AB, Canada
| | - Eric Warm
- University of Cincinnati College of Medicine, Cincinnati, OH, USA
| | - Olle Ten Cate
- Center for Research and Development of Education, University Medical Center Utrecht, Utrecht, The Netherlands
| |
Collapse
|
28
|
Bégin P, Gagnon R, Leduc JM, Paradis B, Renaud JS, Beauchamp J, Rioux R, Carrier MP, Hudon C, Vautour M, Ouellet A, Bourget M, Bourdy C. Accuracy of rating scale interval values used in multiple mini-interviews: a mixed methods study. ADVANCES IN HEALTH SCIENCES EDUCATION : THEORY AND PRACTICE 2021; 26:37-51. [PMID: 32378151 DOI: 10.1007/s10459-020-09970-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/29/2019] [Accepted: 04/27/2020] [Indexed: 06/11/2023]
Abstract
When determining the score given to candidates in multiple mini-interview (MMI) stations, raters have to translate a narrative judgment to an ordinal rating scale. When adding individual scores to calculate final ranking, it is generally presumed that the values of possible scores on the evaluation grid are separated by constant intervals, following a linear function, although this assumption is seldom validated with raters themselves. Inaccurate interval values could lead to systemic bias that could potentially distort candidates' final cumulative scores. The aim of this study was to establish rating scale values based on raters' intent, to validate these with an independent quantitative method, to explore their impact on final score, and to appraise their meaning according to experienced MMI interviewers. A 4-round consensus-group exercise was independently conducted with 42 MMI interviewers who were asked to determine relative values for the 6-point rating scale (from A to F) used in the Canadian integrated French MMI (IFMMI). In parallel, relative values were also calculated for each option of the scale by comparing the average scores concurrently given to the same individual in other stations every time that option was selected during three consecutive IFMMI years. Data from the same three cohorts was used to simulate the impact of using new score values on final rankings. Comments from the consensus group exercise were reviewed independently by two authors to explore raters' rationale for choosing specific values. Relative to the maximum (A = 100%) and minimum (F = 0%), experienced raters arrived at values of 86.7% (95% CI 86.3-87.1), 69.5% (68.9-70.1), 51.2% (50.6-51.8), and 29.3% (28.1-30.5), for scores of B, C, D and E respectively. The concurrent score approach was based on 43,412 IFMMI stations performed by 4345 medical school applicants. It provided quasi-identical values of 87.1% (82.4-91.5), 70.4% (66.1-74.7), 51.2% (47.1-55.3) and 31.8% (27.9-35.7), respectively. Qualitative analysis explained that while high scores are usually based on minor details of relatively low importance, low scores are usually given for more serious offenses and were assumed by the raters to carry more weight in the final score. Individual drops or increases in final MMI ranking with the use of the new scale values ranged from −21 to +5 percentiles, with the average candidate changing by ±1.4 percentiles. Consulting with experienced interviewers is a simple and effective approach to establish rating scale values that truly reflect raters' intent in MMI, thus improving the accuracy of the instrument and contributing to the general fairness of the process.
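A rough worked illustration of why the interval values matter once station ratings are summed: the sketch below, assuming a ten-station MMI and a hypothetical candidate, contrasts the conventional equal-interval (linear) values with the empirically derived values reported above.

```python
# Worked sketch: converting MMI letter ratings to station scores under the
# conventional linear assumption vs. the empirically derived interval values
# reported by the raters (B = 86.7%, C = 69.5%, D = 51.2%, E = 29.3%).
LINEAR    = {"A": 100.0, "B": 80.0, "C": 60.0, "D": 40.0, "E": 20.0, "F": 0.0}
EMPIRICAL = {"A": 100.0, "B": 86.7, "C": 69.5, "D": 51.2, "E": 29.3, "F": 0.0}

def mmi_total(ratings, scale):
    """Average the per-station values for one candidate's letter ratings."""
    return sum(scale[r] for r in ratings) / len(ratings)

# Hypothetical candidate with ratings across ten stations (illustrative only).
ratings = ["B", "B", "C", "A", "D", "B", "C", "B", "E", "C"]
print(f"linear:    {mmi_total(ratings, LINEAR):.1f}")
print(f"empirical: {mmi_total(ratings, EMPIRICAL):.1f}")
```

With these hypothetical ratings the candidate's mean station score shifts by roughly seven points between the two scales, which is the kind of movement in final ranking the authors simulate.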
Collapse
Affiliation(s)
- Philippe Bégin
- Faculty of Medicine, Université de Montréal, Montreal, Canada.
- CHU Sainte-Justine, 3175 Chemin de la Côte-Sainte-Catherine, Montreal, QC, H3T 1C5, Canada.
| | - Robert Gagnon
- Faculty of Medicine, Université de Montréal, Montreal, Canada
| | | | | | | | - Jacinthe Beauchamp
- Faculty of Medicine, Université Sherbrooke, Sherbrooke, Canada
- Centre de Formation Médicale du Nouveau-Brunswick, Moncton, Canada
| | - Richard Rioux
- Faculty of Social Science, Université du Québec à Montréal, Montreal, Canada
| | | | - Claire Hudon
- Faculty of Medicine, Université Laval, Quebec City, Canada
| | - Marc Vautour
- Faculty of Medicine, Université Sherbrooke, Sherbrooke, Canada
- Centre de Formation Médicale du Nouveau-Brunswick, Moncton, Canada
| | - Annie Ouellet
- Faculty of Medicine, Université Sherbrooke, Sherbrooke, Canada
| | | | | |
Collapse
|
29
|
St-Onge C, Young M, Renaud JS, Cummings BA, Drescher O, Varpio L. Sound Practices: An Exploratory Study of Building and Monitoring Multiple-Choice Exams at Canadian Undergraduate Medical Education Programs. ACADEMIC MEDICINE : JOURNAL OF THE ASSOCIATION OF AMERICAN MEDICAL COLLEGES 2021; 96:271-277. [PMID: 32769474 DOI: 10.1097/acm.0000000000003659] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
PURPOSE Written examinations such as multiple-choice question (MCQ) exams are a key assessment strategy in health professions education (HPE), frequently used to provide feedback, to determine competency, or for licensure decisions. However, traditional psychometric approaches for monitoring the quality of written exams, defined as items that are discriminating and contribute to increasing the overall reliability and validity of the exam scores, usually warrant larger samples than are typically available in HPE contexts. The authors conducted a descriptive exploratory study to document how undergraduate medical education (UME) programs ensure the quality of their written exams, particularly MCQs. METHOD Using a qualitative descriptive methodology, the authors conducted semistructured interviews with 16 key informants from 10 Canadian UME programs in 2018. Interviews were transcribed, anonymized, coded by the primary investigator, and co-coded by a second team member. Data collection and analysis were conducted iteratively. Research team members engaged in analysis across phases, and consensus was reached on the interpretation of findings via group discussion. RESULTS Participants focused their answers around MCQ-related practices, reporting using several indicators of quality such as alignment between items and course objectives and psychometric properties (difficulty and discrimination). The authors clustered findings around 5 main themes: processes for creating MCQ exams, processes for building quality MCQ exams, processes for monitoring the quality of MCQ exams, motivation to build quality MCQ exams, and suggestions for improving processes. CONCLUSIONS Participants reported engaging multiple strategies to ensure the quality of MCQ exams. Assessment quality considerations were integrated throughout the development and validation phases, reflecting recent work regarding validity as a social imperative.
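For readers less familiar with the two psychometric properties participants cite, here is a minimal sketch of how item difficulty and item-total (point-biserial) discrimination are commonly computed; the response data and functions are hypothetical illustrations, not the programs' actual monitoring tools.

```python
# Minimal sketch of the two classical item statistics named in the abstract:
# difficulty (proportion correct) and discrimination (item-total point-biserial).
import statistics

def item_difficulty(item_scores):
    """Proportion of examinees answering the item correctly (0..1)."""
    return sum(item_scores) / len(item_scores)

def item_discrimination(item_scores, total_scores):
    """Point-biserial correlation between the item and the total exam score."""
    n = len(item_scores)
    mean_i, mean_t = statistics.mean(item_scores), statistics.mean(total_scores)
    cov = sum((i - mean_i) * (t - mean_t) for i, t in zip(item_scores, total_scores)) / n
    return cov / (statistics.pstdev(item_scores) * statistics.pstdev(total_scores))

# Hypothetical data: 1 = correct, 0 = incorrect for one item; totals on the exam.
item   = [1, 1, 0, 1, 0, 1, 1, 0]
totals = [38, 35, 22, 40, 25, 33, 36, 20]
print(f"difficulty     = {item_difficulty(item):.2f}")
print(f"discrimination = {item_discrimination(item, totals):.2f}")
```

In small HPE cohorts these statistics are unstable, which is the sampling constraint the abstract highlights.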
Collapse
Affiliation(s)
- Christina St-Onge
- C. St-Onge is professor, Department of Medicine, Faculty of Medicine and Health Sciences, and Chaire de recherche en pédagogie médicale Paul Grand'Maison de la Société des médecins de l'Université de Sherbrooke, Université de Sherbrooke, Sherbrooke, Quebec, Canada; ORCID: http://orcid.org/0000-0001-5313-0456
| | - Meredith Young
- M. Young is associate professor, Institute of Health Sciences Education and Department of Medicine, McGill University, Montreal, Canada; ORCID: http://orcid.org/0000-0002-2036-2119
| | - Jean-Sebastien Renaud
- J.-S. Renaud is associate professor, Department of Family and Emergency Medicine, and Office of Education and Continuing Professional Development, Laval University, Quebec, Canada; ORCID: https://orcid.org/0000-0002-2816-0773
| | - Beth-Ann Cummings
- B.-A. Cummings is associate professor, Department of Medicine, McGill University, associate member, Institute of Health Sciences Education, and former associate dean for undergraduate medical education, McGill University, Montreal, Canada; ORCID: http://orcid.org/0000-0001-6565-6930
| | - Olivia Drescher
- O. Drescher is a research professional, Department of Family and Emergency Medicine, and Office of Education and Continuing Professional Development, Laval University, Quebec, Canada
| | - Lara Varpio
- L. Varpio is professor of medicine and associate director of research, Health Professions Education graduate degree program, Uniformed Services University of the Health Sciences, Bethesda, Maryland; ORCID: https://orcid.org/0000-0002-1412-4341
| |
Collapse
|
30
|
Felthun JZ, Taylor S, Shulruf B, Allen DW. Assessment methods and the validity and reliability of measurement tools in online objective structured clinical examinations: a systematic scoping review. JOURNAL OF EDUCATIONAL EVALUATION FOR HEALTH PROFESSIONS 2021; 18:11. [PMID: 34058802 PMCID: PMC8212027 DOI: 10.3352/jeehp.2021.18.11] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/11/2021] [Accepted: 05/18/2021] [Indexed: 05/21/2023]
Abstract
The coronavirus disease 2019 (COVID-19) pandemic has required educators to adapt the in-person objective structured clinical examination (OSCE) to online settings in order for it to remain a critical component of the multifaceted assessment of a student’s competency. This systematic scoping review aimed to summarize the assessment methods and validity and reliability of the measurement tools used in current online OSCE (hereafter, referred to as teleOSCE) approaches. A comprehensive literature review was undertaken following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews guidelines. Articles were eligible if they reported any form of performance assessment, in any field of healthcare, delivered in an online format. Two reviewers independently screened the results and analyzed relevant studies. Eleven articles were included in the analysis. Pre-recorded videos were used in 3 studies, while observations by remote examiners through an online platform were used in 7 studies. Acceptability as perceived by students was reported in 2 studies. This systematic scoping review identified several insights garnered from implementing teleOSCEs, the components transferable from telemedicine, and the need for systemic research to establish the ideal teleOSCE framework. TeleOSCEs may be able to improve the accessibility and reproducibility of clinical assessments and equip students with the requisite skills to effectively practice telemedicine in the future.
Collapse
Affiliation(s)
| | - Silas Taylor
- Office of Medical Education, University of New South Wales, Sydney, NSW, Australia
| | - Boaz Shulruf
- Office of Medical Education, University of New South Wales, Sydney, NSW, Australia
- Centre for Medical and Health Sciences Education, Faculty of Medical and Health Sciences, University of Auckland, Auckland, New Zealand
| | - Digby Wigram Allen
- School of Medicine, The University of New South Wales, Kensington, NSW, Australia
- Corresponding
| |
Collapse
|
31
|
Tavares W, Rowland P, Dagnone D, McEwen LA, Billett S, Sibbald M. Translating outcome frameworks to assessment programmes: Implications for validity. MEDICAL EDUCATION 2020; 54:932-942. [PMID: 32614480 DOI: 10.1111/medu.14287] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/03/2020] [Revised: 06/14/2020] [Accepted: 06/24/2020] [Indexed: 06/11/2023]
Abstract
OBJECTIVES Competency-based medical education (CBME) requires that educators structure assessment of clinical competence using outcome frameworks. Although these frameworks may serve some outcomes well (e.g. represent eventual practice), translating these into workplace-based assessment plans may undermine validity and, therefore, trustworthiness of assessment decisions due to a number of competing factors that may not always be visible or whose impact may not be knowable. Explored here is the translation process from outcome framework to formative and summative assessment plans in postgraduate medical education (PGME) in three Canadian universities. METHODS We conducted a qualitative study involving in-depth semi-structured interviews with leaders of PGME programmes involved in assessment and/or CBME implementation, with a focus on their assessment-based translational activities and evaluation strategies. Interviews were informed by Callon's theory of translation. Our analytical strategy involved directed content analysis, allowing us to be guided by Kane's validity framework, whilst still participating in open coding and analytical memo taking. We then engaged in axial coding to systematically explore themes across the dataset, various situations and our conceptual framework. RESULTS Twenty-four interviews were conducted involving 15 specialties across three universities. Our results suggest: (i) outcome frameworks are viewed as necessary for good assessment but also as incomplete constructs; (ii) there are a number of social and practical negotiations with competing factors that displace validity as a core influencer in assessment planning, including implementation, accreditation and technology; and (iii) validity exists as threatened, uncertain and assumed due to a number of unchecked assumptions and reliance on surrogates. CONCLUSIONS Translational processes in CBME involve negotiating with numerous influencing actors and institutions that, from an assessment perspective, provide challenges for assessment scientists, institutions and educators to contend with. These processes are challenging validity as a core element of assessment designs. Educators must reconcile these influences when preparing for or structuring validity arguments.
Collapse
Affiliation(s)
- Walter Tavares
- The Wilson Centre and Post-MD Education, University Health Network, University of Toronto, Toronto, ON, Canada
| | - Paula Rowland
- The Wilson Centre and Post-MD Education, University Health Network, University of Toronto, Toronto, ON, Canada
| | - Damon Dagnone
- School of Medicine, Queens University, Kingston, ON, Canada
| | - Laura A McEwen
- School of Medicine, Queens University, Kingston, ON, Canada
| | - Stephen Billett
- School of Education and Professional Studies, Griffith University, Mount Gravatt, QLD, Australia
| | - Matthew Sibbald
- Department of Medicine, Centre for Simulation Based Learning, McMaster University, Hamilton, ON, Canada
| |
Collapse
|
32
|
St-Onge C, Vachon Lachiver É, Langevin S, Boileau E, Bernier F, Thomas A. Lessons from the implementation of developmental progress assessment: A scoping review. MEDICAL EDUCATION 2020; 54:878-887. [PMID: 32083743 DOI: 10.1111/medu.14136] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/28/2019] [Revised: 01/21/2020] [Accepted: 02/06/2020] [Indexed: 06/10/2023]
Abstract
OBJECTIVES Educators and researchers recently implemented developmental progress assessment (DPA) in the context of competency-based education. To reap its anticipated benefits, much still remains to be understood about its implementation. In this study, we aimed to determine the nature and extent of the current evidence on DPA, in an effort to broaden our understanding of the major goals and intended outcomes of DPA as well as the lessons learned from how it has been executed in, or applied across, educational contexts. METHODS We conducted a scoping study based on the methodology of Arksey and O'Malley. Our search strategy yielded 2494 articles. These articles were screened for inclusion and exclusion (90% agreement), and numerical and qualitative data were extracted from 56 articles based on a pre-defined set of charting categories. The thematic analysis of the qualitative data was completed with iterative consultations and discussions until consensus was achieved for the interpretation of the results. RESULTS Tools used to document DPA include scales, milestones and portfolios. Performances were observed in clinical or standardised contexts. We identified seven major themes in our qualitative thematic analysis: (a) underlying aims of DPA; (b) sources of information; (c) barriers; (d) contextual factors that can act as barriers or facilitators to the implementation of DPA; (e) facilitators; (f) observed outcomes; and (g) documented validity evidence. CONCLUSIONS Developmental progress assessment seems to fill a need in the training of future competent health professionals. However, moving forward with a widespread implementation of DPA, factors such as lack of access to user-friendly technology and time to observe performance may render its operationalisation burdensome in the context of competency-based medical education.
Collapse
Affiliation(s)
- Christina St-Onge
- Department of Medicine, Faculty of Medicine and Health Sciences, University of Sherbrooke, Sherbrooke, Québec, Canada
| | - Élise Vachon Lachiver
- Faculty of Medicine and Health Sciences, University of Sherbrooke, Sherbrooke, Québec, Canada
| | - Serge Langevin
- Department of Medicine, Faculty of Medicine and Health Sciences, University of Sherbrooke, Sherbrooke, Québec, Canada
| | - Elisabeth Boileau
- Department of Family and Emergency Medicine, Faculty of Medicine and Health Sciences, University of Sherbrooke, Sherbrooke, Québec, Canada
| | - Frédéric Bernier
- Department of Medicine, Faculty of Medicine and Health Sciences, University of Sherbrooke, Sherbrooke, Québec, Canada
- Faculty of Medicine and Health Sciences, University of Sherbrooke, Sherbrooke, Québec, Canada
- Research Center - Sherbrooke University Hospital Center (CHUS), Integrated Health and Social Service Centers (CISSS) and Integrated University Health and Social Service Centres (CIUSSS), Sherbrooke, Québec, Canada
| | - Aliki Thomas
- School of Physical and Occupational Therapy, McGill University, Montreal, Québec, Canada
| |
Collapse
|
33
|
Tavares W, Kuper A, Kulasegaram K, Whitehead C. The compatibility principle: on philosophies in the assessment of clinical competence. ADVANCES IN HEALTH SCIENCES EDUCATION : THEORY AND PRACTICE 2020; 25:1003-1018. [PMID: 31677146 DOI: 10.1007/s10459-019-09939-9] [Citation(s) in RCA: 19] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/22/2019] [Accepted: 10/25/2019] [Indexed: 06/10/2023]
Abstract
The array of different philosophical positions underlying contemporary views on competence, assessment strategies and justification has led to advances in assessment science. Challenges may arise when these philosophical positions are not considered in assessment design. These can include (a) a logical incompatibility leading to varied or difficult interpretations of assessment results, (b) an "anything goes" approach, and (c) uncertainty regarding when and in what context various philosophical positions are appropriate. We propose a compatibility principle that recognizes that different philosophical positions commit assessors/assessment researchers to particular ideas, assumptions and commitments, and applies the logic of philosophically-informed, assessment-based inquiry. Assessment is optimized when its underlying philosophical position produces congruent, aligned and coherent views on constructs, assessment strategies, justification and their interpretations. As a way forward we argue that (a) there can and should be variability in the philosophical positions used in assessment, and these should be clearly articulated to promote understanding of assumptions and make sense of justifications; (b) we focus on developing the merits, boundaries and relationships within and/or between philosophical positions in assessment; (c) we examine a core set of principles related to the role and relevance of philosophical positions; (d) we elaborate strategies and criteria to delineate compatible from incompatible; and (e) we articulate a need to broaden knowledge/competencies related to these issues. The broadened use of philosophical positions in assessment in the health professions affects the "state of play" and can undermine assessment programs. This may be overcome with attention to the alignment between underlying assumptions/commitments.
Collapse
Affiliation(s)
- Walter Tavares
- The Wilson Centre, Department of Medicine, University of Toronto/University Health Network, 200 Elizabeth Street, 1ES-565, Toronto, ON, M5G 2C4, Canada.
- Post-MD Education (Post-Graduate Medical Education/Continued Professional Development), University of Toronto, Toronto, ON, Canada.
| | - Ayelet Kuper
- The Wilson Centre, Department of Medicine, University of Toronto/University Health Network, 200 Elizabeth Street, 1ES-565, Toronto, ON, M5G 2C4, Canada
- Division of General Internal Medicine, Sunnybrook Health Sciences Centre, Toronto, Canada
- Department of Medicine, Faculty of Medicine, University of Toronto, Toronto, Canada
| | - Kulamakan Kulasegaram
- The Wilson Centre, Department of Medicine, University of Toronto/University Health Network, 200 Elizabeth Street, 1ES-565, Toronto, ON, M5G 2C4, Canada
- Department of Family and Community Medicine, Women's College Hospital, Faculty of Medicine, University of Toronto, Toronto, Canada
- MD Program, University of Toronto, Toronto, Canada
| | - Cynthia Whitehead
- The Wilson Centre, Department of Medicine, University of Toronto/University Health Network, 200 Elizabeth Street, 1ES-565, Toronto, ON, M5G 2C4, Canada
- Department of Family and Community Medicine, Women's College Hospital, Faculty of Medicine, University of Toronto, Toronto, Canada
| |
Collapse
|
34
|
Generalizability Theory's Role in Validity Research: Innovative Applications in Health Science Education. HEALTH PROFESSIONS EDUCATION 2020. [DOI: 10.1016/j.hpe.2020.02.002] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022] Open
|
35
|
Rauvola RS, Briggs EP, Hinyard LJ. Nomology, validity, and interprofessional research: The missing link(s). J Interprof Care 2020; 34:545-556. [DOI: 10.1080/13561820.2020.1712333] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/25/2022]
Affiliation(s)
- Rachel S. Rauvola
- Center for Interprofessional Education and Research, Saint Louis University, St. Louis, MO, USA
- Department of Psychology, Saint Louis University, St. Louis, MO, USA
| | - Erick P. Briggs
- Department of Psychology, Saint Louis University, St. Louis, MO, USA
| | - Leslie J. Hinyard
- Center for Interprofessional Education and Research & Center for Health Outcomes Research, Saint Louis University, St. Louis, MO, USA
| |
Collapse
|
36
|
Razack S, Risør T, Hodges B, Steinert Y. Beyond the cultural myth of medical meritocracy. MEDICAL EDUCATION 2020; 54:46-53. [PMID: 31464349 DOI: 10.1111/medu.13871] [Citation(s) in RCA: 30] [Impact Index Per Article: 7.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/25/2018] [Revised: 01/14/2019] [Accepted: 02/12/2019] [Indexed: 06/10/2023]
Abstract
BACKGROUND We examine the cultural myth of the medical meritocracy, whereby the "best and the brightest" are admitted and promoted within the profession. We explore how this narrative guides medical practice in ways that may no longer be adequate in the contexts of practice today. METHODS Narrative analysis of medical students' and physicians' stories. RESULTS Hierarchies of privilege within medicine are linked to meritocracy and the trope of the "hero's story" in literature. Gender and other forms of difference are generally excluded from narratives of excellence, which suggests operative mechanisms that may be contributory to observed differences in attainment. We discuss how the notion of diversity is formulated in medicine as a "problem" to be accommodated within merit, and posit that medical practice today requires a reformulation of the notion of merit in medicine, valorising a diversity of life experience and skills, rather than "retrofitting" diversity concerns as problems to be accommodated within current constructs of merit. CONCLUSIONS Three main action-oriented outcomes for a better formulation of merit relevant to medical practice today are suggested: (a) development of assessors' critical consciousness regarding the structural issues in merit assignment; (b) alignment of merit criteria with relevant societal outcomes, and (c) developing inclusive leadership to accommodate the greater diversity of excellence needed in today's context of medical practice. A reformulation of the stories through which medical practitioners and educators communicate and validate aspects of medical practice will be required in order for the profession to continue to have relevance to the diverse societies it serves.
Collapse
Affiliation(s)
- Saleem Razack
- Department of Pediatrics and Centre for Medical Education, McGill University, Montreal, Quebec, Canada
| | - Torsten Risør
- Department of Community Medicine, Faculty of Health Sciences, UiT The Arctic University of Norway, and Norwegian Centre for E-health Research, Tromso, Norway
| | - Brian Hodges
- Department of Psychiatry, Faculties of Medicine and the Ontario Institute for Studies in Education, University of Toronto, Toronto, Ontario, Canada
| | - Yvonne Steinert
- Family Medicine, Centre for Medical Education, McGill University, Montreal, Quebec, Canada
| |
Collapse
|
37
|
van der Meulen MW, Smirnova A, Heeneman S, Oude Egbrink MGA, van der Vleuten CPM, Lombarts KMJMH. Exploring Validity Evidence Associated With Questionnaire-Based Tools for Assessing the Professional Performance of Physicians: A Systematic Review. ACADEMIC MEDICINE : JOURNAL OF THE ASSOCIATION OF AMERICAN MEDICAL COLLEGES 2019; 94:1384-1397. [PMID: 31460937 DOI: 10.1097/acm.0000000000002767] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
PURPOSE To collect and examine-using an argument-based validity approach-validity evidence of questionnaire-based tools used to assess physicians' clinical, teaching, and research performance. METHOD In October 2016, the authors conducted a systematic search of the literature seeking articles about questionnaire-based tools for assessing physicians' professional performance published from inception to October 2016. They included studies reporting on the validity evidence of tools used to assess physicians' clinical, teaching, and research performance. Using Kane's validity framework, they conducted data extraction based on four inferences in the validity argument: scoring, generalization, extrapolation, and implications. RESULTS They included 46 articles on 15 tools assessing clinical performance and 72 articles on 38 tools assessing teaching performance. They found no studies on research performance tools. Only 12 of the tools (23%) gathered evidence on all four components of Kane's validity argument. Validity evidence focused mostly on generalization and extrapolation inferences. Scoring evidence showed mixed results. Evidence on implications was generally missing. CONCLUSIONS Based on the argument-based approach to validity, not all questionnaire-based tools seem to support their intended use. Evidence concerning implications of questionnaire-based tools is mostly lacking, thus weakening the argument to use these tools for formative and, especially, for summative assessments of physicians' clinical and teaching performance. More research on implications is needed to strengthen the argument and to provide support for decisions based on these tools, particularly for high-stakes, summative decisions. To meaningfully assess academic physicians in their tripartite role as doctor, teacher, and researcher, additional assessment tools are needed.
Collapse
Affiliation(s)
- Mirja W van der Meulen
- M.W. van der Meulen is PhD candidate, Department of Educational Development and Research, Faculty of Health, Medicine and Life Sciences, School of Health Professions Education, Maastricht University, Maastricht, The Netherlands, and member, Professional Performance Research Group, Medical Psychology, Academic Medical Center, University of Amsterdam, Amsterdam, The Netherlands; ORCID: https://orcid.org/0000-0003-3636-5469. A. Smirnova is PhD graduate and researcher, Department of Educational Development and Research, Faculty of Health, Medicine and Life Sciences, School of Health Professions Education, Maastricht University, Maastricht, The Netherlands, and member, Professional Performance Research Group, Medical Psychology, Academic Medical Center, University of Amsterdam, Amsterdam, The Netherlands; ORCID: https://orcid.org/0000-0003-4491-3007. S. Heeneman is professor, Department of Pathology, Faculty of Health, Medicine and Life Sciences, School of Health Professions Education, Maastricht University, Maastricht, The Netherlands; ORCID: https://orcid.org/0000-0002-6103-8075. M.G.A. oude Egbrink is professor, Department of Physiology, Faculty of Health, Medicine and Life Sciences, School of Health Professions Education, Maastricht University, Maastricht, The Netherlands; ORCID: https://orcid.org/0000-0002-5530-6598. C.P.M. van der Vleuten is professor, Department of Educational Development and Research, Faculty of Health, Medicine and Life Sciences, School of Health Professions Education, Maastricht University, Maastricht, The Netherlands; ORCID: https://orcid.org/0000-0001-6802-3119. K.M.J.M.H. Lombarts is professor, Professional Performance Research Group, Medical Psychology, Academic Medical Center, University of Amsterdam, Amsterdam, The Netherlands; ORCID: https://orcid.org/0000-0001-6167-0620
| | | | | | | | | | | |
Collapse
|
38
|
Roberts C, Wilkinson TJ, Norcini J, Patterson F, Hodges BD. The intersection of assessment, selection and professionalism in the service of patient care. MEDICAL TEACHER 2019; 41:243-248. [PMID: 30663488 DOI: 10.1080/0142159x.2018.1554898] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/09/2023]
Affiliation(s)
- Chris Roberts
- Sydney Medical School, Faculty of Medicine and Health, University of Sydney, Sydney, Australia
- Brian D Hodges
- University Health Network and University of Toronto, Toronto, Canada
39
Cumyn A, Ouellet K, Côté AM, Francoeur C, St-Onge C. Role of Researchers in the Ethical Conduct of Research: A Discourse Analysis From Different Stakeholder Perspectives. ETHICS & BEHAVIOR 2018. [DOI: 10.1080/10508422.2018.1539671] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/27/2022]
Affiliation(s)
- Annabelle Cumyn
- Department of Medicine, Faculty of Medicine and Health Sciences, Université de Sherbrooke
- Kathleen Ouellet
- Centre for Health Sciences Education, Faculty of Medicine and Health Sciences, Université de Sherbrooke
- Anne-Marie Côté
- Department of Medicine, Division of Nephrology, Faculty of Medicine and Health Sciences, Université de Sherbrooke
- Caroline Francoeur
- Direction de la coordination de la mission universitaire du CIUSSS de l'Estrie-CHUS, Centre intégré universitaire de santé et des services sociaux de l'Estrie-Centre hospitalier universitaire de Sherbrooke
- Christina St-Onge
- Centre for Health Sciences Education, Department of Medicine, Faculty of Medicine and Health Sciences, Université de Sherbrooke
40
Rousselot N, Tombrey T, Zongo D, Mouillet E, Joseph JP, Gay B, Salmi LR. Development and pilot testing of a tool to assess evidence-based practice skills among French general practitioners. BMC MEDICAL EDUCATION 2018; 18:254. [PMID: 30413196 PMCID: PMC6234795 DOI: 10.1186/s12909-018-1368-y] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 05/29/2018] [Accepted: 10/31/2018] [Indexed: 06/08/2023]
Abstract
BACKGROUND There is currently an absence of valid and relevant instruments to evaluate how evidence-based practice (EBP) training improves physicians' skills, beyond knowledge alone. Our aim was to develop and test a tool to assess physicians' EBP skills. METHODS The tool we developed includes four parts assessing the skills needed to apply the EBP steps: clinical question formulation; literature search; critical appraisal of the literature; and synthesis and decision making. We evaluated content and face validity, then tested the applicability of the tool and whether external observers could use it reliably to assess acquired skills. We estimated kappa coefficients to measure concordance between raters. RESULTS Twelve general practice (GP) residents and eleven GP teachers from the University of Bordeaux, France, were asked to: formulate four clinical questions (diagnosis, prognosis, treatment, and aetiology) from a proposed clinical vignette; find articles or guidelines answering four relevant provided questions; analyse an original article answering one of these questions; synthesize knowledge from provided synopses; and make decisions about the four clinical questions. Concordance between two external raters was excellent for their assessment of participants' appraisal of the significance of article results (K = 0.83), and good for their assessment of the formulation of a diagnostic question (K = 0.76), of PubMed/Medline (K = 0.71) or guideline (K = 0.67) searches, and of the appraisal of the methodological validity of articles (K = 0.68). CONCLUSIONS Our tool allows an in-depth analysis of EBP skills and could therefore supplement existing instruments that focus on knowledge or on a single EBP step. The actual usefulness of such tools for improving care and population health remains to be evaluated.
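The reported kappa values quantify chance-corrected agreement between the two external raters. As an illustration only (the ratings and function below are invented, not taken from the cited study), Cohen's kappa for two raters scoring the same items can be computed as follows.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters scoring the same items on a nominal scale."""
    assert len(rater_a) == len(rater_b) and rater_a, "need paired ratings"
    n = len(rater_a)
    # Observed agreement: proportion of items the raters scored identically.
    p_observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement under independence, from each rater's marginal distribution.
    marg_a, marg_b = Counter(rater_a), Counter(rater_b)
    categories = set(rater_a) | set(rater_b)
    p_expected = sum((marg_a[c] / n) * (marg_b[c] / n) for c in categories)
    return (p_observed - p_expected) / (1 - p_expected)

# Two hypothetical raters scoring 10 residents' critical-appraisal answers as
# adequate (1) or inadequate (0); values are invented for illustration only.
rater_1 = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
rater_2 = [1, 1, 0, 1, 1, 1, 1, 0, 1, 0]
print(round(cohens_kappa(rater_1, rater_2), 2))
```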
Affiliation(s)
- Nicolas Rousselot
- Department of General Practice, University of Bordeaux, F-33000 Bordeaux, France
- Département de Médecine Générale, Université de Bordeaux, Case 148. 146 rue Léo Saignat, 33076 Bordeaux cedex, France
- Thomas Tombrey
- Department of General Practice, University of Bordeaux, F-33000 Bordeaux, France
- Drissa Zongo
- ISPED/Bordeaux School of Public Health, University of Bordeaux, F-33000 Bordeaux, France
- Centre INSERM U-1219 Bordeaux Population Health, F-33000 Bordeaux, France
- Evelyne Mouillet
- ISPED/Bordeaux School of Public Health, University of Bordeaux, F-33000 Bordeaux, France
- Centre INSERM U-1219 Bordeaux Population Health, F-33000 Bordeaux, France
- Jean-Philippe Joseph
- Department of General Practice, University of Bordeaux, F-33000 Bordeaux, France
- Centre INSERM U-1219 Bordeaux Population Health, F-33000 Bordeaux, France
- Bernard Gay
- Department of General Practice, University of Bordeaux, F-33000 Bordeaux, France
- Louis Rachid Salmi
- ISPED/Bordeaux School of Public Health, University of Bordeaux, F-33000 Bordeaux, France
- Centre INSERM U-1219 Bordeaux Population Health, F-33000 Bordeaux, France
- CHU de Bordeaux, Pôle de santé publique, Service d'information médicale, F-33000 Bordeaux, France
41
Young M, Thomas A, Lubarsky S, Ballard T, Gordon D, Gruppen LD, Holmboe E, Ratcliffe T, Rencic J, Schuwirth L, Durning SJ. Drawing Boundaries: The Difficulty in Defining Clinical Reasoning. ACADEMIC MEDICINE : JOURNAL OF THE ASSOCIATION OF AMERICAN MEDICAL COLLEGES 2018; 93:990-995. [PMID: 29369086 DOI: 10.1097/acm.0000000000002142] [Citation(s) in RCA: 80] [Impact Index Per Article: 13.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/08/2023]
Abstract
Clinical reasoning is an essential component of a health professional's practice. Yet clinical reasoning research has produced a notably fragmented body of literature. In this article, the authors describe the pause-and-reflect exercise they undertook during the execution of a synthesis of the literature on clinical reasoning in the health professions. Confronted with the challenge of establishing a shared understanding of the nature and relevant components of clinical reasoning, members of the review team paused to independently generate their own personal definitions and conceptualizations of the construct. Here, the authors describe the variability of definitions and conceptualizations of clinical reasoning present within their own team. Drawing on an analogy from mathematics, they hypothesize that the presence of differing "boundary conditions" could help explain individuals' differing conceptualizations of clinical reasoning and the fragmentation at play in the wider sphere of research on clinical reasoning. Specifically, boundary conditions refer to the practice of describing the conditions under which a given theory is expected to hold, or expected to have explanatory power. Given multiple theoretical frameworks, research methodologies, and assessment approaches contained within the clinical reasoning literature, different boundary conditions are likely at play. Open acknowledgment of different boundary conditions and explicit description of the conceptualization of clinical reasoning being adopted within a given study would improve research communication, support comprehensive approaches to teaching and assessing clinical reasoning, and perhaps encourage new collaborative partnerships among researchers who adopt different boundary conditions.
Affiliation(s)
- Meredith Young
- M. Young is assistant professor, Department of Medicine, and research scientist, Centre for Medical Education, Faculty of Medicine, McGill University, Montreal, Quebec, Canada. A. Thomas is assistant professor, School of Physical and Occupational Therapy, and research scientist, Centre for Medical Education, Faculty of Medicine, McGill University; and researcher, Centre for Interdisciplinary Research in Rehabilitation of Greater Montreal, Montreal, Quebec, Canada. S. Lubarsky is assistant professor, Department of Neurology, and core member, Centre for Medical Education, Faculty of Medicine, McGill University, Montreal, Quebec, Canada. T. Ballard is a plastic surgery resident, University of Michigan, Ann Arbor, Michigan. D. Gordon is associate professor, Division of Emergency Medicine, Department of Surgery, Duke University School of Medicine, Durham, North Carolina. L.D. Gruppen is professor, Department of Learning Health Sciences, University of Michigan Medical School, Ann Arbor, Michigan, United States. E. Holmboe is senior vice president for milestone evaluation and development, Accreditation Council for Graduate Medical Education, Chicago, Illinois, and adjunct professor of medicine, Yale University, New Haven, Connecticut, and Feinberg School of Medicine, Northwestern University, Chicago, Illinois. T. Ratcliffe is associate professor, Department of Medicine, University of Texas Health Science Center, San Antonio, Texas. J. Rencic is associate professor of medicine, Tufts University School of Medicine, and member, Division of General Internal Medicine, Tufts Medical Center, Boston, Massachusetts. L. Schuwirth is professor of medical education, Flinders University, and director, Flinders University Prideaux Centre for Research in Health Professions Education, Adelaide, South Australia, Australia; and professor of medical education, Maastricht University, Maastricht, the Netherlands; Chang Gung University, Taoyuan City, Taiwan; and Uniformed Services University of the Health Sciences, Bethesda, Maryland. S.J. Durning is professor of medicine and director of graduate programs in health professions education, Uniformed Services University of the Health Sciences, Bethesda, Maryland
42
Marceau M, Gallagher F, Young M, St-Onge C. Validity as a social imperative for assessment in health professions education: a concept analysis. MEDICAL EDUCATION 2018; 52:641-653. [PMID: 29878449 DOI: 10.1111/medu.13574] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/15/2017] [Revised: 10/02/2017] [Accepted: 01/30/2018] [Indexed: 06/08/2023]
Abstract
CONTEXT Assessment can have far-reaching consequences for future health care professionals and for society. Thus, it is essential to establish the quality of assessment. Few modern approaches to validity are well situated to ensure the quality of complex assessment approaches, such as authentic and programmatic assessments. Here, we explore and delineate the concept of validity as a social imperative in the context of assessment in health professions education (HPE) as a potential framework for examining the quality of complex and programmatic assessment approaches. METHODS We conducted a concept analysis using Rodgers' evolutionary method to describe the concept of validity as a social imperative in the context of assessment in HPE. Supported by an academic librarian, we developed and executed a search strategy across several databases for literature published between 1995 and 2016. From a total of 321 citations, we identified 67 articles that met our inclusion criteria. Two team members analysed the texts using a specified approach to qualitative data analysis. Consensus was achieved through full team discussions. RESULTS Attributes that characterise the concept were: (i) demonstration of the use of evidence considered credible by society to document the quality of assessment; (ii) validation embedded through the assessment process and score interpretation; (iii) documented validity evidence supporting the interpretation of the combination of assessment findings; and (iv) demonstration of a justified use of a variety of evidence (quantitative and qualitative) to document the quality of assessment strategies. CONCLUSIONS The emerging concept of validity as a social imperative highlights some areas of focus in traditional validation frameworks, whereas some characteristics appear unique to HPE and move beyond traditional frameworks. The study reflects the importance of embedding consideration for society and societal concerns throughout the assessment and validation process, and may represent a potential lens through which to examine the quality of complex and programmatic assessment approaches.
Affiliation(s)
- Mélanie Marceau
- Department of Nursing, Faculty of Medicine and Health Sciences, Université de Sherbrooke, Sherbrooke, Quebec, Canada
- Frances Gallagher
- Department of Nursing, Faculty of Medicine and Health Sciences, Université de Sherbrooke, Sherbrooke, Quebec, Canada
- Meredith Young
- Department of Medicine and Centre for Medical Education, Faculty of Medicine, McGill University, Montreal, Quebec, Canada
- Christina St-Onge
- Department of Medicine, Faculty of Medicine and Health Sciences, Université de Sherbrooke, Sherbrooke, Quebec, Canada
43
Young M, St-Onge C, Xiao J, Vachon Lachiver E, Torabi N. Characterizing the literature on validity and assessment in medical education: a bibliometric study. PERSPECTIVES ON MEDICAL EDUCATION 2018; 7:182-191. [PMID: 29796976 PMCID: PMC6002290 DOI: 10.1007/s40037-018-0433-x] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/30/2023]
Abstract
INTRODUCTION Assessment in medical education fills many roles and is under constant scrutiny. Assessments must be of good quality and supported by validity evidence. Given the high-stakes consequences of assessment and the many audiences within medical education (e.g., by training level or specialty), we set out to document the breadth, scope, and characteristics of the literature reporting on the validation of assessments within medical education. METHOD Searches in Medline (Ovid), Web of Science, ERIC, EMBASE (Ovid), and PsycINFO (Ovid) identified articles reporting on the assessment of learners in medical education published since 1999. Included articles were coded for geographic origin, journal, journal category, targeted assessment, and authors. A map of collaborations between prolific authors was generated. RESULTS A total of 2,863 articles were included. The majority of articles were from the United States, with Canada producing the most articles per medical school. Most articles were published in journals with medical categorizations (73.1% of articles), but Medical Education was the most represented journal (7.4% of articles). Articles reported on a variety of assessment tools and approaches, and 89 prolific authors were identified, with a total of 228 collaborative links. DISCUSSION The literature reporting on validation of assessments in medical education is heterogeneous. It is produced by a broad array of authors and collaborative networks, reported to a broad audience, and generated primarily in North American and European contexts. Our findings speak to the heterogeneity of the medical education literature on assessment validation and suggest that this heterogeneity may stem, at least in part, from differences in the constructs measured, assessment purposes, or conceptualizations of validity.
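The collaboration map described here is, at its core, a weighted co-authorship graph: each pair of authors is linked with a weight equal to the number of coded articles they share. A minimal sketch of that pair-counting step is shown below; the toy article list is invented and does not reproduce the study's data or tooling.

```python
from collections import Counter
from itertools import combinations

def coauthorship_links(articles):
    """Count how often each pair of authors appears together on an article.

    `articles` is a list of author-name lists; the data here are invented examples.
    """
    links = Counter()
    for authors in articles:
        # De-duplicate names within an article, then count every unordered pair once.
        for pair in combinations(sorted(set(authors)), 2):
            links[pair] += 1
    return links

# Toy corpus standing in for coded articles (illustration only, not the study data).
articles = [
    ["Young M", "St-Onge C", "Xiao J"],
    ["Young M", "St-Onge C"],
    ["St-Onge C", "Marceau M"],
]
for pair, weight in coauthorship_links(articles).most_common(3):
    print(pair, weight)
```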
Affiliation(s)
- Meredith Young
- Department of Medicine, McGill University, Montreal, Canada.
- Centre for Medical Education, McGill University, Montreal, Canada.
- Christina St-Onge
- Department of Medicine, Université de Sherbrooke, Sherbrooke, Canada
- Health Profession Education Center, Université de Sherbrooke, Sherbrooke, Canada
- Jing Xiao
- Centre for Medical Education, McGill University, Montreal, Canada
- Nazi Torabi
- Library for Health Sciences, McGill University, Montreal, Canada
44
Tavares W, Brydges R, Myre P, Prpic J, Turner L, Yelle R, Huiskamp M. Applying Kane's validity framework to a simulation based assessment of clinical competence. ADVANCES IN HEALTH SCIENCES EDUCATION : THEORY AND PRACTICE 2018; 23:323-338. [PMID: 29079933 DOI: 10.1007/s10459-017-9800-3] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/22/2017] [Accepted: 10/22/2017] [Indexed: 05/13/2023]
Abstract
Assessment of clinical competence is complex and inference based. Trustworthy and defensible assessment processes must have favourable evidence of validity, particularly where decisions are considered high stakes. We aimed to organize, collect, and interpret validity evidence for a high-stakes simulation-based assessment strategy for certifying paramedics, using Kane's validity framework, which some report as challenging to implement. We describe our experience using the framework, identifying challenges, decision points, interpretations, and lessons learned. We considered data related to four inferences (scoring, generalization, extrapolation, and implications) occurring during assessment and treated validity as a series of assumptions we must evaluate, resulting in several hypotheses and proposed analyses. We then interpreted our findings across the four inferences, judging whether the evidence supported or refuted our proposed uses of the assessment data. Data evaluating scoring included (a) desirable tool characteristics, with acceptable inter-item correlations, (b) strong item-total correlations, (c) low error variance for items and raters, and (d) strong inter-rater reliability. Data evaluating generalization included (a) a robust sampling strategy capturing the majority of relevant medical directives, skills, and national competencies, and (b) good overall and inter-station reliability. Data evaluating extrapolation included low correlations between assessment scores, by dimension, and clinical errors in practice. Data evaluating implications included low error rates in practice. Interpreting our findings according to Kane's framework, we suggest the evidence for scoring, generalization, and implications supports use of our simulation-based paramedic assessment strategy as a certifying exam; however, the extrapolation evidence was weak, suggesting exam scores did not predict clinical error rates. Our analysis represents a worked example others can follow when using Kane's validity framework to evaluate, and iteratively develop and refine, assessment strategies.
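The scoring-inference evidence cited here (inter-item and item-total correlations, error variance, inter-rater reliability) is computed from a candidates-by-items score matrix. The sketch below shows one conventional statistic, the corrected item-total correlation; the function names and score matrix are invented for illustration and are not the instrument or data from the cited study.

```python
import statistics

def pearson(x, y):
    """Plain Pearson correlation between two equal-length numeric sequences."""
    mx, my = statistics.mean(x), statistics.mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def corrected_item_total_correlations(scores):
    """For each item (column), correlate it with the total of the remaining items.

    `scores` is a list of candidates, each a list of per-item scores.
    """
    n_items = len(scores[0])
    results = []
    for i in range(n_items):
        item = [row[i] for row in scores]
        rest_total = [sum(row) - row[i] for row in scores]
        results.append(pearson(item, rest_total))
    return results

# Invented checklist scores for five candidates on four items (illustration only).
scores = [
    [2, 3, 2, 3],
    [1, 1, 2, 1],
    [3, 3, 3, 2],
    [2, 2, 1, 2],
    [1, 2, 1, 1],
]
print([round(r, 2) for r in corrected_item_total_correlations(scores)])
```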
Affiliation(s)
- Walter Tavares
- The Wilson Centre, Department of Medicine, University of Toronto/University Health Network, 200 Elizabeth Street, 1ES-565, Toronto, ON, M5G 2C4, Canada.
- Post-MD Education (Post-Graduate Medical Education/Continued Professional Development), University of Toronto, Toronto, ON, Canada.
- Paramedic and Senior Services, Community and Health Services Department, Regional Municipality of York, Newmarket, ON, Canada.
- Ryan Brydges
- The Wilson Centre, Department of Medicine, University of Toronto/University Health Network, 200 Elizabeth Street, 1ES-565, Toronto, ON, M5G 2C4, Canada
- Paul Myre
- Health Sciences North Base Hospital, Sudbury, ON, Canada
- Jason Prpic
- Health Sciences North Base Hospital, Sudbury, ON, Canada
- Richard Yelle
- Ornge Transport Medicine, Base Hospital and Clinical Affairs, Mississauga, ON, Canada
45
Hodges BD. Rattling minds: the power of discourse analysis in a post-truth world. MEDICAL EDUCATION 2017; 51:235-237. [PMID: 28211149 DOI: 10.1111/medu.13255] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/06/2023]
46
Eva KW. What's in a name? Definitional clarity and its unintended consequences. MEDICAL EDUCATION 2017; 51:1-2. [PMID: 27981659 DOI: 10.1111/medu.13233] [Citation(s) in RCA: 15] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/25/2023]