1. Tavares W, Pearce J. Attending to Variable Interpretations of Assessment Science and Practice. Teaching and Learning in Medicine 2024;36:244-252. PMID: 37431929. DOI: 10.1080/10401334.2023.2231923.
Abstract
Issue: The way educators think about the nature of competence, the approaches they select for assessing it, what the generated data imply, and what counts as good assessment now involve broader and more diverse interpretive processes. As philosophical positions in assessment broaden, educators apply different interpretations to similar assessment concepts. As a result, what is claimed through assessment, including what counts as quality, can differ for each of us despite the use of similar activities and language. This creates uncertainty about how to proceed or, worse, opportunities to question the legitimacy of any assessment activity or outcome. While some debate in assessment is inevitable, most debates have occurred within philosophical positions (e.g., how best to minimize error), whereas newer debates are happening across philosophical positions (e.g., whether error is a useful concept). As new ways of approaching assessment have emerged, the interpretive nature of the underlying philosophical positions has not been sufficiently attended to. Evidence: We illustrate interpretive processes of assessment in action by: (a) summarizing the current health professions assessment context from a philosophical perspective as a way of describing its evolution; (b) demonstrating implications in practice using two examples (i.e., analysis of assessment work and validity claims); and (c) examining pragmatism to demonstrate how, even within specific philosophical positions, opportunities for variable interpretation still exist. Implications: Our concern is not that assessment designers and users hold different assumptions, but that, in practice, educators may unknowingly (or insidiously) apply different assumptions and methodological and interpretive norms, and subsequently settle on different views of what counts as quality assessment even for the same assessment program or event. With the state of assessment in the health professions in flux, we conclude by calling for a philosophically explicit approach to assessment, and we underscore that assessment is, fundamentally, an interpretive process, one that demands the careful elucidation of philosophical assumptions to promote understanding and, ultimately, the defensibility of assessment processes and outcomes.
Affiliation(s)
- Walter Tavares
- The Wilson Centre for Health Professions Education Research, and Post-Graduate Medical Education, Toronto, Canada
- Temerty Faculty of Medicine, University Health Network and University of Toronto, Toronto, Canada
- Department of Health and Society, University of Toronto, Toronto, Canada
- York Region Paramedic Services, Community Health Services, Regional Municipality of York, Newmarket, Canada
- Jacob Pearce
- Tertiary Education, Australian Council for Educational Research, Camberwell, Australia
2. Kinnear B, Beck J, Schumacher DJ, Zhou C, Balmer D. Building a Solid House of Scholarship: The Importance of Foundational Worldviews. Hosp Pediatr 2024;14:e189-e193. PMID: 38384255. DOI: 10.1542/hpeds.2023-007515.
Affiliation(s)
- Benjamin Kinnear
- Departments of Pediatrics, Cincinnati Children's Hospital Medical Center
- Internal Medicine, University of Cincinnati College of Medicine, Cincinnati, Ohio
- Jimmy Beck
- Department of Pediatrics, Seattle Children's Hospital, University of Washington School of Medicine, Seattle, Washington
- Christine Zhou
- Internal Medicine, University of Cincinnati College of Medicine, Cincinnati, Ohio
- Dorene Balmer
- Department of Pediatrics, Children's Hospital of Philadelphia, Perelman School of Medicine at the University of Pennsylvania, Philadelphia, Pennsylvania
3. Lorello GR, Kuper A. What's in a name? Internal coherence as a marker of rigour in research. J Clin Anesth 2024;92:111216. PMID: 37487864. DOI: 10.1016/j.jclinane.2023.111216.
Affiliation(s)
- Gianni R Lorello
- Department of Anesthesia and Pain Management, University Health Network - Toronto Western Hospital, Toronto, ON, Canada; Department of Anesthesiology and Pain Medicine, University of Toronto, Toronto, ON, Canada; The Wilson Centre, University of Toronto - Toronto General Hospital, Toronto, ON, Canada; Women's College Research Institute, Women's College Hospital, Toronto, ON, Canada.
- Ayelet Kuper
- The Wilson Centre, University of Toronto - Toronto General Hospital, Toronto, ON, Canada; Division of General Internal Medicine, Sunnybrook Health Sciences Centre, Toronto, ON, Canada; Department of Medicine, University of Toronto, Toronto, ON, Canada
4. Tavares W, Kinnear B, Schumacher DJ, Forte M. "Rater training" re-imagined for work-based assessment in medical education. Advances in Health Sciences Education: Theory and Practice 2023;28:1697-1709. PMID: 37140661. DOI: 10.1007/s10459-023-10237-8.
Abstract
In this perspective, the authors critically examine "rater training" as it has been conceptualized and used in medical education. By "rater training," they mean the educational events intended to improve rater performance and contributions during assessment events. Historically, rater training programs have focused on modifying faculty behaviours to achieve psychometric ideals (e.g., reliability, inter-rater reliability, accuracy). The authors argue these ideals may now be poorly aligned with contemporary research informing work-based assessment, introducing a compatibility threat with no clear direction on how to proceed. To address this issue, the authors provide a brief historical review of "rater training" and an analysis of the literature examining the effectiveness of rater training programs, focusing mainly on what has served to define effectiveness or improvement. They then draw on philosophical and conceptual shifts in assessment to demonstrate why the function, effectiveness aims, and structure of rater training require reimagining. These shifts include changing competencies for assessors, viewing assessment as a complex cognitive task enacted in a social context, evolving views on biases, and reprioritizing which validity evidence should be most sought in medical education. The authors aim to advance the discussion on rater training by challenging implicit incompatibility issues and stimulating ways to overcome them. They propose that "rater training" (a moniker they suggest be reserved for strong psychometric aims) be augmented with "assessor readiness" programs that link to contemporary assessment science and enact the principle of compatibility between that science and ways of engaging with advances in real-world faculty-learner contexts.
Affiliation(s)
- Walter Tavares
- Department of Health and Society, Wilson Centre, Temerty Faculty of Medicine, University of Toronto, Toronto, ON, Canada.
- Benjamin Kinnear
- Department of Pediatrics, Cincinnati Children's Hospital Medical Center, University of Cincinnati College of Medicine, Cincinnati, OH, USA
- Daniel J Schumacher
- Department of Pediatrics, Cincinnati Children's Hospital Medical Center, University of Cincinnati College of Medicine, Cincinnati, OH, USA
- Milena Forte
- Department of Family and Community Medicine, Temerty Faculty of Medicine, Mount Sinai Hospital, University of Toronto, Toronto, ON, Canada
5. Pearce J, Chiavaroli N, Tavares W. On the use and abuse of metaphors in assessment. Advances in Health Sciences Education: Theory and Practice 2023;28:1333-1345. PMID: 36729196. DOI: 10.1007/s10459-022-10203-w.
Abstract
This paper is motivated by a desire to advance assessment in the health professions through encouraging the judicious and productive use of metaphors. Through five specific examples (pixels, driving lesson/test, jury deliberations, signal processing, and assessment as a toolbox), we interrogate how metaphors are being used in assessment to consider what value they add to the understanding and implementation of assessment practices. By unpacking these metaphors in action, we probe each metaphor's rationale and function, the gains each metaphor makes, and the unintended meanings each may carry. In summarizing common uses of metaphors, we elucidate how each use may carry both advantages and disadvantages. Metaphors can play important roles in simplifying, complexifying, communicating, translating, encouraging reflection, and convincing. They may be powerfully rhetorical, leading to intended consequences, actions, and other pragmatic outcomes. Although metaphors can be extremely helpful, they do not constitute thorough critique, justified evidence, or argumentation. We argue that although metaphors have utility, they must be carefully considered if they are to serve assessment needs in intended ways. We should pay attention to how metaphors may be misinterpreted and to what they ignore or unintentionally signal, and perhaps mitigate this with anticipated corrections or nuanced qualifications. Failure to do so may lead to implementing practices that miss underlying and relevant complexities for assessment science and practice. Using metaphors requires careful attention to their role, contributions, benefits, and limitations. We highlight the value that comes from critiquing metaphors, and demonstrate the care required to ensure their continued utility.
Affiliation(s)
- Jacob Pearce
- Tertiary Education, Australian Council for Educational Research, Camberwell, Australia.
- Neville Chiavaroli
- Tertiary Education, Australian Council for Educational Research, Camberwell, Australia
- Walter Tavares
- Department of Health and Society and Wilson Centre, Temerty Faculty of Medicine, University of Toronto, Toronto, ON, Canada
6. Rietmeijer CBT, van Esch SCM, Blankenstein AH, van der Horst HE, Veen M, Scheele F, Teunissen PW. A phenomenology of direct observation in residency: Is Miller's 'does' level observable? Medical Education 2023;57:272-279. PMID: 36515981. PMCID: PMC10107098. DOI: 10.1111/medu.15004.
Abstract
INTRODUCTION Guidelines on direct observation (DO) present DO as an assessment of Miller's 'does' level, that is, the learner's ability to function independently in clinical situations. The literature, however, indicates that residents may behave 'inauthentically' when observed. To minimise this 'observer effect', learners are encouraged to 'do what they would normally do' so that they can receive feedback on their actual work behaviour. Recent phenomenological research on patients' experiences with DO challenges this approach: patients needed, and caused, some participation of the observing supervisor. Although guidelines advise supervisors to minimise their presence, we are poorly informed about how some deliberate supervisor participation affects residents' experience in DO situations. Therefore, we investigated what residents essentially experienced in DO situations. METHODS We performed an interpretive phenomenological interview study, including six general practice (GP) residents. We collected and analysed our data using the four phenomenological lenses of lived body, lived space, lived time and lived relationship. We grouped our open codes by interpreting what they revealed about common structures of residents' pre-reflective experiences. RESULTS Residents experienced the observing supervisor not just as an observer or assessor. They also experienced them as both a senior colleague and as the patient's familiar GP, which led to many additional interactions. When residents tried to act as if the supervisor were not there, they could feel insecure and handicapped because the supervisor was there, changing the situation. DISCUSSION Our results indicate that the 'observer effect' is much more material than was previously understood. Consequently, observing residents' 'authentic' behaviour at Miller's 'does' level, as if the supervisor were not there, seems impossible and a misleading concept: misleading, because it may frustrate residents and cause supervisors to neglect patients' and residents' needs in DO situations. We suggest that one-way DO is better replaced by bi-directional DO in working-and-learning-together sessions.
Affiliation(s)
- Chris B. T. Rietmeijer
- Department of General Practice, Amsterdam UMC, location Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
- Suzanne C. M. van Esch
- Department of General Practice, Amsterdam UMC, location University of Amsterdam, Amsterdam, The Netherlands
- Annette H. Blankenstein
- Department of General Practice, Amsterdam UMC, location Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
- Henriëtte E. van der Horst
- Department of General Practice, Amsterdam UMC, location Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
- Mario Veen
- Department of General Practice, Erasmus Medical Center, Rotterdam, The Netherlands
- Fedde Scheele
- School of Medical Sciences, Athena Institute for Transdisciplinary Research, Amsterdam UMC, location Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
- Pim W. Teunissen
- School of Health Professions Education, Maastricht University, Maastricht, The Netherlands
7. Ng SL, Forsey J, Boyd VA, Friesen F, Langlois S, Ladonna K, Mylopoulos M, Steenhof N. Combining adaptive expertise and (critically) reflective practice to support the development of knowledge, skill, and society. Advances in Health Sciences Education: Theory and Practice 2022;27:1265-1281. PMID: 36350488. PMCID: PMC9645329. DOI: 10.1007/s10459-022-10178-8.
Abstract
Adaptive expertise (AE) and reflective practice (RP), two influential and resonant theories of professional expertise and practice in their own right, may further benefit health professions education if carefully combined. The current societal and systemic context is primed for both AE and RP. Both bodies of work position practitioners as agentive, learning continually and thoughtfully throughout their careers, particularly in order to manage unprecedented situations well. Though similar on the surface, the roots and practices of AE and RP diverge at key junctures; we focus in particular on RP's movement toward critically reflective practice. The roots of AE and RP, and how they relate to or diverge from present-day applications, matter because in health professions education, as in all education, paradigmatic mixing should be undertaken purposefully. This paper explores the need for AE and RP, their shared commitments, their distinctive histories, their pedagogical possibilities both individually and combined, and next steps for maximizing their potential to positively impact the field. We argue that this exploration is urgently needed because both AE and RP hold much promise for improving health care, and yet employing them optimally, whether alone or together, requires understanding and intent. Throughout the paper, we build an interprofessional education case situated in long-term care to demonstrate the potential that AE and RP might offer to health professions education, individually and combined. This exploration comes just in time. Within the realities of uncertain practice emphasized by the pandemic, practitioners were also called to act in response to complex and urgent social movements. A combined AE and RP approach, with a focus on critically reflective practice in particular, would potentially prepare professionals to respond effectively, compassionately, and equitably to future health and social crises and challenges.
Affiliation(s)
- Stella L Ng
- Centre for Advancing Collaborative Healthcare and Education, University of Toronto, Toronto, Canada.
- Jacquelin Forsey
- Rehabilitation Sciences Institute, University of Toronto, Toronto, Canada
- Victoria A Boyd
- Institute of Health Policy, Management and Evaluation, University of Toronto, Toronto, Canada
- Farah Friesen
- Centre for Advancing Collaborative Healthcare and Education, University of Toronto, Toronto, Canada
- Kori Ladonna
- Department of Innovation in Medical Education, University of Ottawa, Ottawa, Canada
- Maria Mylopoulos
- The Wilson Centre, Temerty Faculty of Medicine, University of Toronto, Toronto, Canada
- Naomi Steenhof
- Leslie Dan Faculty of Pharmacy, University of Toronto, Toronto, Canada
8. Kinnear B, Schumacher DJ, Driessen EW, Varpio L. How argumentation theory can inform assessment validity: A critical review. Medical Education 2022;56:1064-1075. PMID: 35851965. PMCID: PMC9796688. DOI: 10.1111/medu.14882.
Abstract
INTRODUCTION Many health professions education (HPE) scholars frame assessment validity as a form of argumentation in which interpretations and uses of assessment scores must be supported by evidence. However, what are purported to be validity arguments are often merely clusters of evidence without a guiding framework to evaluate, prioritise, or debate their merits. Argumentation theory is a field of study dedicated to understanding the production, analysis, and evaluation of arguments (spoken or written). The aim of this study is to describe argumentation theory, articulate the unique insights it can offer to HPE assessment, and present how different argumentation orientations can help reconceptualize the nature of validity in generative ways. METHODS The authors followed a five-step critical review process consisting of iterative cycles of focusing, searching, appraising, sampling, and analysing the argumentation theory literature. The authors generated and synthesised a corpus of manuscripts on the argumentation orientations deemed most applicable to HPE. RESULTS We selected two argumentation orientations that we considered particularly constructive for informing HPE assessment validity: new rhetoric and informal logic. In new rhetoric, the goal of argumentation is to persuade, with a focus on an audience's values and standards. Informal logic centres on identifying, structuring, and evaluating arguments in real-world settings, with a variety of normative standards used to evaluate argument validity. DISCUSSION Both new rhetoric and informal logic provide philosophical, theoretical, or practical groundings that can advance HPE validity argumentation. New rhetoric's foregrounding of audience aligns with HPE's social imperative to be accountable to specific stakeholders such as the public and learners. Informal logic provides tools for identifying and structuring validity arguments for analysis and evaluation.
Affiliation(s)
- Benjamin Kinnear
- Department of Pediatrics, University of Cincinnati College of Medicine, Cincinnati, Ohio, USA
- School of Health Professions Education (SHE), Maastricht University, Maastricht, The Netherlands
- Daniel J. Schumacher
- Department of Pediatrics, University of Cincinnati College of Medicine, Cincinnati, Ohio, USA
- Erik W. Driessen
- School of Health Professions Education, Faculty of Health, Medicine and Life Sciences, Maastricht University, Maastricht, The Netherlands
- Lara Varpio
- Uniformed Services University of the Health Sciences, Bethesda, Maryland, USA
9. Hatala R, Tavares W. Workplace-based licensing assessments: an idea worth considering? Canadian Medical Education Journal 2022;13:115-116. PMID: 36091735. PMCID: PMC9441125. DOI: 10.36834/cmej.73837.
Affiliation(s)
- Rose Hatala
- Department of Medicine, University of British Columbia, British Columbia, Canada
- Walter Tavares
- The Wilson Centre and Temerty Faculty of Medicine, University Health Network, University of Toronto, Ontario, Canada
10. Spencer M, Sherbino J, Hatala R. Examining the validity argument for the Ottawa Surgical Competency Operating Room Evaluation (OSCORE): a systematic review and narrative synthesis. Advances in Health Sciences Education: Theory and Practice 2022;27:659-689. PMID: 35511356. DOI: 10.1007/s10459-022-10114-w.
Abstract
The Ottawa Surgical Competency Operating Room Evaluation (OSCORE) is an assessment tool that has gained prominence in postgraduate competency-based training programs. We undertook a systematic review and narrative synthesis to articulate the underlying validity argument in support of this tool. Although originally developed to assess readiness for independent performance of a procedure, contemporary implementation includes using the OSCORE for entrustment supervision decisions. We used systematic review methodology to search, identify, appraise, and abstract relevant articles from 2005 to September 2020, across the MEDLINE, EMBASE and Google Scholar databases. Nineteen original, English-language, quantitative or qualitative articles addressing the use of the OSCORE for health professionals' assessment were included. We organized and synthesized the validity evidence according to Kane's framework, articulating the validity argument and identifying evidence gaps. We demonstrate a reasonable validity argument for the OSCORE in surgical specialties, based on assessing surgical competence as readiness for independent performance of a given procedure, which relates to ad hoc, retrospective entrustment supervision decisions. The scoring, generalization and extrapolation inferences are well supported. However, there is a notable lack of implications evidence focused on the impact of the OSCORE on summative decision-making within surgical training programs. In non-surgical specialties, the interpretation/use argument for the OSCORE has not been clearly articulated. The OSCORE has been reduced to a single-item global rating scale, and there is limited validity evidence to support its use in workplace-based assessment. Widespread adoption of the OSCORE must be informed by concurrent data collection in more diverse settings and specialties.
Collapse
Affiliation(s)
- Martha Spencer
- The University of British Columbia, Vancouver, BC, Canada.
- Rose Hatala
- The University of British Columbia, Vancouver, BC, Canada
11. Brydges R, Law M, Ma IWY, Gavarkovs A. On embedding assessments of self-regulated learning into licensure activities in the health professions: a call to action. Canadian Medical Education Journal 2022;13:100-109. PMID: 36091729. PMCID: PMC9441114. DOI: 10.36834/cmej.73855.
Abstract
How well have healthcare professionals and trainees been prepared for the inevitable demands for new learning that will arise in their future? Given the rapidity with which 'core healthcare knowledge' changes, medical educators have a responsibility to audit whether trainees have developed the capacity to effectively self-regulate their learning. Trainees who engage in effective self-regulated learning (SRL) skillfully monitor and control their cognition, motivation, behaviour, and environment to adaptively meet demands for new learning. However, medical curricula rarely assess trainees' capacity to engage in these strategic processes. In this position paper, we argue for a paradigm shift toward assessing SRL more deliberately in undergraduate and postgraduate programs, as well as in associated licensing activities. Specifically, we explore evidence supporting an innovative blend of principles from the science on SRL, and on preparation for future learning (PFL) assessments. We propose recommendations for how program designers, curriculum developers, and assessment leads in undergraduate and postgraduate training programs, and in licensing bodies can work together to develop integrated assessments that measure how and how well trainees engage in SRL. Claims about lifelong learning in health professions education have gone unmatched by responsive curricular changes for far too long. Further neglecting these important competencies represents a disservice to medical trainees and a potential risk to the future patients they will care for.
Affiliation(s)
- Ryan Brydges
- Allan Waters Family Simulation Centre, St. Michael’s Hospital, Unity Health Toronto, Ontario, Canada
- Marcus Law
- MD Program, Temerty Faculty of Medicine, University of Toronto, Ontario, Canada
- Irene WY Ma
- Division of General Internal Medicine, Department of Medicine, Cumming School of Medicine, University of Calgary, Alberta, Canada
- Adam Gavarkovs
- Institute of Health Policy, Management and Evaluation, University of Toronto, Ontario, Canada
12. Jeyalingam T, Walsh CM, Tavares W, Mylopoulos M, Hodwitz K, Liu LWC, Heitman SJ, Brydges R. Variable or Fixed? Exploring Entrustment Decision Making in Workplace- and Simulation-Based Assessments. Academic Medicine 2022;97:1057-1064. PMID: 35263307. DOI: 10.1097/acm.0000000000004661.
Abstract
PURPOSE Many models of competency-based medical education (CBME) emphasize assessing entrustable professional activities (EPAs). Despite the centrality of EPAs, researchers have not compared rater entrustment decisions for the same EPA across workplace- and simulation-based assessments. This study aimed to explore rater entrustment decision making across these 2 assessment settings. METHOD An interview-based study using a constructivist grounded theory approach was conducted. Gastroenterology faculty at the University of Toronto and the University of Calgary completed EPA assessments of trainees' endoscopic polypectomy performance in both workplace and simulation settings between November 2019 and January 2021. After each assessment, raters were interviewed to explore how and why they made entrustment decisions within and across settings. Transcribed interview data were coded iteratively using constant comparison to generate themes. RESULTS Analysis of 20 interviews with 10 raters found that participants (1) held multiple meanings of entrustment and expressed variability in how they justified their entrustment decisions and scoring, (2) held personal caveats for making entrustment decisions "comfortably" (i.e., authenticity, task-related variability, opportunity to assess trainee responses to adverse events, and the opportunity to observe multiple performances over time), (3) experienced cognitive tensions between formative and summative purposes when assessing EPAs, and (4) experienced relative freedom when using simulation to formatively assess EPAs but constraint when using only simulation-based assessments for entrustment decision making. CONCLUSIONS Participants spoke about and defined entrustment variably, which appeared to produce variability in how they judged entrustment across participants and within and across assessment settings. These rater idiosyncrasies suggest that programs implementing CBME must consider how such variability affects the aggregation of EPA assessments, especially those collected in different settings. Program leaders might also consider how to fulfill raters' criteria for comfortably making entrustment decisions by ensuring clear definitions and purposes when designing and integrating workplace- and simulation-based assessments.
Affiliation(s)
- Thurarshen Jeyalingam
- T. Jeyalingam is an advanced fellow in luminal therapeutic endoscopy, University of Calgary, Calgary, Alberta, Canada; ORCID: http://orcid.org/0000-0002-7254-9639
- Catharine M Walsh
- C.M. Walsh is a staff gastroenterologist, Division of Gastroenterology, Hepatology and Nutrition, educational researcher, SickKids Learning Institute, scientist, Child Health Evaluative Sciences, SickKids Research Institute, Hospital for Sick Children, scientist, Wilson Centre, and associate professor of paediatrics, University of Toronto, Toronto, Ontario, Canada; ORCID: http://orcid.org/0000-0003-3928-703X
- Walter Tavares
- W. Tavares is assistant professor and scientist, Wilson Centre and Temerty Faculty of Medicine, University Health Network and University of Toronto, Toronto, Ontario, Canada; ORCID: http://orcid.org/0000-0001-8267-9448
- Maria Mylopoulos
- M. Mylopoulos is associate professor, Department of Paediatrics, and scientist and associate director, Wilson Centre, University of Toronto, Temerty Faculty of Medicine, Toronto, Ontario, Canada; ORCID: http://orcid.org/0000-0003-0012-5375
- Kathryn Hodwitz
- K. Hodwitz is a clinical research specialist, Li Ka Shing Knowledge Institute, St. Michael's Hospital, Unity Health Toronto, Toronto, Ontario, Canada; ORCID: http://orcid.org/0000-0003-3099-1709
- Louis W C Liu
- L.W.C. Liu is associate professor, Department of Medicine, University of Toronto, and head, Division of Gastroenterology and Hepatology, University Health Network and Sinai Health, Toronto, Ontario, Canada; ORCID: http://orcid.org/0000-0001-6899-7941
- Steven J Heitman
- S.J. Heitman is associate professor, Departments of Medicine and Community Health Sciences, Cumming School of Medicine, holds the N.B. Hershfield Chair in Therapeutic Endoscopy, University of Calgary, is medical director, Forzani & MacPhail Colon Cancer Screening Centre, and scientific director, Digestive Health Strategic Clinical Network, Alberta Health Services, Calgary, Alberta, Canada; ORCID: http://orcid.org/0000-0002-4952-779X
- Ryan Brydges
- R. Brydges is a scientist and holds the Professorship in Technology-Enabled Education, St. Michael's Hospital, Unity Health Toronto, and is associate professor, Department of Medicine, University of Toronto, Toronto, Ontario, Canada; ORCID: https://orcid.org/0000-0001-5203-7049
13
Defining Foundational Competence for Prelicensure and Graduate Nursing Students: A Concept Analysis and Conceptual Model. Nurse Educ Pract 2022; 64:103415. [DOI: 10.1016/j.nepr.2022.103415] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/25/2022] [Revised: 07/07/2022] [Accepted: 07/14/2022] [Indexed: 11/15/2022]
14
Ng SL, Crukley J, Brydges R, Boyd V, Gavarkovs A, Kangasjarvi E, Wright S, Kulasegaram K, Friesen F, Woods NN. Toward 'seeing' critically: a Bayesian analysis of the impacts of a critical pedagogy. ADVANCES IN HEALTH SCIENCES EDUCATION : THEORY AND PRACTICE 2022; 27:323-354. [PMID: 34973100 PMCID: PMC9117363 DOI: 10.1007/s10459-021-10087-2] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/16/2021] [Accepted: 11/14/2021] [Indexed: 05/30/2023]
Abstract
Critical reflection supports enactment of the social roles of care, like collaboration and advocacy. We require evidence that links critical teaching approaches to future critically reflective practice. We thus asked: does a theory-informed approach to teaching critical reflection influence what learners talk about (i.e. topics of discussion) and how they talk (i.e. whether they talk in critically reflective ways) during subsequent learning experiences? Pre-clinical students (n = 75) were randomized into control and intervention conditions (8 groups each, of up to 5 interprofessional students). Participants completed an online Social Determinants of Health (SDoH) module, followed by either: a SDoH discussion (control) or critically reflective dialogue (intervention). Participants then experienced a common learning session (homecare curriculum and debrief) as outcome assessment, and another similar session one-week later. Blinded coders coded transcripts for what (topics) was said and how (critically reflective or not). We constructed Bayesian regression models for the probability of meaning units (unique utterances) being coded as particular what codes and as critically reflective or not (how). Groups exposed to the intervention were more likely, in a subsequent learning experience, to talk in a critically reflective manner (how) (0.096 [0.04, 0.15]) about similar content (no meaningful differences in what was said). This difference waned at one-week follow up. We showed experimentally that a particular critical pedagogical approach can make learners' subsequent talk, ways of seeing, more critically reflective even when talking about similar topics. This study offers the field important new options for studying historically challenging-to-evaluate impacts and supports theoretical assertions about the potential of critical pedagogies.
Affiliation(s)
- Stella L Ng
- University of Toronto Centre for Interprofessional Education at University Health Network, Toronto Western Hospital, 399 Bathurst St., Nassau Annex (Entrance), Toronto, ON, M5T 2S8, Canada.
- Department of Speech-Language Pathology, University of Toronto, Toronto, ON, Canada.
- Wilson Centre, University of Toronto, Toronto, ON, Canada.
- Jeff Crukley
- Department of Speech-Language Pathology, University of Toronto, Toronto, ON, Canada
- Data Science and Statistics, Toronto, ON, Canada
- Ryan Brydges
- Department of Medicine, University of Toronto, Toronto, ON, Canada
- Education, Unity Health Toronto, Toronto, ON, Canada
- Wilson Centre, University of Toronto, Toronto, ON, Canada
- Victoria Boyd
- Institute of Health Policy, Management & Evaluation, Faculty of Medicine, University of Toronto, Toronto, ON, Canada
- Wilson Centre, University of Toronto, Toronto, ON, Canada
- Adam Gavarkovs
- Institute of Health Policy, Management & Evaluation, Faculty of Medicine, University of Toronto, Toronto, ON, Canada
- Wilson Centre, University of Toronto, Toronto, ON, Canada
- Sarah Wright
- Department of Family and Community Medicine and Wilson Centre, Faculty of Medicine, University of Toronto, Toronto, ON, Canada
- Wilson Centre, University of Toronto, Toronto, ON, Canada
- Kulamakan Kulasegaram
- Department of Family and Community Medicine and Wilson Centre, Faculty of Medicine, University of Toronto, Toronto, ON, Canada
- Wilson Centre, University of Toronto, Toronto, ON, Canada
- Farah Friesen
- University of Toronto Centre for Interprofessional Education at University Health Network, Toronto Western Hospital, 399 Bathurst St., Nassau Annex (Entrance), Toronto, ON, M5T 2S8, Canada
- Nicole N Woods
- Department of Family and Community Medicine and Wilson Centre, Faculty of Medicine, University of Toronto, Toronto, ON, Canada
- Wilson Centre, University of Toronto, Toronto, ON, Canada
15
Yeates P, Moult A, Cope N, McCray G, Fuller R, McKinley R. Determining influence, interaction and causality of contrast and sequence effects in objective structured clinical exams. MEDICAL EDUCATION 2022; 56:292-302. [PMID: 34893998 PMCID: PMC9304241 DOI: 10.1111/medu.14713] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 07/23/2021] [Revised: 11/03/2021] [Accepted: 12/08/2021] [Indexed: 06/14/2023]
Abstract
INTRODUCTION Differential rater function over time (DRIFT) and contrast effects (examiners' scores biased away from the standard of preceding performances) both challenge the fairness of scoring in objective structured clinical exams (OSCEs). This is important as, under some circumstances, these effects could alter whether some candidates pass or fail assessments. Benefitting from experimental control, this study investigated the causality, operation and interaction of both effects simultaneously for the first time in an OSCE setting. METHODS We used secondary analysis of data from an OSCE in which examiners scored embedded videos of student performances interspersed between live students. Embedded video position varied between examiners (early vs. late) whilst the standard of preceding performances naturally varied (previous high or low). We examined linear relationships suggestive of DRIFT and contrast effects in all within-OSCE data before comparing the influence and interaction of 'early' versus 'late' and 'previous high' versus 'previous low' conditions on embedded video scores. RESULTS Analysis of linear relationships did not support the presence of DRIFT or contrast effects. Embedded videos were scored higher early (19.9 [19.4-20.5]) versus late (18.6 [18.1-19.1], p < 0.001), but scores did not differ between previous high and previous low conditions. The interaction term was non-significant. CONCLUSIONS In this instance, the small DRIFT effect we observed on embedded videos can be causally attributed to examiner behaviour. Contrast effects appear less ubiquitous than some prior research suggests. Possible mediators of these findings include the OSCE context, the detail of task specification, examiners' cognitive load, and the distribution of learners' ability. As the operation of these effects appears to vary across contexts, further research is needed to determine the prevalence and mechanisms of contrast and DRIFT effects, so that assessments may be designed in ways that are likely to avoid their occurrence. Quality assurance should monitor for these contextually variable effects in order to ensure OSCE equivalence.
Affiliation(s)
- Peter Yeates
- School of Medicine, Keele University, Keele, UK
- Fairfield General Hospital, Pennine Acute Hospitals NHS Trust, Bury, UK
16
Tavares W, Gofton W, Bhanji F, Dudek N. Reframing the O-SCORE as a Retrospective Supervision Scale Using Validity Theory. J Grad Med Educ 2022; 14:22-24. [PMID: 35222815 PMCID: PMC8848889 DOI: 10.4300/jgme-d-21-00592.1] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 02/03/2023] Open
Affiliation(s)
- Walter Tavares
- Walter Tavares, PhD, is Assistant Professor and Scientist, The Wilson Centre and Temerty Faculty of Medicine, University Health Network and University of Toronto, Toronto, Ontario, Canada
- Wade Gofton
- Wade Gofton, MD, MEd, is Professor, Department of Surgery, University of Ottawa, Ottawa, Ontario, Canada
- Farhan Bhanji
- Farhan Bhanji, MD, MSc(Ed), is Professor, Department of Pediatrics, McGill University, Montreal, Quebec, Canada, and Associate Director of Assessment Strategy, Royal College of Physicians and Surgeons, Ottawa, Ontario, Canada
- Nancy Dudek
- Nancy Dudek, MD, MEd, is Professor, Department of Medicine, Division of Physical Medicine and Rehabilitation, and The Ottawa Hospital, University of Ottawa, Ottawa, Ontario, Canada
17
Tavares W, Hodwitz K, Rowland P, Ng S, Kuper A, Friesen F, Shwetz K, Brydges R. Implicit and inferred: on the philosophical positions informing assessment science. ADVANCES IN HEALTH SCIENCES EDUCATION : THEORY AND PRACTICE 2021; 26:1597-1623. [PMID: 34370126 DOI: 10.1007/s10459-021-10063-w] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/04/2021] [Accepted: 07/25/2021] [Indexed: 06/13/2023]
Abstract
Assessment practices have been increasingly informed by a range of philosophical positions. While generally beneficial, the addition of options can lead to misalignment in the philosophical assumptions associated with different features of assessment (e.g., the nature of constructs and competence, ways of assessing, validation approaches). Such incompatibility can threaten the quality and defensibility of researchers' claims, especially when left implicit. We investigated how authors state and use their philosophical positions when designing and reporting on performance-based assessments (PBA) of intrinsic roles, as well as the (in)compatibility of assumptions across assessment features. Using a representative sample of studies examining PBA of intrinsic roles, we used qualitative content analysis to extract data on how authors enacted their philosophical positions across three key assessment features: (1) construct conceptualizations, (2) assessment activities, and (3) validation methods. We also examined patterns in philosophical positioning across features and studies. In reviewing 32 papers from established peer-reviewed journals, we found (a) authors rarely reported their philosophical positions, meaning underlying assumptions could only be inferred; (b) authors approached features of assessment in variable ways that could be informed by or associated with different philosophical assumptions; (c) we experienced uncertainty in determining (in)compatibility of philosophical assumptions across features. Authors' philosophical positions were often vague or absent in the selected contemporary assessment literature. Leaving such details implicit may lead to misinterpretation by knowledge users wishing to implement, build on, or evaluate the work. As such, assessing claims, quality, and defensibility may increasingly depend more on who is interpreting, rather than what is being interpreted.
Affiliation(s)
- Walter Tavares
- The Wilson Centre, Temerty Faculty of Medicine, Department of Medicine, Institute for Health Policy, Management and Evaluation, University of Toronto/University Health Network, Toronto, Ontario, Canada.
- Kathryn Hodwitz
- Li Ka Shing Knowledge Institute, St. Michael's Hospital, Toronto, Ontario, Canada
- Paula Rowland
- The Wilson Centre, Temerty Faculty of Medicine, Department of Occupational Therapy and Occupational Science, University of Toronto/University Health Network, Toronto, Ontario, Canada
- Stella Ng
- The Wilson Centre, Temerty Faculty of Medicine, Department of Speech-Language Pathology, University of Toronto, Centre for Faculty Development, Unity Health Toronto, Toronto, Ontario, Canada
- Ayelet Kuper
- The Wilson Centre, University Health Network/University of Toronto, Division of General Internal Medicine, Sunnybrook Health Sciences Centre, Department of Medicine, Faculty of Medicine, University of Toronto, Toronto, Ontario, Canada
- Farah Friesen
- Centre for Faculty Development, Temerty Faculty of Medicine, University of Toronto at Unity Health Toronto, Toronto, Ontario, Canada
- Katherine Shwetz
- Department of English, University of Toronto, Toronto, Ontario, Canada
- Ryan Brydges
- The Wilson Centre, Temerty Faculty of Medicine, Department of Medicine, Unity Health Toronto, University of Toronto, Toronto, Ontario, Canada
18
Pearce J, Tavares W. A philosophical history of programmatic assessment: tracing shifting configurations. ADVANCES IN HEALTH SCIENCES EDUCATION : THEORY AND PRACTICE 2021; 26:1291-1310. [PMID: 33893881 DOI: 10.1007/s10459-021-10050-1] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/23/2020] [Accepted: 04/09/2021] [Indexed: 06/12/2023]
Abstract
Programmatic assessment is now well entrenched in medical education, allowing us to reflect on when it first emerged and how it evolved into the form we know today. Drawing upon the intellectual tradition of historical epistemology, we provide a philosophically-oriented historiographical study of programmatic assessment. Our goal is to trace its relatively short historical trajectory by describing shifting configurations in its scene of inquiry-focusing on questions, practices, and philosophical presuppositions. We identify three historical phases: emergence, evolution and entrenchment. For each, we describe the configurations of the scene; examine underlying philosophical presuppositions driving changes; and detail upshots in assessment practice. We find that programmatic assessment emerged in response to positivist 'turmoil' prior to 2005, driven by utility considerations and implicit pragmatist undertones. Once introduced, it evolved with notions of diversity and learning being underscored, and a constructivist ontology developing at its core. More recently, programmatic assessment has become entrenched as its own sub-discipline. Rich narratives have been emphasised, but philosophical underpinnings have been blurred. We hope to shed new light on current assessment practices in the medical education community by interrogating the history of programmatic assessment from this philosophical vantage point. Making philosophical presuppositions explicit highlights the perspectival nature of aspects of programmatic assessment, and suggests reasons for perceived benefits as well as potential tensions, contradictions and vulnerabilities in the approach today. We conclude by offering some reflections on important points to emerge from our historical study, and suggest 'what next' for programmatic assessment in light of this endeavour.
Affiliation(s)
- J Pearce
- Tertiary Education (Assessment), Australian Council for Educational Research, 19 Prospect Hill Road, Camberwell, VIC, 3124, Australia.
- W Tavares
- The Wilson Centre and Post-MD Education, University Health Network and University of Toronto, Toronto, ON, Canada
19
Ginsburg S, Watling CJ, Schumacher DJ, Gingerich A, Hatala R. Numbers Encapsulate, Words Elaborate: Toward the Best Use of Comments for Assessment and Feedback on Entrustment Ratings. ACADEMIC MEDICINE : JOURNAL OF THE ASSOCIATION OF AMERICAN MEDICAL COLLEGES 2021; 96:S81-S86. [PMID: 34183607 DOI: 10.1097/acm.0000000000004089] [Citation(s) in RCA: 23] [Impact Index Per Article: 7.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
The adoption of entrustment ratings in medical education is based on a seemingly simple premise: to align workplace-based supervision with resident assessment. Yet it has been difficult to operationalize this concept. Entrustment rating forms combine numeric scales with comments and are embedded in a programmatic assessment framework, which encourages the collection of a large quantity of data. The implicit assumption that more is better has led to an untamable volume of data that competency committees must grapple with. In this article, the authors explore the roles of numbers and words on entrustment rating forms, focusing on the intended and optimal use(s) of each, with a focus on the words. They also unpack the problematic issue of dual-purposing words for both assessment and feedback. Words have enormous potential to elaborate, to contextualize, and to instruct; to realize this potential, educators must be crystal clear about their use. The authors set forth a number of possible ways to reconcile these tensions by more explicitly aligning words to purpose. For example, educators could focus written comments solely on assessment; create assessment encounters distinct from feedback encounters; or use different words collected from the same encounter to serve distinct feedback and assessment purposes. Finally, the authors address the tyranny of documentation created by programmatic assessment and urge caution in yielding to the temptation to reduce words to numbers to make them manageable. Instead, they encourage educators to preserve some educational encounters purely for feedback, and to consider that not all words need to become data.
Affiliation(s)
- Shiphra Ginsburg
- S. Ginsburg is professor of medicine, Department of Medicine, Sinai Health System and Faculty of Medicine, University of Toronto, scientist, Wilson Centre for Research in Education, University of Toronto, Toronto, Ontario, Canada, and Canada Research Chair in Health Professions Education; ORCID: http://orcid.org/0000-0002-4595-6650
- Christopher J Watling
- C.J. Watling is professor and director, Centre for Education Research and Innovation, Schulich School of Medicine & Dentistry, Western University, London, Ontario, Canada; ORCID: https://orcid.org/0000-0001-9686-795X
- Daniel J Schumacher
- D.J. Schumacher is associate professor of pediatrics, Cincinnati Children's Hospital Medical Center and University of Cincinnati College of Medicine, Cincinnati, Ohio; ORCID: https://orcid.org/0000-0001-5507-8452
- Andrea Gingerich
- A. Gingerich is assistant professor, Northern Medical Program, University of Northern British Columbia, Prince George, British Columbia, Canada; ORCID: https://orcid.org/0000-0001-5765-3975
- Rose Hatala
- R. Hatala is professor, Department of Medicine, and director, Clinical Educator Fellowship, Center for Health Education Scholarship, University of British Columbia, Vancouver, British Columbia, Canada; ORCID: https://orcid.org/0000-0003-0521-2590