1. Costich M, Friedman S, Robinson V, Catallozzi M. Implementation and faculty perception of outpatient medical student workplace-based assessments. Clin Teach 2024;21:e13751. PMID: 38433555; DOI: 10.1111/tct.13751.
Abstract
BACKGROUND: There is growing interest in the use of entrustable professional activity (EPA)-grounded workplace-based assessments (WBAs) to assess medical students through direct observation in the clinical setting. However, there has been very little reflection on how these tools are received by the faculty using them to deliver feedback. Faculty acceptance of WBAs is fundamentally important to sustained utilisation in the clinical setting, and understanding faculty perceptions of the WBA as an adjunct for giving targeted feedback is necessary to guide future faculty development in this area.
APPROACH: A formative EPA-grounded WBA was implemented in the ambulatory setting during the paediatrics clerkship, following performance-driven training and frame-of-reference training with faculty. Surveys and semi-structured interviews with faculty members explored how faculty perceived the tool and its impact on feedback delivery.
EVALUATION: Faculty reported providing more specific, task-oriented feedback following implementation of the WBA, as well as greater timeliness of feedback and greater satisfaction with opportunities to provide feedback, although these latter two findings did not reach statistical significance. Themes from the interviews reflected the benefits of WBAs, persistent barriers to the provision of feedback, and suggestions for improvement of the WBA.
IMPLICATIONS: EPA-grounded WBAs are feasible to implement in the outpatient primary care setting and improve feedback delivery around core EPAs. The WBAs positively affected the way faculty conceptualise feedback and helped them provide learners with more actionable, behaviour-based feedback. Findings will inform modifications to the WBA and future faculty development and training to allow for sustainable WBA utilisation in the core clerkship.
Affiliation(s)
- Marguerite Costich
- Division of Child and Adolescent Health, Department of Pediatrics, Columbia University Vagelos College of Physicians and Surgeons and NewYork-Presbyterian, New York, New York, USA
- Department of Pediatrics, Columbia University Vagelos College of Physicians and Surgeons and NewYork-Presbyterian, New York, New York, USA
- Suzanne Friedman
- Division of Child and Adolescent Health, Department of Pediatrics, Columbia University Vagelos College of Physicians and Surgeons and NewYork-Presbyterian, New York, New York, USA
- Department of Pediatrics, Columbia University Vagelos College of Physicians and Surgeons and NewYork-Presbyterian, New York, New York, USA
- Victoria Robinson
- Division of Child and Adolescent Health, Department of Pediatrics, Columbia University Vagelos College of Physicians and Surgeons and NewYork-Presbyterian, New York, New York, USA
- Department of Pediatrics, Columbia University Vagelos College of Physicians and Surgeons and NewYork-Presbyterian, New York, New York, USA
- Marina Catallozzi
- Division of Child and Adolescent Health, Department of Pediatrics, Columbia University Vagelos College of Physicians and Surgeons and NewYork-Presbyterian, New York, New York, USA
- Department of Pediatrics, Columbia University Vagelos College of Physicians and Surgeons and NewYork-Presbyterian, New York, New York, USA
- Department of Population and Family Health, Mailman School of Public Health, Columbia University Irving Medical Center, New York, New York, USA
2. Hauer KE, Park YS, Bullock JL, Tekian A. "My Assessments Are Biased!" Measurement and Sociocultural Approaches to Achieve Fairness in Assessment in Medical Education. Acad Med 2023;98:S16-S27. PMID: 37094278; DOI: 10.1097/acm.0000000000005245.
Abstract
Assessing learners is foundational to their training and developmental growth throughout the medical education continuum. However, growing evidence shows the prevalence and impact of harmful bias in assessments in medical education, accelerating the urgency to identify solutions. Assessment bias presents a critical problem for all stages of learning and the broader educational system. Bias poses significant challenges to learners, disrupts the learning environment, and threatens the pathway and transition of learners into health professionals. While the topic of assessment bias has been examined within the context of the measurement literature, limited guidance and solutions exist for learners in medical education, particularly in the clinical environment. This article presents an overview of assessment bias, focusing on clinical learners. A definition of bias and its manifestations in assessments are presented. Consequences of assessment bias are discussed within the contexts of validity and fairness, including their impact on learners, patients/caregivers, and the broader field of medicine. Messick's unified validity framework is used to contextualize assessment bias; in addition, perspectives from sociocultural contexts are incorporated to elaborate on the nuanced implications in the clinical training environment. These topics are discussed in the context of the existing literature and the interventions used to date. The article concludes with practical recommendations to overcome bias and to develop an ideal assessment system. Recommendations address articulating values to guide assessment, designing assessment to foster learning and outcomes, attending to assessment procedures, promoting continuous quality improvement of assessment, and fostering equitable learning and assessment environments.
Affiliation(s)
- Karen E Hauer
- K.E. Hauer is associate dean for competency assessment and professional standards, and professor, Department of Medicine, University of California, San Francisco School of Medicine, San Francisco, California; ORCID: http://orcid.org/0000-0002-8812-4045
- Yoon Soo Park
- Y.S. Park is associate professor and associate head, Department of Medical Education, University of Illinois at Chicago College of Medicine, Chicago, Illinois; ORCID: http://orcid.org/0000-0001-8583-4335
- Justin L Bullock
- J.L. Bullock is a fellow, Department of Medicine, Division of Nephrology, University of Washington School of Medicine, Seattle, Washington; ORCID: http://orcid.org/0000-0003-4240-9798
- Ara Tekian
- A. Tekian is professor and associate dean for international education, Department of Medical Education, University of Illinois at Chicago College of Medicine, Chicago, Illinois; ORCID: http://orcid.org/0000-0002-9252-1588
3. Schauer DP, Kinnear B, Kelleher M, Sall D, Schumacher DJ, Warm EJ. Developing the Expected Entrustment Score: Accounting for Variation in Resident Assessment. J Gen Intern Med 2022;37:3670-3675. PMID: 35377114; PMCID: PMC9585130; DOI: 10.1007/s11606-022-07492-7.
Abstract
BACKGROUND: Clinical competency committees (CCCs) and residency program leaders may find it difficult to interpret workplace-based assessment (WBA) ratings knowing that contextual factors and bias play a large role.
OBJECTIVE: We describe the development of an expected entrustment score for resident performance within the context of our well-developed Observable Practice Activity (OPA) WBA system.
DESIGN: Observational study.
PARTICIPANTS: Internal medicine residents.
MAIN MEASURE: Entrustment.
KEY RESULTS: Each individual resident had observed entrustment scores with a unique relationship to the expected entrustment scores. Many residents' observed scores oscillated closely around the expected scores. However, distinct performance patterns did emerge.
CONCLUSIONS: We used regression modeling and leveraged large numbers of historical WBA data points to produce an expected entrustment score that served as a guidepost for performance interpretation.
Affiliation(s)
- Daniel P Schauer
- Department of Internal Medicine, University of Cincinnati, PO Box 670535, Cincinnati, OH, 45267-0535, USA.
- Benjamin Kinnear
- Department of Pediatrics, College of Medicine, University of Cincinnati, Cincinnati, OH, USA
- Matthew Kelleher
- Department of Pediatrics, College of Medicine, University of Cincinnati, Cincinnati, OH, USA
- Dana Sall
- HonorHealth Thompson Peak Medical Center, Scottsdale, AZ, USA
- University of Arizona College of Medicine, Phoenix, AZ, USA
- Daniel J Schumacher
- Department of Pediatrics, College of Medicine, Cincinnati Children's Hospital/University of Cincinnati, Cincinnati, OH, USA
- Eric J Warm
- Department of Internal Medicine, University of Cincinnati, PO Box 670535, Cincinnati, OH, 45267-0535, USA
4. Jeyalingam T, Brydges R, Ginsburg S, McCreath GA, Walsh CM. How Clinical Supervisors Conceptualize Procedural Entrustment: An Interview-Based Study of Entrustment Decision Making in Endoscopic Training. Acad Med 2022;97:586-592. PMID: 34935727; DOI: 10.1097/acm.0000000000004566.
Abstract
PURPOSE: Entrustment is central to assessment in competency-based medical education (CBME). To date, little research has addressed how clinical supervisors conceptualize entrustment, including factors they consider in making entrustment decisions. The aim of this study was to characterize supervisors' decision making related to procedural entrustment, using gastrointestinal endoscopy as a test case.
METHOD: Using methods from constructivist grounded theory, the authors interviewed 29 endoscopy supervisors in the United States and Canada across multiple specialties (adult and pediatric gastroenterology, surgery, and family medicine). Semistructured interviews, conducted between April and November 2019, focused on how supervisors conceptualize procedural entrustment, how they make entrustment decisions, and what factors they consider. Transcripts were analyzed using constant comparison to generate an explanatory framework and themes.
RESULTS: Three themes were identified from the analysis of interview transcripts: (1) entrustment occurs in varying degrees and fluctuates over time; (2) entrustment decisions can transfer within and across procedural and nonprocedural contexts; (3a) persistent static factors (e.g., supervisor competence, institutional culture, legal considerations) influence entrustment decisions, as do (3b) fluctuating, situated dynamic factors (e.g., trainee skills, patient acuity, time constraints), which tend to change from one training encounter to the next.
CONCLUSIONS: In the process of making procedural entrustment decisions, clinical supervisors appear to synthesize multiple dynamic factors against a background of static factors, culminating in a decision of whether to entrust. Entrustment decisions appear to fluctuate over time, and assessors may transfer decisions about specific trainees across settings. Understanding which factors supervisors perceive as influencing their decision making has the potential to inform faculty development, as well as competency committees seeking to aggregate faculty judgments about trainee readiness for unsupervised practice. Those leading CBME programs may wish to invest in optimizing the observed static factors, such that these foundational factors are tuned to facilitate trainee learning and achievement of entrustment.
Affiliation(s)
- Thurarshen Jeyalingam
- T. Jeyalingam is an advanced fellow in luminal therapeutic endoscopy, University of Calgary, Calgary, Alberta, Canada; ORCID: http://orcid.org/0000-0002-7254-9639
- Ryan Brydges
- R. Brydges is a scientist and holds the Professorship in Technology-Enabled Education, St. Michael's Hospital, Unity Health Toronto, and is associate professor, Department of Medicine, University of Toronto, Toronto, Ontario, Canada
- Shiphra Ginsburg
- S. Ginsburg is professor of medicine, Department of Medicine, Sinai Health System and Faculty of Medicine, a scientist, Wilson Centre for Research in Education, and Canada Research Chair in Health Professions Education, University of Toronto, Toronto, Ontario, Canada; ORCID: http://orcid.org/0000-0002-4595-6650
- Graham A McCreath
- G.A. McCreath is clinical research project coordinator, SickKids Research Institute, Hospital for Sick Children, Toronto, Ontario, Canada; ORCID: http://orcid.org/0000-0002-9312-8665
- Catharine M Walsh
- C.M. Walsh is staff gastroenterologist, Division of Gastroenterology, Hepatology, and Nutrition, educational researcher, SickKids Learning Institute, scientist, Child Health Evaluative Sciences, SickKids Research Institute, Hospital for Sick Children, scientist, Wilson Centre for Research in Education, and associate professor of paediatrics, University of Toronto, Toronto, Ontario, Canada; ORCID: http://orcid.org/0000-0003-3928-703X
5. Ryan MS, Khamishon R, Richards A, Perera R, Garber A, Santen SA. A Question of Scale? Generalizability of the Ottawa and Chen Scales to Render Entrustment Decisions for the Core EPAs in the Workplace. Acad Med 2022;97:552-561. PMID: 34074896; DOI: 10.1097/acm.0000000000004189.
Abstract
PURPOSE: Assessments of the Core Entrustable Professional Activities (Core EPAs) are based on observations by supervisors throughout a medical student's progression toward entrustment. The purpose of this study was to compare the generalizability of scores from 2 entrustment scales: the Ottawa Surgical Competency Operating Room Evaluation (Ottawa) scale and an undergraduate medical education supervisory scale proposed by Chen and colleagues (Chen). A secondary aim was to determine the impact of frequent assessors on the generalizability of the data.
METHOD: For academic year 2019-2020, the Virginia Commonwealth University School of Medicine modified a previously described workplace-based assessment (WBA) system developed to provide feedback for the Core EPAs across clerkships. The WBA scored students' performance using both the Ottawa and Chen scales. Generalizability (G) and decision (D) studies were performed using an unbalanced random-effects model to determine the reliability of each scale. Secondary G- and D-studies explored whether faculty who rated more than 5 students demonstrated better reliability. The Phi-coefficient was used to estimate reliability; a cutoff of at least 0.70 was used to conduct D-studies.
RESULTS: Using the Ottawa scale, variability attributable to the student ranged from 0.8% to 6.5%. For the Chen scale, student variability ranged from 1.8% to 7.1%. This indicates the majority of variation was due to the rater (42.8%-61.3%) and other unexplained factors. Between 28 and 127 assessments were required to obtain a Phi-coefficient of 0.70. For 2 EPAs, using faculty who frequently assessed the EPA improved generalizability, requiring only 5 and 13 assessments for the Chen scale.
CONCLUSIONS: Both scales performed poorly in terms of learner-attributed variance, with some improvement in 2 EPAs when considering only frequent assessors using the Chen scale. Based on these findings in conjunction with prior evidence, the authors provide a root cause analysis highlighting challenges with WBAs for the Core EPAs.
Affiliation(s)
- Michael S Ryan
- M.S. Ryan is associate professor and assistant dean for clinical medical education, Department of Pediatrics, Virginia Commonwealth University, Richmond, Virginia; ORCID: https://orcid.org/0000-0003-3266-9289
- Rebecca Khamishon
- R. Khamishon is a fourth-year medical student, Virginia Commonwealth University, Richmond, Virginia
- Alicia Richards
- A. Richards is a graduate student, Department of Biostatistics, Virginia Commonwealth University, Richmond, Virginia
- Robert Perera
- R. Perera is associate professor, Department of Biostatistics, Virginia Commonwealth University, Richmond, Virginia
- Adam Garber
- A. Garber is associate professor, Department of Internal Medicine, Virginia Commonwealth University, Richmond, Virginia; ORCID: https://orcid.org/0000-0002-7296-2896
- Sally A Santen
- S.A. Santen is professor and senior associate dean of assessment, evaluation, and scholarship, Department of Emergency Medicine, Virginia Commonwealth University, Richmond, Virginia; ORCID: https://orcid.org/0000-0002-8327-8002
6. Swanberg M, Woodson-Smith S, Pangaro L, Torre D, Maggio L. Factors and Interactions Influencing Direct Observation: A Literature Review Guided by Activity Theory. Teach Learn Med 2022;34:155-166. PMID: 34238091; DOI: 10.1080/10401334.2021.1931871.
Abstract
Phenomenon: Ensuring that future physicians are competent to practice medicine is necessary for high-quality patient care and safety. The shift toward competency-based education has placed renewed emphasis on direct observation via workplace-based assessments in authentic patient care contexts. Despite this interest and multiple studies focused on improving direct observation, challenges regarding the objectivity of this assessment approach remain underexplored and unresolved.
Approach: We conducted a literature review of direct observation in authentic patient contexts by systematically searching the databases PubMed, Embase, Web of Science, and ERIC. Included studies comprised original research conducted in the patient care context with authentic patients, either as a live encounter or a video recording of an actual encounter, which focused on factors affecting the direct observation of undergraduate medical education (UME) or graduate medical education (GME) trainees. Because the patient care context adds factors that contribute to the cognitive load of the learner and of the clinician-observer, we focused our question on such contexts, which are most useful in judgments about advancement to the next level of training or practice. We excluded articles or published abstracts not conducted in the patient care context (e.g., OSCEs) or those involving simulation, allied health professionals, or non-UME/GME trainees. We also excluded studies focused on end-of-rotation evaluations and in-training evaluation reports. We extracted key data from the studies and used Activity Theory as a lens to identify factors affecting these observations and the interactions between them. Activity Theory provides a framework to understand and analyze complex human activities, the systems in which people work, and the interactions or tensions between multiple associated factors.
Findings: Nineteen articles were included in the analysis; 13 involved GME learners and 6 UME learners. Of the 19, six studies were set in the operating room and four in the emergency department. Using Activity Theory, we discovered that while numerous studies focus on rater and tool influences, very few study the impact of social elements: the rules that govern how the activity happens, the environment and members of the community involved in the activity, and how completion of the activity is divided among the members of the community.
Insights: Viewing direct observation via workplace-based assessment through the lens of Activity Theory may enable educators to implement curricular changes that improve direct observation. Activity Theory may also allow researchers to design studies focused on the identified underexplored interactions and influences in relation to direct observation.
Affiliation(s)
- Margaret Swanberg
- Department of Neurology, Uniformed Services University, Bethesda, Maryland, USA
- Sarah Woodson-Smith
- Department of Neurology, Naval Medical Center Portsmouth, Portsmouth, Virginia, USA
- Louis Pangaro
- Department of Medicine, Uniformed Services University, Bethesda, Maryland, USA
- Dario Torre
- Department of Medicine, Uniformed Services University, Bethesda, Maryland, USA
- Center for Health Professions Education, Uniformed Services University, Bethesda, Maryland, USA
- Lauren Maggio
- Department of Medicine, Uniformed Services University, Bethesda, Maryland, USA
- Center for Health Professions Education, Uniformed Services University, Bethesda, Maryland, USA
7. Soukoulis V, Martindale J, Bray MJ, Bradley E, Gusic ME. The use of EPA assessments in decision-making: Do supervision ratings correlate with other measures of clinical performance? Med Teach 2021;43:1323-1329. PMID: 34242113; DOI: 10.1080/0142159x.2021.1947480.
Abstract
BACKGROUND: Entrustable professional activities (EPAs) have been introduced as a framework for teaching and assessment in competency-based educational programs. With growing use has come a call to examine the validity of EPA assessments. We sought to explore the correlation of EPA assessments with other clinical performance measures to support the use of supervision ratings in decisions about medical students' curricular progression.
METHODS: Spearman rank coefficients were used to determine the correlation of supervision ratings from EPA assessments with scores on clerkship evaluations and performance on an end-of-clerkship-year Objective Structured Clinical Examination (CPX).
RESULTS: Both overall clinical evaluation items score (rho 0.40; n = 166) and CPX patient encounter domain score (rho 0.31; n = 149) showed significant correlation with students' overall mean EPA supervision rating during the clerkship year. There was significant correlation between the mean supervision rating for EPA assessments of history, exam, note, and oral presentation skills and scores for these skills on clerkship evaluations; less so on the CPX.
CONCLUSIONS: Correlation of EPA supervision ratings with commonly used clinical performance measures offers support for their use in undergraduate medical education. Data supporting the validity of EPA assessments promotes stakeholders' acceptance of their use in summative decisions about students' readiness for increased patient care responsibility.
Affiliation(s)
- Victor Soukoulis
- Division of Cardiovascular Medicine, University of Virginia School of Medicine, Charlottesville, VA, USA
- James Martindale
- Center for Medical Education Research and Scholarly Innovation, University of Virginia School of Medicine, Charlottesville, VA, USA
- Megan J Bray
- Center for Medical Education Research and Scholarly Innovation and Department of Obstetrics and Gynecology, University of Virginia School of Medicine, Charlottesville, VA, USA
- Elizabeth Bradley
- Center for Medical Education Research and Scholarly Innovation, University of Virginia School of Medicine, Charlottesville, VA, USA
- Maryellen E Gusic
- Center for Medical Education Research and Scholarly Innovation and Department of Pediatrics, University of Virginia School of Medicine, Charlottesville, VA, USA
8. Anderson HL, Kurtz J, West DC. Implementation and Use of Workplace-Based Assessment in Clinical Learning Environments: A Scoping Review. Acad Med 2021;96:S164-S174. PMID: 34406132; DOI: 10.1097/acm.0000000000004366.
Abstract
PURPOSE: Workplace-based assessment (WBA) serves a critical role in supporting competency-based medical education (CBME) by providing assessment data to inform competency decisions and support learning. Many WBA systems have been developed, but little is known about how to effectively implement WBA. Filling this gap is important for creating suitable and beneficial assessment processes that support large-scale use of CBME. As a step toward filling this gap, the authors describe what is known about WBA implementation and use to identify knowledge gaps and future directions.
METHOD: The authors used Arksey and O'Malley's 6-stage scoping review framework to conduct the review, including: (1) identifying the research question; (2) identifying relevant studies; (3) study selection; (4) charting the data; (5) collating, summarizing, and reporting the results; and (6) consulting with relevant stakeholders.
RESULTS: In 2019-2020, the authors searched and screened 726 papers for eligibility using defined inclusion and exclusion criteria. One hundred sixty-three met inclusion criteria. The authors identified 5 themes in their analysis: (1) many WBA tools and programs have been implemented, and barriers are common across fields and specialties; (2) theoretical perspectives emphasize the need for data-driven implementation strategies; (3) user perceptions of WBA vary and are often dependent on implementation factors; (4) technology solutions could provide useful tools to support WBA; and (5) many areas of future research and innovation remain.
CONCLUSIONS: Knowledge of WBA as an implemented practice to support CBME remains constrained. To remove these constraints, future research should aim to generate generalizable knowledge on WBA implementation and use, address implementation factors, and investigate remaining knowledge gaps.
Affiliation(s)
- Hannah L Anderson
- H.L. Anderson is research associate, Department of Pediatrics, The Children's Hospital of Philadelphia, Philadelphia, Pennsylvania; ORCID: http://orcid.org/0000-0002-9435-1535
- Joshua Kurtz
- J. Kurtz is a first-year resident, Department of Pediatrics, The Children's Hospital of Philadelphia, Philadelphia, Pennsylvania
- Daniel C West
- D.C. West is professor of pediatrics, The Perelman School of Medicine at the University of Pennsylvania, and associate chair for education and senior director of medical education, Department of Pediatrics, The Children's Hospital of Philadelphia, Philadelphia, Pennsylvania; ORCID: http://orcid.org/0000-0002-0909-4213
9. Gottlieb M, Jordan J, Siegelman JN, Cooney R, Stehman C, Chan TM. Direct Observation Tools in Emergency Medicine: A Systematic Review of the Literature. AEM Educ Train 2021;5:e10519. PMID: 34041428; PMCID: PMC8138102; DOI: 10.1002/aet2.10519.
Abstract
OBJECTIVES: Direct observation is important for assessing the competency of medical learners. Multiple tools have been described in other fields, although the degree of emergency medicine-specific literature is unclear. This review sought to summarize the current literature on direct observation tools in the emergency department (ED) setting.
METHODS: We searched PubMed, Scopus, CINAHL, the Cochrane Central Register of Clinical Trials, the Cochrane Database of Systematic Reviews, ERIC, PsycINFO, and Google Scholar from 2012 to 2020 for publications on direct observation tools in the ED setting. Data were dual extracted into a predefined worksheet, and quality analysis was performed using the Medical Education Research Study Quality Instrument.
RESULTS: We identified 38 publications, comprising 2,977 learners. Fifteen different tools were described. The most commonly assessed tools included the Milestones (nine studies), Observed Structured Clinical Exercises (seven studies), the McMaster Modular Assessment Program (six studies), Queen's Simulation Assessment Test (five studies), and the mini-Clinical Evaluation Exercise (four studies). Most of the studies were performed in a single institution, and there were limited validity or reliability assessments reported.
CONCLUSIONS: The number of publications on direct observation tools for the ED setting has markedly increased. However, there remains a need for stronger internal and external validity data.
Affiliation(s)
- Michael Gottlieb
- Department of Emergency Medicine, Rush University Medical Center, Chicago, IL, USA
- Jaime Jordan
- Department of Emergency Medicine, Ronald Reagan UCLA Medical Center, Los Angeles, CA, USA
- Robert Cooney
- Department of Emergency Medicine, Geisinger Medical Center, Danville, PA, USA
- Teresa M. Chan
- Department of Medicine, Division of Emergency Medicine, McMaster University, Hamilton, Ontario, Canada
10. Fainstad TL, McClintock AH, Yarris LM. Bias in assessment: name, reframe, and check in. Clin Teach 2021;18:449-453. PMID: 33787001; DOI: 10.1111/tct.13351.
Abstract
Cognitive bias permeates almost every learner assessment in medical education. Assessment bias has the potential to affect a learner's education, future career, and sense of self-worth. Decades of data show that there is little educators can do to overcome bias in learner assessments. Using in-group favouritism as an example, we offer an evidence-based, three-step solution for understanding and moving forward with cognitive bias in assessment: (1) Name: a simple admission about the presence of inherent bias in assessment; (2) Reframe: a rephrasing of assessment language to shed light on the assessor's subjectivity; and (3) Check-in: a chance to ensure learner understanding and open lines of bidirectional communication. This process is theory-informed and based on decades of educational, sociological, and psychological literature; we offer it as a logical first step in a much-needed paradigm shift towards addressing bias in learner assessment.
Affiliation(s)
- Tyra L Fainstad
- Department of Medicine, Division of General Internal Medicine, University of Colorado, Aurora, CO, USA
- Adelaide H McClintock
- Department of Medicine, Division of General Internal Medicine, University of Washington, Seattle, WA, USA