1
Cross DA, Weiner J, Olson APJ. Digital supervision in the clinical learning environment: Characterizing teamwork in the electronic health record. J Hosp Med 2024. PMID: 39400492. DOI: 10.1002/jhm.13529.
Abstract
BACKGROUND Attending physicians in academic hospitals work in supervisory team structures with medical residents to provide patient care. How attendings utilize the electronic health record (EHR) to support learning through supervision is not well understood. OBJECTIVE To compare EHR behavior on teaching versus direct care services, including evidence of supervisory calibration to learners. METHODS Cross-sectional analysis of EHR metadata from 1721 shifts of hospital medicine faculty at a large, urban academic medical center, January to June 2022. Measures included total EHR time per shift, EHR time outside shift, and time spent on note-writing, note review/attestation, order entry, and other clinical review. We assessed within-physician differences across these service types and used multilevel modeling to determine whether these behaviors varied with resident physicians' experience, accounting for physician-specific signature behavior patterns. RESULTS Attendings spent substantially less time in the EHR on teaching service than on direct care service (129 vs. 240 min; p < .001) and apportioned their work differently throughout the day. Physicians were less behaviorally consistent and varied more from their peers when on teaching service. Attendings also calibrated their supervision to learners: they logged 12.7% less EHR time when paired with more senior residents than with postgraduate year 2 (PGY2) residents (137 vs. 120 min, p = .002). PGY1 presence was also associated with reduced EHR time, suggesting some delegation of supervision to senior trainees. CONCLUSION EHR behaviors on teaching service are highly variable and differ substantially from direct care; this lack of consistency suggests important opportunities to establish best practices for EHR-based supervision and create an effective clinical learning environment.
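The within-physician comparison described in this abstract can be illustrated with a minimal stdlib sketch: each attending serves as their own control, and mean teaching-service EHR time is subtracted from mean direct-care time. All names and minute values below are hypothetical, and the study's actual multilevel models are not reproduced.

```python
from statistics import mean

# Toy per-shift EHR minutes on direct-care vs teaching service.
# Attending names and values are invented.
shifts = {
    "dr_x": {"direct": [250, 230, 240], "teaching": [140, 120, 130]},
    "dr_y": {"direct": [210, 220],      "teaching": [110, 125]},
    "dr_z": {"direct": [260, 255],      "teaching": [150, 135]},
}

def within_physician_differences(shifts):
    """Mean direct-minus-teaching EHR time, with each attending as their own control."""
    return {doc: mean(s["direct"]) - mean(s["teaching"]) for doc, s in shifts.items()}

diffs = within_physician_differences(shifts)
```

With these toy numbers every attending shows less EHR time on teaching service, mirroring the direction of the study's finding.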
Affiliation(s)
- Dori A Cross
- Division of Health Policy and Management, University of Minnesota School of Public Health, Minneapolis, Minnesota, USA
- Josh Weiner
- Division of Health Policy and Management, University of Minnesota School of Public Health, Minneapolis, Minnesota, USA
- Andrew P J Olson
- Department of Medicine, Division of Hospital Medicine, University of Minnesota Medical School, Minneapolis, Minnesota, USA
- Department of Pediatrics, Division of Pediatric Hospital Medicine, University of Minnesota Medical School, Minneapolis, Minnesota, USA
- Medical Education Outcomes Center, University of Minnesota Medical School, Minneapolis, Minnesota, USA
2
Drake CB, Rhee DW, Panigrahy N, Heery L, Iturrate E, Stern DT, Sartori DJ. Toward precision medical education: Characterizing individual residents' clinical experiences throughout training. J Hosp Med 2024. PMID: 39103985. DOI: 10.1002/jhm.13471.
Abstract
BACKGROUND Despite the central role of experiential learning in residency training, the actual clinical experiences residents participate in are not well characterized. A better understanding of the type, volume, and variation in residents' clinical experiences is essential to support precision medical education strategies. OBJECTIVE We sought to characterize the entirety of the clinical experiences had by individual internal medicine residents throughout their time in training. METHODS We evaluated the clinical experiences of medicine residents (n = 51) who completed training at NYU Grossman School of Medicine's Brooklyn campus between 2020 and 2023. Residents' inpatient and outpatient experiences were identified using notes written, orders placed, and care team sign-ins; principal ICD-10 codes for each encounter were converted into medical content categories using a previously described crosswalk tool. RESULTS Of 152,426 clinical encounters with available ICD-10 codes, 132,284 were mapped to medical content categories (94.5% capture). Residents' clinical experiences were particularly enriched in infectious and cardiovascular disease; most had very little exposure to allergy, dermatology, oncology, or rheumatology. Some trainees saw twice as many cases in a given content area as did others. There was little concordance between actual frequency of clinical experience and expected content frequency on the ABIM certification exam. CONCLUSIONS Individual residents' clinical experiences in training vary widely, both in number and in type. Characterizing these experiences paves the way for exploration of the relationships between clinical exposure and educational outcomes, and for the implementation of precision education strategies that could fill residents' experiential gaps and complement strengths with targeted educational interventions.
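The crosswalk step the authors describe (principal ICD-10 code to medical content category, with a capture rate for mapped encounters) can be sketched with a toy mapping. The category names and the chapter-prefix lookup rule below are illustrative assumptions, not the published crosswalk tool.

```python
from collections import Counter

# Toy crosswalk from ICD-10 chapter prefix to medical content category.
# Real crosswalk tools map individual codes; these entries are illustrative only.
CROSSWALK = {
    "A": "infectious disease",
    "B": "infectious disease",
    "I": "cardiovascular disease",
    "J": "pulmonary disease",
    "L": "dermatology",
}

def categorize(icd10_codes):
    """Map principal ICD-10 codes to content categories; unmapped codes are dropped."""
    mapped = [CROSSWALK.get(code[0]) for code in icd10_codes]
    counts = Counter(c for c in mapped if c is not None)
    capture = sum(counts.values()) / len(icd10_codes) if icd10_codes else 0.0
    return counts, capture

# One resident's principal diagnoses (codes invented); "Z00.0" is unmapped here.
counts, capture = categorize(["A41.9", "I50.9", "I21.4", "J18.9", "Z00.0"])
```

Tallying such counts per resident is what exposes the between-trainee variation the abstract reports.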
Affiliation(s)
- Carolyn B Drake
- Division of Hospital Medicine, Department of Medicine, Internal Medicine Residency Program, NYU Grossman School of Medicine, New York, New York, USA
- David W Rhee
- Leon H. Charney Division of Cardiology, Department of Medicine, NYU Grossman School of Medicine, New York, New York, USA
- Neha Panigrahy
- NYU Grossman School of Medicine, New York, New York, USA
- Lauren Heery
- NYU Grossman School of Medicine, New York, New York, USA
- Eduardo Iturrate
- Division of Hospital Medicine, Department of Medicine, DataCore, Enterprise Research Informatics and Epic Analytics, NYU Grossman School of Medicine, New York, New York, USA
- David T Stern
- Department of Medicine, Education and Faculty Affairs, NYU Grossman School of Medicine, New York, New York, USA
- Margaret Cochran Corbin VA Medical Center, New York, New York, USA
- Daniel J Sartori
- Division of Hospital Medicine, Department of Medicine, Internal Medicine Residency Program, NYU Grossman School of Medicine, New York, New York, USA
3
Lees AF, Beni C, Lee A, Wedgeworth P, Dzara K, Joyner B, Tarczy-Hornoch P, Leu M. Uses of Electronic Health Record Data to Measure the Clinical Learning Environment of Graduate Medical Education Trainees: A Systematic Review. Acad Med 2023; 98:1326-1336. PMID: 37267042. PMCID: PMC10615720. DOI: 10.1097/acm.0000000000005288.
Abstract
PURPOSE This study systematically reviews the uses of electronic health record (EHR) data to measure graduate medical education (GME) trainee competencies. METHOD In January 2022, the authors conducted a systematic review of original research in MEDLINE from database start to December 31, 2021. The authors searched for articles that used the EHR as their data source and in which the individual GME trainee was the unit of observation and/or unit of analysis. The database query was intentionally broad because an initial survey of pertinent articles identified no unifying Medical Subject Heading terms. Articles were coded and clustered by theme and Accreditation Council for Graduate Medical Education (ACGME) core competency. RESULTS The database search yielded 3,540 articles, of which 86 met the study inclusion criteria. Articles clustered into 16 themes, the largest of which were trainee condition experience (17 articles), work patterns (16 articles), and continuity of care (12 articles). Five of the ACGME core competencies were represented (patient care and procedural skills, practice-based learning and improvement, systems-based practice, medical knowledge, and professionalism). In addition, 25 articles assessed the clinical learning environment. CONCLUSIONS This review identified 86 articles that used EHR data to measure individual GME trainee competencies, spanning 16 themes and 6 competencies and revealing marked between-trainee variation. The authors propose a digital learning cycle framework that arranges sequentially the uses of EHR data within the cycle of clinical experiential learning central to GME. Three technical components necessary to unlock the potential of EHR data to improve GME are described: measures, attribution, and visualization. Partnerships between GME programs and informatics departments will be pivotal in realizing this opportunity.
Affiliation(s)
- A Fischer Lees
- A. Fischer Lees is a clinical informatics fellow, Department of Biomedical Informatics and Medical Education, University of Washington School of Medicine, Seattle, Washington
- Catherine Beni
- C. Beni is a general surgery resident, Department of Surgery, University of Washington School of Medicine, Seattle, Washington
- Albert Lee
- A. Lee is a clinical informatics fellow, Department of Biomedical Informatics and Medical Education, University of Washington School of Medicine, Seattle, Washington
- Patrick Wedgeworth
- P. Wedgeworth is a clinical informatics fellow, Department of Biomedical Informatics and Medical Education, University of Washington School of Medicine, Seattle, Washington
- Kristina Dzara
- K. Dzara is assistant dean for educator development, director, Center for Learning and Innovation in Medical Education, and associate professor of medical education, Department of Biomedical Informatics and Medical Education, University of Washington School of Medicine, Seattle, Washington
- Byron Joyner
- B. Joyner is vice dean for graduate medical education and a designated institutional official, Graduate Medical Education, University of Washington School of Medicine, Seattle, Washington
- Peter Tarczy-Hornoch
- P. Tarczy-Hornoch is professor and chair, Department of Biomedical Informatics and Medical Education, and professor, Department of Pediatrics (Neonatology), University of Washington School of Medicine, and adjunct professor, Allen School of Computer Science and Engineering, University of Washington, Seattle, Washington
- Michael Leu
- M. Leu is professor and director, Clinical Informatics Fellowship, Department of Biomedical Informatics and Medical Education, and professor, Department of Pediatrics, University of Washington School of Medicine, Seattle, Washington
4
Mai MV, Muthu N, Carroll B, Costello A, West DC, Dziorny AC. Measuring Training Disruptions Using an Informatics Based Tool. Acad Pediatr 2023; 23:7-11. PMID: 35306187. DOI: 10.1016/j.acap.2022.03.006.
Abstract
OBJECTIVE Training disruptions, such as planned curricular adjustments or unplanned global pandemics, impact residency training in ways that are difficult to quantify. Informatics-based medical education tools can help measure these impacts. We tested the ability of a software platform driven by electronic health record data to quantify anticipated changes in trainee clinical experiences during the COVID-19 pandemic. METHODS We previously developed and validated the Trainee Individualized Learning System (TRAILS) to identify pediatric resident clinical experiences (i.e., shifts, resident provider-patient interactions [rPPIs], and diagnoses). We used TRAILS to perform a year-over-year analysis comparing pediatric residents at a large academic children's hospital during March 15-June 15 in 2018 (Control #1), 2019 (Control #2), and 2020 (Exposure). RESULTS Residents in the exposure cohort had fewer shifts than those in both control cohorts (P < .05). rPPIs decreased an average of 43% across all PGY levels, with interns experiencing a 78% decrease in Continuity Clinic. Patient continuity decreased from 23% to 11%. rPPIs with common clinic and emergency department diagnoses decreased substantially during the exposure period. CONCLUSIONS Informatics tools like TRAILS may help program directors understand the impact of training disruptions on resident clinical experiences and target interventions to learners' needs and development.
Affiliation(s)
- Mark V Mai
- Department of Anesthesiology and Critical Care Medicine, Children's Hospital of Philadelphia (MV Mai), Philadelphia, Pa
- Naveen Muthu
- Department of Pediatrics, Children's Hospital of Philadelphia (N Muthu, B Carroll, A Costello, and DC West), Philadelphia, Pa
- Bryn Carroll
- Department of Pediatrics, Children's Hospital of Philadelphia (N Muthu, B Carroll, A Costello, and DC West), Philadelphia, Pa
- Anna Costello
- Department of Pediatrics, Children's Hospital of Philadelphia (N Muthu, B Carroll, A Costello, and DC West), Philadelphia, Pa
- Daniel C West
- Department of Pediatrics, Children's Hospital of Philadelphia (N Muthu, B Carroll, A Costello, and DC West), Philadelphia, Pa
- Adam C Dziorny
- Departments of Pediatrics & Biomedical Engineering, University of Rochester School of Medicine (AC Dziorny), Rochester, NY
5
Wang MD, Rosner BI, Rosenbluth G. Where Is the Digitally Silent Provider? Development and Validation of a Team-Centered Electronic Health Record Attribution Model for Supervising Residents. Acad Med 2023; 98:62-66. PMID: 36576768. DOI: 10.1097/acm.0000000000004978.
Abstract
PROBLEM Providing trainees with data and benchmarks on their own patient populations is an Accreditation Council for Graduate Medical Education core residency requirement. Leveraging electronic health records (EHRs) for this purpose relies on correctly attributing patients to the trainees responsible for their care. EHR activity logs are useful for attributing interns to inpatients but not for attributing supervising residents, who often have no inpatient EHR usage obligations, and therefore may generate no digital "footprints" on a given patient-day from which to ascertain attribution. APPROACH The authors developed and tested a novel team-centered binary logistic regression model leveraging EHR activity logs from July 1, 2018, to June 30, 2019, for pediatric hospital medicine (PHM) supervising residents at the University of California, San Francisco. Unlike patient-centered models that determine daily attribution according to the trainee generating the greatest relative activity in individual patients' charts, the team-centered approach predicts daily attribution based on the trainee generating EHR activity across the greatest proportion of a team's patients. To assess generalizability, the authors similarly modeled supervising resident attribution in adult hospital medicine (AHM) and orthopedic surgery (OS). OUTCOMES For PHM, AHM, and OS, 1,100, 1,399, and 803 unique patient encounters and 29, 62, and 10 unique supervising residents were included, respectively. Team-centered models outperformed patient-centered models for the 3 specialties, with respective accuracies of 85.4% versus 72.4% (PHM), 88.7% versus 75.4% (AHM), and 69.3% versus 51.6% (OS; P < .001 for all). AHM and PHM models demonstrated relative generalizability to one another while OS did not. NEXT STEPS Validation at other institutions will be essential to understanding the potential for generalizability of this approach. Accurately attributed data are likely to be trusted more by trainees, enabling programs to operationalize feedback for use cases including performance measurement, case mix assessment, and postdischarge opportunities for follow-up learning.
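The contrast between patient-centered and team-centered attribution can be sketched as follows: a patient-centered rule assigns each patient to the trainee with the most chart activity, while a team-centered rule assigns the team-day to the trainee active across the greatest share of the team's patients, which can surface an otherwise "digitally silent" supervisor. The data and names below are hypothetical, and the published model is a logistic regression rather than this rule-based simplification.

```python
from collections import defaultdict

# Hypothetical one-day EHR activity: (trainee, patient, chart actions).
events = [
    ("res_A", "p1", 12), ("res_A", "p2", 3), ("res_A", "p3", 1),
    ("res_B", "p1", 20),
    ("res_C", "p3", 5),
]
team = {"p1", "p2", "p3"}

def patient_centered(events):
    """Assign each patient to the trainee with the most activity in that chart."""
    best = {}
    for trainee, pt, n in events:
        if n > 0 and (pt not in best or n > best[pt][1]):
            best[pt] = (trainee, n)
    return {pt: trainee for pt, (trainee, _) in best.items()}

def team_centered(events, team_patients):
    """Assign the team-day to the trainee active in the largest share of charts."""
    touched = defaultdict(set)
    for trainee, pt, n in events:
        if n > 0:
            touched[trainee].add(pt)
    return max(touched, key=lambda t: len(touched[t]) / len(team_patients))
```

Here res_A never "wins" any single chart, yet touches every patient on the team and so is identified as the supervising resident by the team-centered rule.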
Affiliation(s)
- Michael D Wang
- M.D. Wang is assistant professor, Division of Hospital Medicine, Center for Clinical Informatics and Improvement Research, University of California, San Francisco, San Francisco, California
- Benjamin I Rosner
- B.I. Rosner is associate professor, Division of Hospital Medicine, Center for Clinical Informatics and Improvement Research, University of California, San Francisco, San Francisco, California
- Glenn Rosenbluth
- G. Rosenbluth is professor, Department of Pediatrics, and director of quality and safety programs, Office of Graduate Medical Education, University of California, San Francisco, San Francisco, California
6
Rule A, Melnick ER, Apathy NC. Using event logs to observe interactions with electronic health records: an updated scoping review shows increasing use of vendor-derived measures. J Am Med Inform Assoc 2022; 30:144-154. PMID: 36173361. PMCID: PMC9748581. DOI: 10.1093/jamia/ocac177.
Abstract
OBJECTIVE The aim of this article is to compare the aims, measures, methods, limitations, and scope of studies that employ vendor-derived and investigator-derived measures of electronic health record (EHR) use, and to assess measure consistency across studies. MATERIALS AND METHODS We searched PubMed for articles published between July 2019 and December 2021 that employed measures of EHR use derived from EHR event logs. We coded the aims, measures, methods, limitations, and scope of each article and compared articles employing vendor-derived and investigator-derived measures. RESULTS One hundred and two articles met inclusion criteria; 40 employed vendor-derived measures, 61 employed investigator-derived measures, and 1 employed both. Studies employing vendor-derived measures were more likely than those employing investigator-derived measures to observe EHR use only in ambulatory settings (83% vs 48%, P = .002) and only by physicians or advanced practice providers (100% vs 54% of studies, P < .001). Studies employing vendor-derived measures were also more likely to measure durations of EHR use (P < .001 for 6 different activities), but definitions of measures such as time outside scheduled hours varied widely. Eight articles reported measure validation. The reported limitations of vendor-derived measures included measure transparency and availability for certain clinical settings and roles. DISCUSSION Vendor-derived measures are increasingly used to study EHR use, but only by certain clinical roles. Although poorly validated and variously defined, both vendor- and investigator-derived measures of EHR time are widely reported. CONCLUSION The number of studies using event logs to observe EHR use continues to grow, but with inconsistent measure definitions and significant differences between studies that employ vendor-derived and investigator-derived measures.
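One reason measures vary across the studies reviewed here is that "EHR time" itself must be operationalized from raw event logs. A common, but not standardized, investigator-derived approach credits active time only between events separated by less than some inactivity timeout. The 5-minute timeout and the timestamps below are illustrative assumptions, not a definition endorsed by this review.

```python
from datetime import datetime, timedelta

# Toy event log for one clinician-day (timestamps invented). Active EHR time
# is credited between consecutive events no more than TIMEOUT apart.
events = [
    "2022-03-01 07:55", "2022-03-01 07:58", "2022-03-01 08:02",
    "2022-03-01 12:00", "2022-03-01 19:30", "2022-03-01 19:33",
]
TIMEOUT = timedelta(minutes=5)

def active_minutes(stamps):
    """Sum gaps between consecutive events that fall within the inactivity timeout."""
    ts = sorted(datetime.strptime(s, "%Y-%m-%d %H:%M") for s in stamps)
    total = timedelta()
    for prev, cur in zip(ts, ts[1:]):
        if cur - prev <= TIMEOUT:
            total += cur - prev
    return total.total_seconds() / 60
```

Changing TIMEOUT (or the definition of "outside scheduled hours") changes the resulting measure, which is exactly the cross-study inconsistency the review documents.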
Affiliation(s)
- Adam Rule
- Information School, University of Wisconsin–Madison, Madison, Wisconsin, USA
- Edward R Melnick
- Emergency Medicine, Yale School of Medicine, New Haven, Connecticut, USA
- Biostatistics (Health Informatics), Yale School of Public Health, New Haven, Connecticut, USA
- Nate C Apathy
- MedStar Health National Center for Human Factors in Healthcare, MedStar Health Research Institute, Washington, District of Columbia, USA
- Regenstrief Institute, Indianapolis, Indiana, USA
7
Kahn JM, Minturn JS, Riman KA, Bukowski LA, Davis BS. Characterizing intensive care unit rounding teams using meta-data from the electronic health record. J Crit Care 2022; 72:154143. PMID: 36084377. DOI: 10.1016/j.jcrc.2022.154143.
Abstract
PURPOSE Teamwork is an important determinant of outcomes in the intensive care unit (ICU), yet the nature of individual ICU teams remains poorly understood. We examined whether meta-data in the form of digital signatures in the electronic health record (EHR) could be used to identify and characterize ICU teams. METHODS We analyzed EHR data from 27 ICUs over one year. We linked intensivist physicians, nurses, and respiratory therapists to individual patients based on selected EHR meta-data. We then characterized ICU teams by their members' overall past experience and shared past experience; and used network analysis to characterize ICUs by their network's density and centralization. RESULTS We identified 2327 unique providers and 30,892 unique care teams. Teams varied based on their average team member experience (median and total range: 262.2 shifts, 9.0-706.3) and average shared experience (median and total range: 13.2 shared shifts, 1.0-99.3). ICUs varied based on their network's density (median and total range: 0.12, 0.07-0.23), degree centralization (0.50, 0.35-0.65) and closeness centralization (0.45, 0.11-0.60). In a regression analysis, this variation was only partially explained by readily observable ICU characteristics. CONCLUSIONS EHR meta-data can assist in the characterization of ICU teams, potentially providing novel insight into strategies to measure and improve team function in critical care.
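The network measures this abstract uses to characterize ICUs can be computed directly from co-staffing edges. Below is a stdlib sketch of density and Freeman degree centralization on a hypothetical five-provider network; closeness centralization is omitted for brevity, and the provider names are invented.

```python
# Hypothetical co-staffing network for one ICU: an edge joins two providers
# who shared at least one shift.
nodes = {"a", "b", "c", "d", "e"}
edges = {("a", "b"), ("a", "c"), ("a", "d"), ("a", "e"), ("b", "c")}

def density(nodes, edges):
    """Share of possible provider pairs that actually worked together."""
    n = len(nodes)
    return 2 * len(edges) / (n * (n - 1))

def degree_centralization(nodes, edges):
    """Freeman degree centralization: 0 (even collaboration) to 1 (star-like)."""
    deg = {v: 0 for v in nodes}
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    n = len(nodes)
    dmax = max(deg.values())
    return sum(dmax - d for d in deg.values()) / ((n - 1) * (n - 2))
```

In this toy network provider "a" works with everyone, so centralization is high even though only half of the possible pairs ever co-staff.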
Affiliation(s)
- Jeremy M Kahn
- CRISMA Center, Department of Critical Care Medicine, University of Pittsburgh School of Medicine, Pittsburgh, PA, USA; Department of Health Policy & Management, University of Pittsburgh School of Public Health, Pittsburgh, PA, USA
- John S Minturn
- CRISMA Center, Department of Critical Care Medicine, University of Pittsburgh School of Medicine, Pittsburgh, PA, USA
- Kathryn A Riman
- CRISMA Center, Department of Critical Care Medicine, University of Pittsburgh School of Medicine, Pittsburgh, PA, USA
- Leigh A Bukowski
- CRISMA Center, Department of Critical Care Medicine, University of Pittsburgh School of Medicine, Pittsburgh, PA, USA
- Billie S Davis
- CRISMA Center, Department of Critical Care Medicine, University of Pittsburgh School of Medicine, Pittsburgh, PA, USA
8
Sundaresan J, Ferrell ST, Hron JD. A Model for Work Intensity in a Pediatric Training Program. J Grad Med Educ 2022; 14:714-718. PMID: 36591429. PMCID: PMC9765907. DOI: 10.4300/jgme-d-22-00323.1.
Abstract
BACKGROUND The Accreditation Council for Graduate Medical Education (ACGME) requires residency programs to monitor scheduling, work intensity, and work compression. OBJECTIVE We aimed to create a model for assessing intern work intensity by examining patient and clinical factors in our electronic health systems using multiple linear regression. METHODS We identified measurable factors that may contribute to resident work intensity within our electronic health systems. In the spring of 2021, we surveyed interns on pediatric hospital medicine rotations each weekday over 5 blocks to rank their daily work intensity on a scale from -100 (bored) to +100 (exasperated). We queried our electronic systems to identify patient care activities completed by study participants on days they were surveyed. We used multiple linear regression to identify factors that correlate with subjective scores of work intensity. RESULTS Nineteen unique interns provided 102 survey responses (28.3% response rate) during the study period. The mean work intensity score was 9.82 (SD=44.27). We identified 19 candidate variables for the regression model. The most significantly associated variables from our univariate regression model were text messages (β=0.432, P<.0009, R2=0.105), orders entered (β=0.207, P<.0002, R2=0.128), and consults ordered (β=0.268, P=.022, R2=0.053). Stepwise regression produced a reduced model (R2=0.247) including text messages (β=0.379, P=.002), patient transfers (β=-1.405, P=.15), orders entered (β=0.186, P<.001), and national patients (β=-0.873, P=.035). CONCLUSIONS Our study demonstrates that data extracted from electronic systems can be used to estimate resident work intensity.
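The univariate regression step reported above (e.g., work intensity regressed on text messages) can be sketched with ordinary least squares over toy shift-level data. The numbers below are invented, and the β and R² values from the abstract are not reproduced.

```python
from statistics import mean

# Toy shift-level data: text messages received and self-rated work
# intensity (-100 = bored, +100 = exasperated). All values invented.
msgs      = [5, 12, 20, 8, 30, 15, 25, 2]
intensity = [-20, 5, 30, -5, 55, 10, 40, -30]

def ols_slope_intercept(x, y):
    """Ordinary least squares fit of y = a + b*x (single predictor)."""
    xb, yb = mean(x), mean(y)
    b = (sum((xi - xb) * (yi - yb) for xi, yi in zip(x, y))
         / sum((xi - xb) ** 2 for xi in x))
    return yb - b * xb, b

def r_squared(x, y):
    """Proportion of variance in y explained by the univariate fit."""
    a, b = ols_slope_intercept(x, y)
    yb = mean(y)
    ss_res = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - yb) ** 2 for yi in y)
    return 1 - ss_res / ss_tot

intercept, slope = ols_slope_intercept(msgs, intensity)
```

A positive slope here corresponds to the abstract's finding that more text messages are associated with higher perceived work intensity; the stepwise multivariable model is a direct extension of this fit.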
Affiliation(s)
- Janani Sundaresan
- Janani Sundaresan, MD, MSc, is a PGY-3 Resident, Boston Combined Residency Program, Department of Pediatrics, Boston Children's Hospital and Boston Medical Center
- Sebastian Ty Ferrell
- Sebastian Ty Ferrell, BA, is a former Graduate Medical Education Data Analyst, Department of Medical Education, Boston Children's Hospital
- Jonathan D. Hron
- Jonathan D. Hron, MD, is a Pediatric Hospitalist, Division of General Pediatrics, Boston Children's Hospital, and Assistant Professor of Pediatrics, Harvard Medical School
9
Yarahuan JKW, Lo HY, Bass L, Wright J, Hess LM. Design, Usability, and Acceptability of a Needs-Based, Automated Dashboard to Provide Individualized Patient-Care Data to Pediatric Residents. Appl Clin Inform 2022; 13:380-390. PMID: 35294985. PMCID: PMC8926457. DOI: 10.1055/s-0042-1744388.
Abstract
BACKGROUND AND OBJECTIVES Pediatric residency programs are required by the Accreditation Council for Graduate Medical Education to provide residents with patient-care and quality metrics to facilitate self-identification of knowledge gaps to prioritize improvement efforts. Trainees are interested in receiving this data, but this is a largely unmet need. Our objectives were to (1) design and implement an automated dashboard providing individualized data to residents, and (2) examine the usability and acceptability of the dashboard among pediatric residents. METHODS We developed a dashboard containing individualized patient-care data for pediatric residents with emphasis on needs identified by residents and residency leadership. To build the dashboard, we created a connection from a clinical data warehouse to data visualization software. We allocated patients to residents based on note authorship and created individualized reports with masked identities that preserved anonymity. After development, we conducted usability and acceptability testing with 11 resident users utilizing a mixed-methods approach. We conducted interviews and anonymous surveys which evaluated technical features of the application, ease of use, as well as users' attitudes toward using the dashboard. Categories and subcategories from usability interviews were identified using a content analysis approach. RESULTS Our dashboard provides individualized metrics including diagnosis exposure counts, procedure counts, efficiency metrics, and quality metrics. In content analysis of the usability testing interviews, the most frequently mentioned use of the dashboard was to aid a resident's self-directed learning. Residents had few concerns about the dashboard overall. Surveyed residents found the dashboard easy to use and expressed intention to use the dashboard in the future. CONCLUSION Automated dashboards may be a solution to the current challenge of providing trainees with individualized patient-care data. Our usability testing revealed that residents found our dashboard to be useful and that they intended to use this tool to facilitate development of self-directed learning plans.
Affiliation(s)
- Julia K W Yarahuan
- Division of Pediatric Hospital Medicine, Department of Pediatrics, Boston Children's Hospital, Boston, Massachusetts, United States
- Huay-Ying Lo
- Section of Pediatric Hospital Medicine, Department of Pediatrics, Baylor College of Medicine/Texas Children's Hospital, Houston, Texas, United States
- Lanessa Bass
- Section of Pediatric Hospital Medicine, Department of Pediatrics, Baylor College of Medicine/Texas Children's Hospital, Houston, Texas, United States
- Jeff Wright
- Information Services, Texas Children's Hospital, Houston, Texas, United States
- Lauren M Hess
- Section of Pediatric Hospital Medicine, Department of Pediatrics, Baylor College of Medicine/Texas Children's Hospital, Houston, Texas, United States
10
Zhang X, Yan C, Malin BA, Patel MB, Chen Y. Predicting next-day discharge via electronic health record access logs. J Am Med Inform Assoc 2021; 28:2670-2680. PMID: 34592753. DOI: 10.1093/jamia/ocab211.
Abstract
OBJECTIVE Hospital capacity management depends on accurate real-time estimates of hospital-wide discharges. Estimation by a clinician requires an excessively large amount of effort and, even when attempted, accuracy in forecasting next-day patient-level discharge is poor. This study aims to support next-day discharge predictions with machine learning by incorporating electronic health record (EHR) audit log data, a resource that captures EHR users' granular interactions with patients' records by communicating various semantics and has been neglected in outcome predictions. MATERIALS AND METHODS This study focused on the EHR data for all adults admitted to Vanderbilt University Medical Center in 2019. We learned multiple advanced models to assess the value that EHR audit log data adds to the daily prediction of discharge likelihood within 24 h and to compare different representation strategies. We applied Shapley additive explanations to identify the most influential types of user-EHR interactions for discharge prediction. RESULTS The data include 26 283 inpatient stays, 133 398 patient-day observations, and 819 types of user-EHR interactions. The model using the count of each type of interaction in the recent 24 h and other commonly used features, including demographics and admission diagnoses, achieved the highest area under the receiver operating characteristics (AUROC) curve of 0.921 (95% CI: 0.919-0.923). By contrast, the model lacking user-EHR interactions achieved a worse AUROC of 0.862 (0.860-0.865). In addition, 10 of the 20 (50%) most influential factors were user-EHR interaction features. CONCLUSION EHR audit log data contain rich information such that it can improve hospital-wide discharge predictions.
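The AUROC metric this abstract reports can be computed from scratch via the rank (Mann-Whitney U) formulation. The labels and scores below are toy values standing in for audit-log-derived predictions; ties in scores are ignored for simplicity, and nothing here reproduces the study's models.

```python
def auroc(labels, scores):
    """Area under the ROC curve via the rank (Mann-Whitney U) formulation.
    Assumes no tied scores; labels are 0/1."""
    ranked = sorted(zip(scores, labels))
    pos = sum(labels)
    neg = len(labels) - pos
    # Sum of 1-indexed ranks of the positive examples.
    rank_sum = sum(i + 1 for i, (_, y) in enumerate(ranked) if y == 1)
    return (rank_sum - pos * (pos + 1) / 2) / (pos * neg)

# Toy patient-days: label = discharged within 24 h; score = predicted
# probability from hypothetical user-EHR interaction-count features.
labels = [1, 0, 1, 0, 0, 1]
scores = [0.9, 0.2, 0.7, 0.4, 0.1, 0.8]
```

With these invented scores the positives all outrank the negatives, so the AUROC is 1.0; the study's reported 0.921 vs 0.862 contrast is the same statistic computed with and without audit-log features.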
Affiliation(s)
- Xinmeng Zhang
- Department of Computer Science, Vanderbilt University, Nashville, Tennessee, USA
- Chao Yan
- Department of Computer Science, Vanderbilt University, Nashville, Tennessee, USA
- Bradley A Malin
- Department of Computer Science, Vanderbilt University, Nashville, Tennessee, USA; Department of Biomedical Informatics, Vanderbilt University Medical Center, Nashville, Tennessee, USA; Department of Biostatistics, Vanderbilt University Medical Center, Nashville, Tennessee, USA
- Mayur B Patel
- Section of Surgical Sciences, Departments of Surgery & Neurosurgery, Division of Trauma, Surgical Critical Care, and Emergency General Surgery, Nashville, Tennessee, USA; Geriatric Research and Education Clinical Center, Surgical Services, Veteran Affairs Tennessee Valley Healthcare System, Nashville, Tennessee, USA
- You Chen
- Department of Computer Science, Vanderbilt University, Nashville, Tennessee, USA; Department of Biomedical Informatics, Vanderbilt University Medical Center, Nashville, Tennessee, USA
11
Hong P, Herigon JC, Uptegraft C, Samuel B, Brown DL, Bickel J, Hron JD. Use of clinical data to augment healthcare worker contact tracing during the COVID-19 pandemic. J Am Med Inform Assoc 2021; 29:142-148. [PMID: 34623426 DOI: 10.1093/jamia/ocab231] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Received: 05/13/2021] [Revised: 09/28/2021] [Accepted: 10/06/2021] [Indexed: 11/13/2022]
Abstract
OBJECTIVE This work examined the secondary use of clinical data from the electronic health record (EHR) to screen our healthcare worker (HCW) population for potential exposure to patients with coronavirus disease 2019. MATERIALS AND METHODS We conducted a cross-sectional study at a free-standing, quaternary care pediatric hospital comparing first-degree patient-HCW pairs identified by the hospital's COVID-19 contact tracing team (CTT) to those identified using EHR clinical event data (EHR Report). The primary outcome was the number of patient-HCW pairs detected by each process. RESULTS Among 233 patients with COVID-19, our EHR Report identified 4,116 patient-HCW pairs, including 2,365 (30.0%) of the 7,890 pairs detected by the CTT. The EHR Report also revealed 1,751 pairs not identified by the CTT. The highest number of patient-HCW pairs per patient was detected in the inpatient care venue, and nurses were the most frequently identified HCW role overall. CONCLUSION Automated methods to screen HCWs for potential exposure to patients with COVID-19 using clinical event data from the EHR are likely to improve epidemiologic surveillance by contact tracing programs and represent a viable, readily available strategy that other institutions should consider.
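The study's unit of analysis, distinct first-degree patient-HCW pairs derived from clinical event data, reduces to deduplicating event rows and taking set operations against the manually traced list. A minimal stdlib sketch, with hypothetical IDs and rows (not the study's data):

```python
# Hypothetical EHR clinical-event rows: (patient_id, hcw_id, role, venue).
rows = [
    ("pt1", "rn01", "nurse", "inpatient"),
    ("pt1", "rn01", "nurse", "inpatient"),   # repeat contact, same pair
    ("pt1", "md07", "physician", "inpatient"),
    ("pt2", "rn01", "nurse", "ed"),
]

def first_degree_pairs(rows):
    """Deduplicate clinical events into distinct patient-HCW pairs,
    the unit compared against the contact tracing team's list."""
    return {(pt, hcw) for pt, hcw, _, _ in rows}

pairs = first_degree_pairs(rows)
ctt_pairs = {("pt1", "rn01")}        # hypothetical manually traced pairs
overlap = pairs & ctt_pairs          # detected by both processes
ehr_only = pairs - ctt_pairs         # revealed only by the EHR report
```

The same set arithmetic yields the abstract's counts at scale: the intersection corresponds to the 2,365 pairs both processes found, and the set difference to the 1,751 EHR-only pairs.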
Affiliation(s)
- Peter Hong
- Division of General Pediatrics, Department of Pediatrics, Boston Children's Hospital, Boston, Massachusetts, USA; Department of Pediatrics, Harvard Medical School, Boston, Massachusetts, USA
- Joshua C Herigon
- Division of Pediatric Infectious Diseases, Department of Pediatrics, Children's Mercy Kansas City, Kansas City, Missouri, USA; Department of Pediatrics, University of Missouri-Kansas City School of Medicine, Kansas City, Missouri, USA
- Colby Uptegraft
- Health Informatics Branch, Defense Health Agency, Falls Church, Virginia, USA
- Bassem Samuel
- Information Services Department, Boston Children's Hospital, Boston, Massachusetts, USA
- D Levin Brown
- Information Services Department, Boston Children's Hospital, Boston, Massachusetts, USA
- Jonathan Bickel
- Division of General Pediatrics, Department of Pediatrics, Boston Children's Hospital, Boston, Massachusetts, USA; Department of Pediatrics, Harvard Medical School, Boston, Massachusetts, USA; Information Services Department, Boston Children's Hospital, Boston, Massachusetts, USA; Computational Health Informatics Program, Boston Children's Hospital, Boston, Massachusetts, USA
- Jonathan D Hron
- Division of General Pediatrics, Department of Pediatrics, Boston Children's Hospital, Boston, Massachusetts, USA; Department of Pediatrics, Harvard Medical School, Boston, Massachusetts, USA
12
Holmgren AJ, Lindeman B, Ford EW. Resident Physician Experience and Duration of Electronic Health Record Use. Appl Clin Inform 2021; 12:721-728. [PMID: 34348409 DOI: 10.1055/s-0041-1732403] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Indexed: 10/20/2022]
Abstract
BACKGROUND Electronic health records (EHRs) demand a significant amount of physician time for documentation, orders, and communication during care delivery. Resident physicians already work long hours as they gain experience and develop both clinical and socio-technical skills. OBJECTIVES To measure how much time resident physicians spend in the EHR during clinic hours and after-hours, and how EHR usage changes as they gain experience over a 12-month period. METHODS Longitudinal descriptive study of 622 resident physicians across postgraduate-year cohorts (65.6% of the institution's 948 residents) working in an ambulatory setting from July 2017 to June 2018. Time spent in the EHR per patient, patient records documented per day, and the proportion of EHR time spent after-hours were the outcomes; the number of months of ambulatory care experience was the predictor. RESULTS Resident physicians spent an average of 45.6 minutes in the EHR per patient, with 13.5% of that time spent after-hours. Over 12 months of ambulatory experience, resident physicians reduced their EHR time per patient and saw more patients per day, but the proportion of EHR time spent after-hours did not change. CONCLUSION Resident physicians spend a significant amount of time working in the EHR, both during and after clinic hours. While residents become more efficient, reducing EHR time per patient, they do not reduce the proportion of EHR time spent after-hours. Concerns over the impact of EHRs on physician well-being should include recognition of the burden of EHR usage on early-career physicians.
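The study's two headline outcome measures, EHR minutes per patient and the after-hours share of EHR time, are simple ratios over per-session usage logs. A minimal stdlib sketch, with hypothetical session tuples and field layout (not the study's data model):

```python
# Hypothetical per-clinic-day EHR usage rows:
# (resident_id, ehr_minutes, patients_seen, after_hours_minutes).
sessions = [
    ("r1", 320, 7, 45),
    ("r1", 280, 8, 30),
]

def ehr_metrics(sessions, resident_id):
    """Return (EHR minutes per patient, after-hours fraction of EHR
    time) for one resident: the study's outcome measures."""
    rows = [s for s in sessions if s[0] == resident_id]
    total = sum(minutes for _, minutes, _, _ in rows)
    patients = sum(seen for _, _, seen, _ in rows)
    after = sum(ah for _, _, _, ah in rows)
    return total / patients, after / total
```

Tracking these two ratios month by month is what lets the study separate an efficiency gain (falling minutes per patient) from an unchanged after-hours burden.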
Affiliation(s)
- A Jay Holmgren
- Center for Clinical Informatics and Improvement Research, School of Medicine, University of California, California, United States
- Brenessa Lindeman
- Department of Surgery, School of Medicine, The University of Alabama at Birmingham, Birmingham, Alabama, United States
- Eric W Ford
- School of Public Health, The University of Alabama at Birmingham, Birmingham, Alabama, United States
13
Ende HB, Richardson MG, Lopez BM, Wanderer JP. Improving ACGME Compliance for Obstetric Anesthesiology Fellows Using an Automated Email Notification System. Appl Clin Inform 2021; 12:479-483. [PMID: 34041735 DOI: 10.1055/s-0041-1730323] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 10/21/2022]
Abstract
BACKGROUND The Accreditation Council for Graduate Medical Education (ACGME) establishes minimum case requirements for trainees. In the subspecialty of obstetric anesthesiology, the requirement for fellow participation in nonobstetric antenatal procedures poses a particular challenge because these cases occur physically remote from labor and delivery and are frequently scheduled at the last minute. OBJECTIVES In response to this challenge, we implemented an informatics-based notification system with the aim of increasing fellow participation in nonobstetric antenatal surgeries. METHODS In December 2014, an automated email notification system was initiated to inform obstetric anesthesiology fellows of scheduled nonobstetric surgeries in pregnant patients. Cases were identified via a daily automated query of the preoperative evaluation database for structured documentation of current pregnancy. Information on flagged cases, including patient medical record number, operating room location, and procedure date and time, was communicated to fellows via a daily automated email. Median fellow participation in nonobstetric antenatal procedures per quarter before and after implementation was compared using an exact Wilcoxon-Mann-Whitney test because of low baseline absolute counts. The fraction of fellows' antenatal cases that were nonobstetric procedures before and after implementation was compared using Fisher's exact test. RESULTS The number of nonobstetric antenatal cases logged by fellows per quarter increased significantly following implementation, from a median of 0 (IQR 0-1) to 3 (IQR 1-6) cases per quarter (p = 0.007). Additionally, nonobstetric antenatal cases as a percentage of all antenatal cases completed by fellows increased from 14% in the preimplementation years to 52% in the postimplementation years (p < 0.001). CONCLUSION Through an automated email system identifying nonobstetric antenatal procedures in pregnant patients, we increased the number of these cases completed by fellows during the 3 years following implementation.
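Fisher's exact test, used above to compare the pre- and postimplementation case fractions, is just a hypergeometric tail probability over a 2x2 table, computable with stdlib combinatorics. The one-sided version is sketched below; the case counts in the usage example are hypothetical, since the abstract reports only percentages:

```python
from math import comb

def fisher_one_sided(a, b, c, d):
    """One-sided Fisher's exact p-value for the 2x2 table
    [[a, b], [c, d]]: the probability, with margins fixed, of a table
    at least as extreme (cell a or larger) under the hypergeometric
    null of no association."""
    n = a + b + c + d
    row1, col1 = a + b, a + c
    p = 0.0
    for k in range(a, min(row1, col1) + 1):
        # P(X = k) for a hypergeometric draw with these margins.
        p += comb(row1, k) * comb(n - row1, col1 - k) / comb(n, col1)
    return p

# Hypothetical counts (not reported in the abstract): 26 of 50
# post-implementation antenatal cases were nonobstetric vs 2 of 14 pre.
p_value = fisher_one_sided(26, 24, 2, 12)
```

The published analysis reports a two-sided test; two-sided Fisher p-values sum all tables with point probability no larger than the observed one, so a library routine such as SciPy's `fisher_exact` would typically be used in practice.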
Affiliation(s)
- Holly B Ende
- Department of Anesthesiology, Vanderbilt University Medical Center, Nashville, Tennessee, United States
- Michael G Richardson
- Department of Anesthesiology, Vanderbilt University Medical Center, Nashville, Tennessee, United States
- Brandon M Lopez
- Department of Anesthesiology, Vanderbilt University Medical Center, Nashville, Tennessee, United States
- Jonathan P Wanderer
- Department of Anesthesiology, Vanderbilt University Medical Center, Nashville, Tennessee, United States; Department of Biomedical Informatics, Vanderbilt University Medical Center, Nashville, Tennessee, United States