1
|
Rule A, Melnick ER, Apathy NC. Using event logs to observe interactions with electronic health records: an updated scoping review shows increasing use of vendor-derived measures. J Am Med Inform Assoc 2022; 30:144-154. [PMID: 36173361] [PMCID: PMC9748581] [DOI: 10.1093/jamia/ocac177] [Received: 06/24/2022] [Revised: 09/15/2022] [Accepted: 09/19/2022] Open Access
Abstract
OBJECTIVE The aim of this article is to compare the aims, measures, methods, limitations, and scope of studies that employ vendor-derived and investigator-derived measures of electronic health record (EHR) use, and to assess measure consistency across studies. MATERIALS AND METHODS We searched PubMed for articles published between July 2019 and December 2021 that employed measures of EHR use derived from EHR event logs. We coded the aims, measures, methods, limitations, and scope of each article and compared articles employing vendor-derived and investigator-derived measures. RESULTS One hundred and two articles met inclusion criteria; 40 employed vendor-derived measures, 61 employed investigator-derived measures, and 1 employed both. Studies employing vendor-derived measures were more likely than those employing investigator-derived measures to observe EHR use only in ambulatory settings (83% vs 48%, P = .002) and only by physicians or advanced practice providers (100% vs 54% of studies, P < .001). Studies employing vendor-derived measures were also more likely to measure durations of EHR use (P < .001 for 6 different activities), but definitions of measures such as time outside scheduled hours varied widely. Eight articles reported measure validation. The reported limitations of vendor-derived measures included measure transparency and availability for certain clinical settings and roles. DISCUSSION Vendor-derived measures are increasingly used to study EHR use, but only by certain clinical roles. Although poorly validated and variously defined, both vendor- and investigator-derived measures of EHR time are widely reported. CONCLUSION The number of studies using event logs to observe EHR use continues to grow, but with inconsistent measure definitions and significant differences between studies that employ vendor-derived and investigator-derived measures.
Affiliation(s)
- Adam Rule
- Information School, University of Wisconsin–Madison, Madison, Wisconsin, USA
- Edward R Melnick
- Emergency Medicine, Yale School of Medicine, New Haven, Connecticut, USA
- Biostatistics (Health Informatics), Yale School of Public Health, New Haven, Connecticut, USA
- Nate C Apathy
- MedStar Health National Center for Human Factors in Healthcare, MedStar Health Research Institute, Washington, District of Columbia, USA
- Regenstrief Institute, Indianapolis, Indiana, USA
|
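As a concrete illustration of the investigator-derived timing measures this review surveys, the sketch below estimates active EHR time from raw event-log timestamps. It is a minimal example, not a measure from the paper; in particular, the 5-minute inactivity cutoff is an illustrative assumption, and definitions of "active time" vary widely across the studies reviewed.

```python
from datetime import datetime, timedelta

def active_ehr_time(timestamps, idle_cutoff=timedelta(minutes=5)):
    """Estimate active EHR time from raw event-log timestamps.

    Gaps between consecutive events longer than `idle_cutoff` are treated
    as inactivity and excluded; shorter gaps count as active use. The
    5-minute cutoff is an illustrative assumption, not a standard.
    """
    ts = sorted(timestamps)
    total = timedelta(0)
    for prev, curr in zip(ts, ts[1:]):
        gap = curr - prev
        if gap <= idle_cutoff:
            total += gap
    return total

# Illustrative event log: a burst of activity, a long idle gap,
# then one more burst.
events = [
    datetime(2022, 1, 3, 8, 0),
    datetime(2022, 1, 3, 8, 2),
    datetime(2022, 1, 3, 8, 4),
    datetime(2022, 1, 3, 9, 0),  # 56-minute gap: counted as idle
    datetime(2022, 1, 3, 9, 1),
]
print(active_ehr_time(events))  # 0:05:00
```

Choosing a different cutoff, or attributing each gap to the preceding action, yields different totals, which is one source of the measure inconsistency the review reports.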
2
|
Taxter A, Frenkel M, Witek L, Bundy R, Kirkendall E, Miller D, Dharod A. Design, Implementation, Utilization, and Sustainability of a Fast Healthcare Interoperability Resources-Based Inpatient Rounding List. Appl Clin Inform 2022; 13:180-188. [PMID: 35108740] [PMCID: PMC8810271] [DOI: 10.1055/s-0041-1742219] Open Access
Abstract
OBJECTIVE We designed and implemented an application programming interface (API)-based electronic health record (EHR)-integrated rounding list and evaluated acceptability, clinician satisfaction, information accuracy, and efficiency related to the application. METHODS We developed and integrated an application, employing iterative design techniques with user feedback. EHR and application user action logs, as well as hospital safety reports, were evaluated. Rounding preparation characteristics were obtained through surveys before and after application integration. To evaluate usability, inpatient providers, including residents, fellows, and attendings, were surveyed 2 weeks prior to and 6 months after enterprise-wide EHR application integration. Our primary outcome was provider time savings measured by user action logs; secondary outcomes included provider satisfaction. RESULTS The application was widely adopted by inpatient providers, with more than 69% of all inpatients queried by the application within 6 months of deployment. Application utilization was sustained throughout the study period, with 79% (interquartile range [IQR]: 76, 82) of enterprise-wide unique patients accessed per weekday. EHR action logs showed application users spent 3.24 fewer minutes per day within the EHR than nonusers (95% confidence interval [CI]: -6.8 to 0.33; p = 0.07). Median self-reported chart review time for attendings decreased from 30 minutes (IQR: 15, 60) to 20 minutes (IQR: 10, 45) after application integration (p = 0.04). Self-reported sign-out preparation time decreased by a median of 5 minutes (p < 0.01), and providers were better prepared for hand-offs (p = 0.02). There were no increased safety reports during the study period. CONCLUSION This study demonstrates successful integration of a rounding application within a commercial EHR using APIs. We demonstrated increases in both provider-reported satisfaction and time savings. Rounding lists provided more accurate and timely information for rounds. Application usage was sustained across multiple specialties at 42 months. Other application designers should consider data density, optimization of provider workflows, and real-time data transfer using novel tools when designing an application.
Affiliation(s)
- Alysha Taxter
- Division of Rheumatology, Nationwide Children's Hospital, Columbus, Ohio, United States
- Mark Frenkel
- Department of Neurosurgery, Wake Forest School of Medicine, Winston-Salem, North Carolina, United States
- Lauren Witek
- Department of Internal Medicine, Wake Forest School of Medicine, Winston-Salem, North Carolina, United States
- Richa Bundy
- Department of Internal Medicine, Wake Forest School of Medicine, Winston-Salem, North Carolina, United States
- Eric Kirkendall
- Department of Pediatrics, Wake Forest School of Medicine, Winston-Salem, North Carolina, United States
- Center for Healthcare Innovation, Wake Forest School of Medicine, Winston-Salem, North Carolina, United States
- Department of Pediatrics, University of Cincinnati College of Medicine, Cincinnati, United States
- David Miller
- Department of Internal Medicine, Wake Forest School of Medicine, Winston-Salem, North Carolina, United States
- Center for Healthcare Innovation, Wake Forest School of Medicine, Winston-Salem, North Carolina, United States
- Department of Implementation Science, Wake Forest School of Medicine, Winston-Salem, North Carolina, United States
- Ajay Dharod
- Department of Internal Medicine, Wake Forest School of Medicine, Winston-Salem, North Carolina, United States
- Center for Healthcare Innovation, Wake Forest School of Medicine, Winston-Salem, North Carolina, United States
- Department of Implementation Science, Wake Forest School of Medicine, Winston-Salem, North Carolina, United States
- Address for correspondence: Ajay Dharod, MD, FACP, Department of Internal Medicine, 1 Medical Center Boulevard, Winston-Salem, NC 27157, United States
|
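To make the FHIR-based integration described above concrete, here is a minimal sketch of flattening a FHIR R4 searchset Bundle of Patient resources into rounding-list rows. The bundle is hand-written illustrative data following the R4 resource shapes, not output from the study's application or any real server.

```python
# Illustrative FHIR R4 searchset Bundle containing one Patient resource.
# This is hand-written sample data, not a response from a real FHIR server.
bundle = {
    "resourceType": "Bundle",
    "type": "searchset",
    "entry": [
        {"resource": {
            "resourceType": "Patient",
            "id": "example-1",
            "name": [{"family": "Doe", "given": ["Jane"]}],
        }},
    ],
}

def rounding_rows(bundle):
    """Flatten Patient entries in a Bundle into (id, display name) rows."""
    rows = []
    for entry in bundle.get("entry", []):
        resource = entry.get("resource", {})
        if resource.get("resourceType") != "Patient":
            continue
        name = resource.get("name", [{}])[0]
        display = " ".join(name.get("given", []) + [name.get("family", "")]).strip()
        rows.append((resource.get("id"), display))
    return rows

print(rounding_rows(bundle))  # [('example-1', 'Jane Doe')]
```

A production rounding list would page through Bundle `link` entries and join in Observation and MedicationRequest resources per patient; the flattening step, however, looks much like this.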
3
|
Moy AJ, Schwartz JM, Chen R, Sadri S, Lucas E, Cato KD, Rossetti SC. Measurement of clinical documentation burden among physicians and nurses using electronic health records: a scoping review. J Am Med Inform Assoc 2021; 28:998-1008. [PMID: 33434273] [PMCID: PMC8068426] [DOI: 10.1093/jamia/ocaa325] [Received: 06/26/2020] [Accepted: 12/04/2020] Open Access
Abstract
BACKGROUND Electronic health records (EHRs) are linked with documentation burden resulting in clinician burnout. While clear classifications and validated measures of burnout exist, documentation burden remains ill-defined and inconsistently measured. OBJECTIVE We aim to conduct a scoping review focused on identifying approaches to documentation burden measurement and their characteristics. MATERIALS AND METHODS Based on Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) Extension for Scoping Reviews (ScR) guidelines, we conducted a scoping review assessing MEDLINE, Embase, Web of Science, and CINAHL from inception to April 2020 for studies investigating documentation burden among physicians and nurses in ambulatory or inpatient settings. Two reviewers evaluated each potentially relevant study against inclusion/exclusion criteria. RESULTS Of the 3482 articles retrieved, 35 studies met inclusion criteria. We identified 15 measurement characteristics, including 7 effort constructs (EHR usage and workload, clinical documentation/review, EHR work after hours and remotely, administrative tasks, cognitively cumbersome work, fragmentation of workflow, and patient interaction), 4 time constructs (average time, proportion of time, timeliness of completion, and activity rate), and 11 units of analysis. Only 45.0% of studies assessed the impact of EHRs on clinicians and/or patients, and 40.0% mentioned clinician burnout. DISCUSSION Standard and validated measures of documentation burden are lacking. While time and effort were the core concepts measured, there appears to be no consensus on the best approach or degree of rigor for studying documentation burden. CONCLUSION Further research is needed to reliably operationalize the concept of documentation burden, explore best practices for measurement, and standardize its use.
Affiliation(s)
- Amanda J Moy
- Department of Biomedical Informatics, Columbia University, New York, New York, USA
- RuiJun Chen
- Department of Biomedical Informatics, Columbia University, New York, New York, USA
- Department of Translational Data Science and Informatics, Geisinger, Danville, Pennsylvania, USA
- Shirin Sadri
- Vagelos School of Physicians and Surgeons, Columbia University, New York, New York, USA
- Eugene Lucas
- Department of Biomedical Informatics, Columbia University, New York, New York, USA
- Department of Medicine, Weill Cornell Medical College, New York, New York, USA
- Kenrick D Cato
- School of Nursing, Columbia University, New York, New York, USA
- Sarah Collins Rossetti
- Department of Biomedical Informatics, Columbia University, New York, New York, USA
- School of Nursing, Columbia University, New York, New York, USA
|
4
|
Coleman C, Gotz D, Eaker S, James E, Bice T, Carson S, Khairat S. Analysing EHR navigation patterns and digital workflows among physicians during ICU pre-rounds. Health Inf Manag 2020; 50:107-117. [PMID: 32476474] [PMCID: PMC8435833] [DOI: 10.1177/1833358320920589]
Abstract
Background: Some physicians in intensive care units (ICUs) report that electronic health records (EHRs) can be cumbersome and disruptive to workflow. There are significant gaps in our understanding of the physician–EHR interaction. Objective: To better understand how clinicians use the EHR for chart review during ICU pre-rounds through the characterisation and description of screen navigation pathways and workflow patterns. Method: We conducted a live, direct observational study of six physician trainees performing electronic chart review during daily pre-rounds in the 30-bed medical ICU at a large academic medical centre in the Southeastern United States. A tailored checklist was used by observers for data collection. Results: We observed 52 distinct live patient chart review encounters, capturing a total of 2.7 hours of pre-rounding chart review activity by six individual physicians. Physicians reviewed an average of 8.7 patients (range = 5–12), spending a mean of 3:05 minutes per patient (range = 1:34–5:18). On average, physicians visited 6.3 (±3.1) total EHR screens per patient (range = 1–16). Four unique screens were viewed most commonly, accounting for over half (52.7%) of all screen visits: results review (17.9%), summary/overview (13.0%), flowsheet (12.7%), and the chart review tab (9.1%). Navigation pathways were highly variable, but several common screen transition patterns emerged across users. Average interrater reliability for the paired EHR observation was 80.0%. Conclusion: We observed the physician–EHR interaction during ICU pre-rounds to be brief and highly focused. Although we observed a high degree of “information sprawl” in physicians’ digital navigation, we also identified common launch points for electronic chart review, key high-traffic screens and common screen transition patterns. Implications: From the study findings, we suggest recommendations towards improved EHR design.
Affiliation(s)
- David Gotz
- University of North Carolina at Chapel Hill, USA
- Elaine James
- University of North Carolina at Chapel Hill, USA
- Thomas Bice
- University of North Carolina at Chapel Hill, USA
- Saif Khairat
- University of North Carolina at Chapel Hill, USA
|
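The screen-transition analysis described above can be sketched as a frequency count over adjacent pairs in each observed navigation path. The screen names below are illustrative stand-ins, not the study's actual EHR screen labels.

```python
from collections import Counter

def transition_counts(screens):
    """Count adjacent screen-to-screen transitions in one navigation path."""
    return Counter(zip(screens, screens[1:]))

# Illustrative navigation path for a single patient chart review
path = ["results", "flowsheet", "results", "flowsheet", "notes"]
print(transition_counts(path).most_common(1))  # [(('results', 'flowsheet'), 2)]
```

Summing such counters across all observed encounters surfaces the high-traffic screens and common transition patterns the study reports.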