1
Lees AF, Beni C, Lee A, Wedgeworth P, Dzara K, Joyner B, Tarczy-Hornoch P, Leu M. Uses of Electronic Health Record Data to Measure the Clinical Learning Environment of Graduate Medical Education Trainees: A Systematic Review. Acad Med 2023; 98:1326-1336. [PMID: 37267042] [PMCID: PMC10615720] [DOI: 10.1097/acm.0000000000005288]
Abstract
PURPOSE This study systematically reviews the uses of electronic health record (EHR) data to measure graduate medical education (GME) trainee competencies. METHOD In January 2022, the authors conducted a systematic review of original research in MEDLINE from database start to December 31, 2021. The authors searched for articles that used the EHR as their data source and in which the individual GME trainee was the unit of observation and/or unit of analysis. The database query was intentionally broad because an initial survey of pertinent articles identified no unifying Medical Subject Heading terms. Articles were coded and clustered by theme and Accreditation Council for Graduate Medical Education (ACGME) core competency. RESULTS The database search yielded 3,540 articles, of which 86 met the study inclusion criteria. Articles clustered into 16 themes, the largest of which were trainee condition experience (17 articles), work patterns (16 articles), and continuity of care (12 articles). Five of the ACGME core competencies were represented (patient care and procedural skills, practice-based learning and improvement, systems-based practice, medical knowledge, and professionalism). In addition, 25 articles assessed the clinical learning environment. CONCLUSIONS This review identified 86 articles that used EHR data to measure individual GME trainee competencies, spanning 16 themes and 6 competencies and revealing marked between-trainee variation. The authors propose a digital learning cycle framework that arranges sequentially the uses of EHR data within the cycle of clinical experiential learning central to GME. Three technical components necessary to unlock the potential of EHR data to improve GME are described: measures, attribution, and visualization. Partnerships between GME programs and informatics departments will be pivotal in realizing this opportunity.
Affiliation(s)
- A Fischer Lees
- A. Fischer Lees is a clinical informatics fellow, Department of Biomedical Informatics and Medical Education, University of Washington School of Medicine, Seattle, Washington
- Catherine Beni
- C. Beni is a general surgery resident, Department of Surgery, University of Washington School of Medicine, Seattle, Washington
- Albert Lee
- A. Lee is a clinical informatics fellow, Department of Biomedical Informatics and Medical Education, University of Washington School of Medicine, Seattle, Washington
- Patrick Wedgeworth
- P. Wedgeworth is a clinical informatics fellow, Department of Biomedical Informatics and Medical Education, University of Washington School of Medicine, Seattle, Washington
- Kristina Dzara
- K. Dzara is assistant dean for educator development, director, Center for Learning and Innovation in Medical Education, and associate professor of medical education, Department of Biomedical Informatics and Medical Education, University of Washington School of Medicine, Seattle, Washington
- Byron Joyner
- B. Joyner is vice dean for graduate medical education and a designated institutional official, Graduate Medical Education, University of Washington School of Medicine, Seattle, Washington
- Peter Tarczy-Hornoch
- P. Tarczy-Hornoch is professor and chair, Department of Biomedical Informatics and Medical Education, and professor, Department of Pediatrics (Neonatology), University of Washington School of Medicine, and adjunct professor, Allen School of Computer Science and Engineering, University of Washington, Seattle, Washington
- Michael Leu
- M. Leu is professor and director, Clinical Informatics Fellowship, Department of Biomedical Informatics and Medical Education, and professor, Department of Pediatrics, University of Washington School of Medicine, Seattle, Washington
2
Helman S, Terry MA, Pellathy T, Hravnak M, George E, Al-Zaiti S, Clermont G. Engaging Multidisciplinary Clinical Users in the Design of an Artificial Intelligence-Powered Graphical User Interface for Intensive Care Unit Instability Decision Support. Appl Clin Inform 2023; 14:789-802. [PMID: 37793618] [PMCID: PMC10550364] [DOI: 10.1055/s-0043-1775565]
Abstract
BACKGROUND Critical instability forecast and treatment can be optimized by artificial intelligence (AI)-enabled clinical decision support. It is important that the user-facing display of AI output facilitates clinical thinking and workflow for all disciplines involved in bedside care. OBJECTIVES Our objective was to engage multidisciplinary users (physicians, nurse practitioners, physician assistants) in the development of a graphical user interface (GUI) to present an AI-derived risk score. METHODS Intensive care unit (ICU) clinicians participated in focus groups seeking input on an instability risk forecast presented in a prototype GUI. Two stratified rounds (three focus groups each: nurses only, providers only, then combined) were moderated by a focus group methodologist. After round 1, GUI design changes were made and presented in round 2. Focus groups were recorded and transcribed, and deidentified transcripts were independently coded by three researchers. Codes were coalesced into emerging themes. RESULTS Twenty-three ICU clinicians participated (11 nurses, 12 medical providers [3 mid-level and 9 physicians]). Six themes emerged: (1) analytics transparency, (2) graphical interpretability, (3) impact on practice, (4) value of trend synthesis of dynamic patient data, (5) decisional weight (weighing AI output during decision-making), and (6) display location (usability, concerns about patient/family GUI view). Nurses emphasized having objective GUI information to support communication and an optimal GUI location, while providers emphasized the need for recommendation interpretability and expressed concern about impairing trainee critical thinking. All disciplines valued synthesized views of vital signs, interventions, and risk trends but were skeptical of placing decisional weight on AI output until proven trustworthy. CONCLUSION Gaining input from all clinical users is important when designing AI-derived GUIs. The results highlight that intelligent decision support technologies in health care need to be transparent about how they work, easy to read and interpret, and minimally disruptive to current workflow, and that their decisional support components should serve as an adjunct to human decision-making.
Affiliation(s)
- Stephanie Helman
- Department of Acute and Tertiary Care Nursing, University of Pittsburgh, Pittsburgh, Pennsylvania, United States
- Martha Ann Terry
- Department of Behavioral and Community Health Sciences, Graduate School of Public Health, University of Pittsburgh, Pittsburgh, Pennsylvania, United States
- Tiffany Pellathy
- Veterans Administration Center for Health Equity Research and Promotion, Pittsburgh, Pennsylvania, United States
- Marilyn Hravnak
- Department of Acute and Tertiary Care Nursing, University of Pittsburgh, Pittsburgh, Pennsylvania, United States
- Elisabeth George
- Department of Nursing, University of Pittsburgh Medical Center, Presbyterian Hospital, Pittsburgh, Pennsylvania, United States
- Salah Al-Zaiti
- Department of Acute and Tertiary Care Nursing, University of Pittsburgh, Pittsburgh, Pennsylvania, United States
- Department of Emergency Medicine, University of Pittsburgh, Pittsburgh, Pennsylvania, United States
- Division of Cardiology, University of Pittsburgh, Pittsburgh, Pennsylvania, United States
- Gilles Clermont
- Department of Critical Care Medicine, University of Pittsburgh, Pittsburgh, Pennsylvania, United States
3
Lim HC, Austin JA, van der Vegt AH, Rahimi AK, Canfell OJ, Mifsud J, Pole JD, Barras MA, Hodgson T, Shrapnel S, Sullivan CM. Toward a Learning Health Care System: A Systematic Review and Evidence-Based Conceptual Framework for Implementation of Clinical Analytics in a Digital Hospital. Appl Clin Inform 2022; 13:339-354. [PMID: 35388447] [PMCID: PMC8986462] [DOI: 10.1055/s-0042-1743243]
Abstract
Objective
A learning health care system (LHS) uses routinely collected data to continuously monitor and improve health care outcomes. Little is reported on the challenges and methods used to implement the analytics underpinning an LHS. Our aim was to systematically review the literature for reports of real-time clinical analytics implementation in digital hospitals and to use these findings to synthesize a conceptual framework for LHS implementation.
Methods
Embase, PubMed, and Web of Science databases were searched for clinical analytics derived from electronic health records in adult inpatient and emergency department settings between 2015 and 2021. Evidence was coded from the final study selection relating to (1) dashboard implementation challenges, (2) methods to overcome implementation challenges, and (3) dashboard assessment and impact. The evidence obtained, together with evidence extracted from relevant prior reviews, was mapped to an existing digital health transformation model to derive a conceptual framework for LHS analytics implementation.
Results
A total of 238 candidate articles were reviewed and 14 met inclusion criteria. From the selected studies, we extracted 37 implementation challenges and 64 methods employed to overcome such challenges. We identified common approaches for evaluating the implementation of clinical dashboards. Six studies assessed clinical process outcomes and only four studies evaluated patient health outcomes. A conceptual framework for implementing the analytics of an LHS was developed.
Conclusion
Health care organizations face diverse challenges when trying to implement real-time data analytics. These challenges have shifted over the past decade. While prior reviews identified fundamental information problems, such as data size and complexity, our review uncovered more postpilot challenges, such as supporting diverse users, workflows, and user-interface screens. Our review identified practical methods to overcome these challenges, which have been incorporated into a conceptual framework. It is hoped this framework will support health care organizations in deploying near-real-time clinical dashboards and progressing toward an LHS.
Affiliation(s)
- Han Chang Lim
- Centre for Health Services Research, Faculty of Medicine, The University of Queensland, Herston, Brisbane, Australia
- Department of Health, eHealth Queensland, Queensland Government, Brisbane, Australia
- Jodie A Austin
- Centre for Health Services Research, Faculty of Medicine, The University of Queensland, Herston, Brisbane, Australia
- Department of Health, eHealth Queensland, Queensland Government, Brisbane, Australia
- Anton H van der Vegt
- Information Engineering Lab, School of Information Technology and Electrical Engineering, The University of Queensland, St Lucia, Brisbane, Australia
- Amir Kamel Rahimi
- Centre for Health Services Research, Faculty of Medicine, The University of Queensland, Herston, Brisbane, Australia
- Digital Health Cooperative Research Centre, Australian Government, Sydney, New South Wales, Australia
- Oliver J Canfell
- Centre for Health Services Research, Faculty of Medicine, The University of Queensland, Herston, Brisbane, Australia
- Digital Health Cooperative Research Centre, Australian Government, Sydney, New South Wales, Australia
- UQ Business School, Faculty of Business, Economics and Law, The University of Queensland, St Lucia, Brisbane, Australia
- Jayden Mifsud
- Centre for Health Services Research, Faculty of Medicine, The University of Queensland, Herston, Brisbane, Australia
- Jason D Pole
- Centre for Health Services Research, Faculty of Medicine, The University of Queensland, Herston, Brisbane, Australia
- Michael A Barras
- School of Pharmacy, Faculty of Health and Behavioural Sciences, The University of Queensland, PACE Precinct, Woolloongabba, Brisbane, Australia
- Pharmacy Department, Princess Alexandra Hospital, Woolloongabba, Brisbane, Australia
- Tobias Hodgson
- UQ Business School, Faculty of Business, Economics and Law, The University of Queensland, St Lucia, Brisbane, Australia
- Sally Shrapnel
- Centre for Health Services Research, Faculty of Medicine, The University of Queensland, Herston, Brisbane, Australia
- School of Mathematics and Physics, Faculty of Science, The University of Queensland, St Lucia, Brisbane, Australia
- Clair M Sullivan
- Centre for Health Services Research, Faculty of Medicine, The University of Queensland, Herston, Brisbane, Australia
- Department of Health, Metro North Hospital and Health Service, Queensland Government, Herston QLD, Australia
4
Petrides AK, Conrad MJ, Terebo T, Melanson SEF. Pandemic Response in the Clinical Laboratory: The Utility of Interactive Dashboards. J Pathol Inform 2022; 13:100010. [PMID: 35186704] [PMCID: PMC8841220] [DOI: 10.1016/j.jpi.2022.100010]
Abstract
The ability to access and analyze data is critical to manage a laboratory and to respond and adapt to changes, particularly during a pandemic. Data analytic tools can not only improve laboratory operations but also increase the visibility of the laboratory in the healthcare system and demonstrate the positive impact of the laboratory on patient care. In this article, we describe the creation and utility of laboratory dashboards. Several dashboards were designed to assist with pandemic response. For each dashboard, a stored procedure was created that performed a SQL query of our laboratory information system mirror database. We utilized the business analytics platform Tableau for data visualization. Users could filter the data by selecting a specific date range, time window, work shift, institution(s), specific test(s), and/or testing platform(s). Access was controlled by OKTA integration to the host server over the web, behind the hospital firewall. During the April 2020 surge, we saw an increase in blood gas testing and a corresponding decrease in non-critical testing such as Vitamin D. At our institution, SARS-CoV-2 molecular testing was performed using five primary platforms: four in-house and one send-out. Weekly and hourly testing volumes as well as turnaround times fluctuated based on reagent availability, new testing requests, staffing, and operational changes. Productivity dashboards indicated that coagulation testing volumes were highest on the third shift and that all three analyzers may not be necessary. Further, specimen volumes and the productivity of accessioning staff varied throughout the day. Phlebotomy venipuncture volumes and patient wait times also varied throughout the pandemic. A decrease in ambulatory draws was seen during the surge, but after reopening, draw volumes, particularly at offsite locations, surpassed prepandemic volumes.
We demonstrate that data analytics and interactive dashboards are powerful tools that are helpful in responding to a pandemic and that lead to improved turnaround time (TAT), supply utilization, staffing, and workflows. Furthermore, dashboards provide objective data to review with hospital leadership and promote collaboration.
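The pipeline this abstract describes (a stored procedure querying a laboratory information system mirror database, with results visualized in Tableau) can be sketched roughly as follows. This is a minimal illustration, not the authors' code: the table and column names (`lab_orders`, `test_code`, `ordered_at`, `resulted_at`) are hypothetical, and an in-memory SQLite database stands in for both the LIS mirror and the BI layer.

```python
import sqlite3

# Hypothetical miniature of an LIS mirror table; schema is illustrative only.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE lab_orders (
        test_code   TEXT,  -- e.g. blood gas (BGAS), vitamin D (VITD)
        ordered_at  TEXT,  -- ISO-8601 timestamps
        resulted_at TEXT
    )
""")
conn.executemany(
    "INSERT INTO lab_orders VALUES (?, ?, ?)",
    [
        ("BGAS", "2020-04-01 08:05", "2020-04-01 08:25"),
        ("BGAS", "2020-04-01 09:10", "2020-04-01 09:40"),
        ("VITD", "2020-04-01 10:00", "2020-04-02 10:00"),
    ],
)

# Per-test volume and mean turnaround time in minutes -- the two quantities
# the article's dashboards track. julianday() differences are in days.
rows = conn.execute("""
    SELECT test_code,
           COUNT(*) AS volume,
           AVG((julianday(resulted_at) - julianday(ordered_at)) * 24 * 60)
               AS mean_tat_min
    FROM lab_orders
    GROUP BY test_code
    ORDER BY test_code
""").fetchall()
for test, volume, tat in rows:
    print(test, volume, round(tat))
```

In the setting the abstract describes, a query like this would run as a stored procedure against the mirror database on a schedule, with the date-range, shift, and platform filters applied interactively in the dashboard layer rather than hard-coded in SQL.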
Affiliation(s)
- Athena K Petrides
- Department of Pathology, Brigham and Women's Hospital, Boston, MA, USA
- Harvard Medical School, Boston, MA, USA
- Michael J Conrad
- Department of Pathology, Brigham and Women's Hospital, Boston, MA, USA
- Tolumofe Terebo
- Department of Pathology, Brigham and Women's Hospital, Boston, MA, USA
- Stacy E F Melanson
- Department of Pathology, Brigham and Women's Hospital, Boston, MA, USA
- Harvard Medical School, Boston, MA, USA
5
Tsang JY, Peek N, Buchan I, van der Veer SN, Brown B. OUP accepted manuscript. J Am Med Inform Assoc 2022; 29:1106-1119. [PMID: 35271724] [PMCID: PMC9093027] [DOI: 10.1093/jamia/ocac031]
Abstract
Objectives (1) Systematically review the literature on computerized audit and feedback (e-A&F) systems in healthcare. (2) Compare features of current systems against e-A&F best practices. (3) Generate hypotheses on how e-A&F systems may impact patient care and outcomes. Methods We searched MEDLINE (Ovid), EMBASE (Ovid), and CINAHL (Ebsco) databases to December 31, 2020. Two reviewers independently performed selection, extraction, and quality appraisal (Mixed Methods Appraisal Tool). System features were compared with 18 best practices derived from Clinical Performance Feedback Intervention Theory. We then used realist concepts to generate hypotheses on mechanisms of e-A&F impact. Results are reported in accordance with the PRISMA statement. Results Our search yielded 4301 unique articles. We included 88 studies evaluating 65 e-A&F systems, spanning a diverse range of clinical areas, including medicine, surgery, and general practice. Systems adopted a median of 8 best practices (interquartile range 6–10), with 32 systems providing near real-time feedback data and 20 systems incorporating action planning. High-confidence hypotheses suggested that effective e-A&F systems prompted specific actions, particularly when enabled by timely and role-specific feedback (including patient lists and individual performance data) and embedded action plans, thereby improving system usage, care quality, and patient outcomes. Conclusions e-A&F systems continue to be developed for many clinical applications. Yet several systems still lack basic features recommended by best practice, such as timely feedback and action planning. Systems should focus on actionability by providing real-time data for feedback that is specific to user roles, with embedded action plans. Protocol Registration PROSPERO CRD42016048695.
Affiliation(s)
- Jung Yin Tsang
- Centre for Primary Care and Health Services Research, University of Manchester, 6th Floor Williamson Building, Oxford Road, Manchester M13 9PL, UK (corresponding author)
- Niels Peek
- Centre for Health Informatics, Division of Informatics, Imaging and Data Science, Faculty of Biology, Medicine and Health, Manchester Academic Health Science Centre, The University of Manchester, Manchester, UK
- NIHR Greater Manchester Patient Safety Translational Research Centre (GMPSTRC), University of Manchester, Manchester, UK
- NIHR Applied Research Collaboration Greater Manchester, University of Manchester, Manchester, UK
- Iain Buchan
- Institute of Population Health, University of Liverpool, Liverpool, UK
- Sabine N van der Veer
- Centre for Health Informatics, Division of Informatics, Imaging and Data Science, Faculty of Biology, Medicine and Health, Manchester Academic Health Science Centre, The University of Manchester, Manchester, UK
- Benjamin Brown
- Centre for Health Informatics, Division of Informatics, Imaging and Data Science, Faculty of Biology, Medicine and Health, Manchester Academic Health Science Centre, The University of Manchester, Manchester, UK
- Centre for Primary Care and Health Services Research, University of Manchester, Manchester, UK
- NIHR Greater Manchester Patient Safety Translational Research Centre (GMPSTRC), University of Manchester, Manchester, UK
6
Effectiveness of an automated feedback with dashboard on use of laboratory tests by neurology residents. Informatics in Medicine Unlocked 2021. [DOI: 10.1016/j.imu.2021.100767]
7
Providers' perceptions on barriers and facilitators to prescribing naloxone for patients at risk for opioid overdose after implementation of a national academic detailing program: A qualitative assessment. Res Social Adm Pharm 2020; 16:1033-1040. [DOI: 10.1016/j.sapharm.2019.10.015]
8
Bersani K, Fuller TE, Garabedian P, Espares J, Mlaver E, Businger A, Chang F, Boxer RB, Schnock KO, Rozenblum R, Dykes PC, Dalal AK, Benneyan JC, Lehmann LS, Gershanik EF, Bates DW, Schnipper JL. Use, Perceived Usability, and Barriers to Implementation of a Patient Safety Dashboard Integrated within a Vendor EHR. Appl Clin Inform 2020; 11:34-45. [PMID: 31940670] [PMCID: PMC6962088] [DOI: 10.1055/s-0039-3402756]
Abstract
BACKGROUND Preventable adverse events continue to be a threat to hospitalized patients. Clinical decision support in the form of dashboards may improve compliance with evidence-based safety practices. However, limited research describes providers' experiences with dashboards integrated into vendor electronic health record (EHR) systems. OBJECTIVE This study aimed to describe providers' use and perceived usability of the Patient Safety Dashboard and to discuss barriers and facilitators to implementation. METHODS The Patient Safety Dashboard was implemented in a cluster-randomized stepped wedge trial on 12 units in neurology, oncology, and general medicine services over an 18-month period. Use of the Dashboard was tracked during the implementation period and analyzed in depth for two 1-week periods to gather a detailed representation of use. Providers' perceptions of tool usability were measured using the Health Information Technology Usability Evaluation Scale (rated 1-5). Research assistants conducted field observations throughout the study to describe use and provide insight into tool adoption. RESULTS The Dashboard was used on 70% of the days it was available, with use varying by role, service, and time of day. On general medicine units, nurses logged in throughout the day, with many logins occurring during morning rounds when they were not rounding with the care team. Prescribers typically logged in before and after morning rounds. On neurology units, physician assistants accounted for most logins, accessing the Dashboard during daily brief interdisciplinary rounding sessions. Use on oncology units was rare. Satisfaction with the tool was highest for perceived ease of use, with attendings giving the highest rating (4.23). The overall lowest rating was for quality of work life, with nurses rating the tool lowest (2.88).
CONCLUSION This mixed methods analysis provides insight into the use and usability of a dashboard tool integrated within a vendor EHR and can guide future improvements and more successful implementation of these types of tools.
Affiliation(s)
- Kerrin Bersani
- Division of General Internal Medicine and Primary Care, Brigham and Women's Hospital, Boston, Massachusetts, United States
- Theresa E. Fuller
- Division of General Internal Medicine and Primary Care, Brigham and Women's Hospital, Boston, Massachusetts, United States
- Jenzel Espares
- Division of General Internal Medicine and Primary Care, Brigham and Women's Hospital, Boston, Massachusetts, United States
- Eli Mlaver
- Division of General Internal Medicine and Primary Care, Brigham and Women's Hospital, Boston, Massachusetts, United States
- Alexandra Businger
- Division of General Internal Medicine and Primary Care, Brigham and Women's Hospital, Boston, Massachusetts, United States
- Frank Chang
- Partners Healthcare, Somerville, Massachusetts, United States
- Robert B. Boxer
- Division of General Internal Medicine and Primary Care, Brigham and Women's Hospital, Boston, Massachusetts, United States
- Harvard Medical School, Boston, Massachusetts, United States
- Kumiko O. Schnock
- Division of General Internal Medicine and Primary Care, Brigham and Women's Hospital, Boston, Massachusetts, United States
- Harvard Medical School, Boston, Massachusetts, United States
- Ronen Rozenblum
- Division of General Internal Medicine and Primary Care, Brigham and Women's Hospital, Boston, Massachusetts, United States
- Harvard Medical School, Boston, Massachusetts, United States
- Patricia C. Dykes
- Division of General Internal Medicine and Primary Care, Brigham and Women's Hospital, Boston, Massachusetts, United States
- Harvard Medical School, Boston, Massachusetts, United States
- Anuj K. Dalal
- Division of General Internal Medicine and Primary Care, Brigham and Women's Hospital, Boston, Massachusetts, United States
- Harvard Medical School, Boston, Massachusetts, United States
- James C. Benneyan
- Healthcare Systems Engineering Institute, Colleges of Engineering and Health Sciences, Northeastern University, Boston, Massachusetts, United States
- Lisa S. Lehmann
- Veterans Affairs New England Healthcare System, Boston, Massachusetts, United States
- Esteban F. Gershanik
- Division of General Internal Medicine and Primary Care, Brigham and Women's Hospital, Boston, Massachusetts, United States
- Harvard Medical School, Boston, Massachusetts, United States
- David W. Bates
- Division of General Internal Medicine and Primary Care, Brigham and Women's Hospital, Boston, Massachusetts, United States
- Harvard Medical School, Boston, Massachusetts, United States
- Jeffrey L. Schnipper
- Division of General Internal Medicine and Primary Care, Brigham and Women's Hospital, Boston, Massachusetts, United States
- Harvard Medical School, Boston, Massachusetts, United States
9
Bodley T, Kwan JL, Matelski J, Darragh PJ, Cram P. Self-reported test ordering practices among Canadian internal medicine physicians and trainees: a multicenter cross-sectional survey. BMC Health Serv Res 2019; 19:820. [PMID: 31703686] [PMCID: PMC6842191] [DOI: 10.1186/s12913-019-4639-3]
Abstract
Background Over-testing is a recognized problem, but clinicians usually lack information about their personal test ordering volumes. In the absence of data, clinicians rely on self-perception to inform their test ordering practices. In this study we explore clinician self-perception of diagnostic test ordering intensity. Methods We conducted a cross-sectional survey of inpatient General Internal Medicine (GIM) attending physicians and trainees at three Canadian teaching hospitals. We collected information about self-reported test ordering intensity, perception of colleagues' test ordering intensity, and the importance of clinical utility, patient comfort, and cost when ordering tests. We compared responses of clinicians who self-identified as high vs low utilizers of diagnostic tests, and of attending physicians vs trainees. Results Only 15% of inpatient GIM clinicians self-identified as high utilizers of diagnostic tests, while 73% felt that GIM clinicians in aggregate (“others”) order too many tests. Survey respondents identified clinical utility as important when choosing to order tests (selected by 94%), followed by patient comfort (48%) and cost (23%). Self-identified low/average utilizers of diagnostic tests were more likely than high utilizers to report considering cost (27% vs 5%, p = 0.04). Attending physicians were more likely than trainees to consider patient comfort (70% vs 41%, p = 0.01) and cost (42% vs 17%, p = 0.003). Conclusions In the absence of data, providers seem to recognize that over-investigation is a problem, but few self-identify as high test utilizers. Moreover, a significant percentage of respondents did not consider cost or patient discomfort when ordering tests. Our findings highlight challenges in reducing over-testing in the current era.
Affiliation(s)
- Thomas Bodley
- Department of Medicine, University of Toronto, Toronto, ON, Canada
- Janice L Kwan
- Department of Medicine, University of Toronto, Toronto, ON, Canada
- Division of General Internal Medicine, Sinai Health System and University Health Network, Toronto, ON, Canada
- John Matelski
- Division of General Internal Medicine, Sinai Health System and University Health Network, Toronto, ON, Canada
- Biostatistics Research Unit, University Health Network, Toronto, ON, Canada
- Patrick J Darragh
- Department of Medicine, University of Toronto, Toronto, ON, Canada
- Department of Medicine, Michael Garron Hospital, Toronto, ON, Canada
- Peter Cram
- Department of Medicine, University of Toronto, Toronto, ON, Canada
- Division of General Internal Medicine, Sinai Health System and University Health Network, Toronto, ON, Canada
10
Ryskina K, Dine CJ, Gitelman Y, Leri D, Patel M, Kurtzman G, Lin LY, Epstein AJ. Effect of Social Comparison Feedback on Laboratory Test Ordering for Hospitalized Patients: A Randomized Controlled Trial. J Gen Intern Med 2018; 33:1639-1645. [PMID: 29790072] [PMCID: PMC6153251] [DOI: 10.1007/s11606-018-4482-y]
Abstract
BACKGROUND Social comparison feedback is an increasingly popular strategy that uses performance report cards to modify physician behavior. Our objective was to test the effect of such feedback on the ordering of routine laboratory tests for hospitalized patients, a practice considered overused. METHODS This was a single-blinded randomized controlled trial. Between January and June 2016, physicians on six general medicine teams at the Hospital of the University of Pennsylvania were cluster randomized with equal allocation to two arms: (1) those e-mailed a weekly summary of their routine laboratory test ordering versus the service average, with a snapshot of and link to a continuously updated personalized dashboard containing patient-level details, and (2) those who did not receive the intervention. The primary outcome was the count of routine laboratory test orders placed by a physician per patient-day. We modeled the count of orders by each physician per patient-day after the intervention as a function of trial arm and the physician's order count before the intervention. The count outcome was modeled using negative binomial models with adjustment for clustering within teams. RESULTS One hundred and fourteen interns and residents participated. We did not observe a statistically significant difference in adjusted reduction in routine laboratory ordering between the intervention and control physicians (physicians in the intervention group ordered 0.14 fewer tests per patient-day than physicians in the control group, 95% CI -0.56 to 0.27, p = 0.50). Physicians whose absolute ordering rate deviated from the peer rate by more than 1.0 laboratory test per patient-day reduced their laboratory ordering by 0.80 orders per patient-day (95% CI -1.58 to -0.02, p = 0.04).
CONCLUSIONS Personalized social comparison feedback on routine laboratory ordering did not change targeted behavior among physicians, although there was a significant decrease in orders among participants who deviated more from the peer rate. TRIAL REGISTRATION Clinicaltrials.gov registration: #NCT02330289.
Affiliation(s)
- Kira Ryskina
- Division of General Internal Medicine, Department of Medicine, University of Pennsylvania Perelman School of Medicine, Philadelphia, PA, USA.
- Leonard Davis Institute of Health Economics, University of Pennsylvania, Philadelphia, PA, USA.
- C Jessica Dine
- Leonard Davis Institute of Health Economics, University of Pennsylvania, Philadelphia, PA, USA
- Division of Pulmonary and Critical Care, Department of Medicine, University of Pennsylvania Perelman School of Medicine, Philadelphia, PA, USA
- Yevgeniy Gitelman
- Penn Medicine Center for Health Care Innovation, Philadelphia, PA, USA
- Corporal Michael J. Crescenz VA Medical Center, Philadelphia, PA, USA
- Damien Leri
- Penn Medicine Center for Health Care Innovation, Philadelphia, PA, USA
- Mitesh Patel
- Division of General Internal Medicine, Department of Medicine, University of Pennsylvania Perelman School of Medicine, Philadelphia, PA, USA
- Leonard Davis Institute of Health Economics, University of Pennsylvania, Philadelphia, PA, USA
- Penn Medicine Center for Health Care Innovation, Philadelphia, PA, USA
- Corporal Michael J. Crescenz VA Medical Center, Philadelphia, PA, USA
- Gregory Kurtzman
- Division of General Internal Medicine, Department of Medicine, University of Pennsylvania Perelman School of Medicine, Philadelphia, PA, USA
- Lisa Y Lin
- Division of General Internal Medicine, Department of Medicine, University of Pennsylvania Perelman School of Medicine, Philadelphia, PA, USA
- Andrew J Epstein
- Division of General Internal Medicine, Department of Medicine, University of Pennsylvania Perelman School of Medicine, Philadelphia, PA, USA
- Leonard Davis Institute of Health Economics, University of Pennsylvania, Philadelphia, PA, USA
- Corporal Michael J. Crescenz VA Medical Center, Philadelphia, PA, USA
11
Ryskina KL, Smith CD, Arora VM, Zaas AK, Halvorsen AJ, Weissman A, Wahi-Gururaj S. Relationship Between Institutional Investment in High-Value Care (HVC) Performance Improvement and Internal Medicine Residents' Perceptions of HVC Training. ACADEMIC MEDICINE : JOURNAL OF THE ASSOCIATION OF AMERICAN MEDICAL COLLEGES 2018; 93:1517-1523. [PMID: 29697425 PMCID: PMC6442932 DOI: 10.1097/acm.0000000000002257] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/08/2023]
Abstract
PURPOSE To measure the association between institutional investment in high-value care (HVC) performance improvement and resident HVC experiences. METHOD The authors analyzed data from two 2014 surveys assessing institutions' investments in HVC performance improvement as reported by program directors (PDs) and residents' perceptions of the frequency of HVC teaching, participation in HVC-focused quality improvement (QI), and views on HVC topics. The authors measured the association between institutional investment and resident-reported experiences using logistic regression, controlling for program and resident characteristics. RESULTS The sample included 214 programs and 9,854 residents (59.3% of 361 programs, 55.2% of 17,851 residents surveyed). Most PDs (158/209; 75.6%) reported some support. Residents were more likely to report HVC discussions with faculty at least a few times weekly if they trained in programs that offered HVC-focused faculty development (odds ratio [OR] = 1.19; 95% confidence interval [CI] 1.04-1.37; P = .01), that supported such faculty development (OR = 1.21; 95% CI 1.04-1.41; P = .02), or that provided physician cost-of-care performance data (OR = 1.19; 95% CI 1.03-1.39; P = .02). Residents were more likely to report participation in HVC QI if they trained in programs with a formal HVC curriculum (OR = 1.83; 95% CI 1.48-2.27; P < .001) or with HVC-focused faculty development (OR = 1.46; 95% CI 1.15-1.85; P = .002). CONCLUSIONS Institutional investment in HVC-related faculty development and physician feedback on costs of care may increase the frequency of HVC teaching and resident participation in HVC-related QI.
Affiliation(s)
- Kira L Ryskina
- K.L. Ryskina is assistant professor of medicine, Division of General Internal Medicine, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania; ORCID: https://orcid.org/0000-0003-3379-6394. C.D. Smith is vice president, Clinical Programs, American College of Physicians, and adjunct associate professor of medicine, Division of General Internal Medicine, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania; ORCID: https://orcid.org/0000-0002-1910-9546. V.M. Arora is associate professor and director, Graduate Medical Education Clinical Learning Environment Innovation, Section of General Internal Medicine, Department of Medicine, University of Chicago, Chicago, Illinois; ORCID: https://orcid.org/0000-0002-4745-7599. A.K. Zaas is associate professor, Division of Infectious Diseases and International Health, and program director, Duke Internal Medicine Residency, Duke University School of Medicine, Duke University, Durham, North Carolina. A.J. Halvorsen is assistant professor of medicine, Office of Educational Innovations, Internal Medicine Residency Program, Mayo Clinic, Rochester, Minnesota; ORCID: https://orcid.org/0000-0003-1272-616X. A. Weissman is director, Research Center, American College of Physicians, Philadelphia, Pennsylvania. S. Wahi-Gururaj is associate professor of medicine, Section of General Internal Medicine, and program director, Internal Medicine Residency, Department of Internal Medicine, University of Nevada, Las Vegas School of Medicine, Las Vegas, Nevada
12
Abstract
Medicare reimbursement for hospitals is increasingly tied to performance. The use of individual provider performance reports offers the potential to improve clinical outcomes through social comparison, and isolated cases of clinical dashboard uses at specific institutions have been previously reported. However, little is known about overall trends in how hospitals use the electronic health record to track and provide feedback on provider performance. We used data from 2013 to 2015 from the American Hospital Association (AHA) Annual Survey Information Technology Supplement, which asked hospitals if they have used electronic data to create performance profiles. We linked these data to AHA Annual Survey responses for all general adult and pediatric hospitals. Multivariable logistic regression was used to model the odds of use as a function of hospital characteristics. In 2015, 65.8% of the 2334 respondents used performance profiles, whereas 59.3% of the 2077 respondents used them in 2013. Report use was associated with non-profit status (odds ratio [OR], 2.77; 95% confidence interval [CI], 1.94-3.95) compared to for-profit, large hospital size (OR, 2.37; 95% CI, 1.56-3.60) compared to small size, highest quartile of bed-adjusted expenditures compared to bottom quartile (OR, 2.09; 95% CI, 1.55-2.82; P < .01), and participation in a health maintenance organization (OR, 1.50; 95% CI, 1.17-1.90; P < .01) or bundled payment program (OR, 1.61; 95% CI, 1.18-2.19; P < .01). While a majority of hospitals now use such profiles, more than a third do not. The hospitals that do not use performance profiles may be less well positioned to adapt to value-based payment reforms.
Affiliation(s)
- Joshua A. Rolnick
- Department of Medicine, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania
- National Clinician Scholars Program, University of Pennsylvania, Philadelphia, Pennsylvania
- Leonard Davis Institute of Health Economics, University of Pennsylvania, Philadelphia, Pennsylvania
- Corporal Michael J. Crescenz Veterans Affairs Medical Center, Philadelphia, Pennsylvania
- Address for correspondence: Joshua A. Rolnick, MD, JD, University of Pennsylvania, National Clinician Scholars Program, Blockley Hall, 13th Floor, 423 Guardian Drive, Philadelphia, PA 19104-6021; Telephone: 617-538-5191; Fax: 610-642-4380;
- Kira L. Ryskina
- Department of Medicine, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania
- Leonard Davis Institute of Health Economics, University of Pennsylvania, Philadelphia, Pennsylvania
13
Ellenbogen MI, O'Leary KJ. Reducing Routine Labs-Teaching Residents Restraint. J Hosp Med 2017; 12:781-782. [PMID: 28914290 DOI: 10.12788/jhm.2817] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 11/20/2022]
Affiliation(s)
- Michael I Ellenbogen
- Hospitalist Program, Division of General Internal Medicine, Johns Hopkins School of Medicine, Baltimore, Maryland, USA.
- Kevin J O'Leary
- Division of Hospital Medicine, Feinberg School of Medicine, Northwestern University, Chicago, Illinois, USA