1. Thoppil J, Kraut L, Montgomery C, Castillo W, Silverman R, Gupta S, Davis F. A retrospective analysis of gender among patients admitted to a clinical decision unit at risk for acute coronary syndrome. World J Emerg Med 2023;14:133-137. PMID: 36911051; PMCID: PMC9999137; DOI: 10.5847/wjem.j.1920-8642.2023.018.
Affiliations
- Joby Thoppil: Department of Emergency Medicine, University of Texas Southwestern Medical Center, Texas 75002, USA
- Lauren Kraut: Department of Emergency Medicine, University of Texas Southwestern Medical Center, Texas 75002, USA
- Collin Montgomery: Department of Emergency Medicine, Long Island Jewish Medical Center, Queens, NY 11040, USA
- Wilfrido Castillo: Department of Emergency Medicine, Long Island Jewish Medical Center, Queens, NY 11040, USA
- Robert Silverman: Department of Emergency Medicine, Long Island Jewish Medical Center, Queens, NY 11040, USA
- Sanjey Gupta: Department of Emergency Medicine, South Shore Hospital, Bay Shore, NY 11706, USA
- Frederick Davis: Department of Emergency Medicine, Long Island Jewish Medical Center, Queens, NY 11040, USA
2. Reading Turchioe M, Volodarskiy A, Pathak J, Wright DN, Tcheng JE, Slotwiner D. Systematic review of current natural language processing methods and applications in cardiology. Heart 2021;108:909-916. PMID: 34711662; DOI: 10.1136/heartjnl-2021-319769.
Abstract
Natural language processing (NLP) is a set of automated methods to organise and evaluate the information contained in unstructured clinical notes, which are a rich source of real-world data from clinical care that may be used to improve outcomes and understanding of disease in cardiology. The purpose of this systematic review is to provide an understanding of NLP, review how it has been used to date within cardiology and illustrate the opportunities that this approach provides for both research and clinical care. We systematically searched six scholarly databases (ACM Digital Library, arXiv, Embase, IEEE Xplore, PubMed and Scopus) for studies published in 2015-2020 describing the development or application of NLP methods for clinical text focused on cardiac disease. Studies were excluded if they were not published in English, lacked a description of NLP methods, did not focus on cardiac disease, or were duplicates. Two independent reviewers extracted general study information, clinical details and NLP details, and appraised quality using a checklist of quality indicators for NLP studies. We identified 37 studies developing and applying NLP in heart failure, imaging, coronary artery disease, electrophysiology, general cardiology and valvular heart disease. Most studies used NLP to identify patients with a specific diagnosis and extract disease severity using rule-based NLP methods. Some used NLP algorithms to predict clinical outcomes. A major limitation is the inability to aggregate findings across studies due to vastly different NLP methods, evaluation and reporting. This review reveals numerous opportunities for future NLP work in cardiology with more diverse patient samples, cardiac diseases, datasets, methods and applications.
Affiliations
- Meghan Reading Turchioe: Department of Population Health Sciences, Division of Health Informatics, Weill Cornell Medicine, New York, New York, USA
- Alexander Volodarskiy: Department of Medicine, Division of Cardiology, NewYork-Presbyterian Hospital, New York, New York, USA
- Jyotishman Pathak: Department of Population Health Sciences, Division of Health Informatics, Weill Cornell Medicine, New York, New York, USA
- Drew N Wright: Samuel J. Wood Library & C.V. Starr Biomedical Information Center, Weill Cornell Medical College, New York, New York, USA
- James Enlou Tcheng: Department of Medicine, Duke University School of Medicine, Durham, North Carolina, USA
- David Slotwiner: Department of Population Health Sciences, Division of Health Informatics, Weill Cornell Medicine, New York, New York, USA; Department of Medicine, Division of Cardiology, NewYork-Presbyterian Hospital, New York, New York, USA
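Most of the studies reviewed in this entry identified patients with a specific diagnosis using rule-based NLP on note text. A minimal sketch of that style of method follows; the pattern list, the crude negation check, and the example notes are all hypothetical, not drawn from any reviewed study:

```python
import re

# Hypothetical rule set in the style of rule-based clinical NLP: regex
# patterns that flag a heart-failure mention, plus a crude negation check.
HF_PATTERNS = [
    r"\bheart failure\b",
    r"\bcongestive heart failure\b",
    r"\bCHF\b",
    r"\breduced ejection fraction\b",
]
NEGATION = re.compile(r"\b(no|denies|without|negative for)\b[^.]{0,40}$",
                      re.IGNORECASE)

def flags_heart_failure(note: str) -> bool:
    """Return True if any pattern matches outside a negated context."""
    for pattern in HF_PATTERNS:
        for match in re.finditer(pattern, note, flags=re.IGNORECASE):
            # Check the text just before the match for a negation cue.
            if not NEGATION.search(note[:match.start()]):
                return True
    return False

notes = [
    "Patient admitted with acute CHF exacerbation.",
    "Denies chest pain; negative for heart failure.",
]
print([flags_heart_failure(n) for n in notes])  # [True, False]
```

Real systems layer many more rules (section detection, abbreviation handling, richer negation scopes), but the review's observation holds even at this scale: each new phenotype requires hand-written patterns, which is one reason findings are hard to aggregate across studies.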
3. Big data analytics in health sector: theoretical framework, techniques and prospects. International Journal of Information Management 2020. DOI: 10.1016/j.ijinfomgt.2019.05.003.
4. Ordu M, Demir E, Tofallis C. A comprehensive modelling framework to forecast the demand for all hospital services. Int J Health Plann Manage 2019;34:e1257-e1271. PMID: 30901132; DOI: 10.1002/hpm.2771.
Abstract
BACKGROUND Because of increasing demand, hospitals in England are currently under intense pressure, resulting in shortages of beds, nurses, clinicians, and equipment. To cope effectively with this demand, management needs to know accurately how many patients are expected to use each of the hospital's services in the future, not just one service but all of them. PURPOSE A forecasting modelling framework is developed for all of a hospital's acute services, including all specialties within outpatient and inpatient settings and the accident and emergency (A&E) department. The objective is to support management in dealing with demand and planning ahead effectively. METHODOLOGY/APPROACH Having established a theoretical framework, we used the national episode statistics dataset to systematically capture demand for all specialties. Three popular forecasting methodologies, namely autoregressive integrated moving average (ARIMA), exponential smoothing, and multiple linear regression, were used. A fourth technique, seasonal and trend decomposition using the loess function (STLF), was applied for the first time within the context of health-care forecasting. RESULTS According to goodness-of-fit and forecast accuracy measures, 64 best forecasting models and periods (daily, weekly, or monthly forecasts) were selected out of 760 developed models; i.e., demand was forecast for 38 outpatient specialties (first referrals and follow-ups), 25 inpatient specialties (elective and non-elective admissions), and for A&E. CONCLUSION This study has confirmed that the best demand estimates arise from different forecasting methods and forecasting periods (i.e., one size does not fit all). Although the STLF method was applied here for the first time, it outperformed traditional time series forecasting methods (i.e., ARIMA and exponential smoothing) for a number of specialties.
PRACTICE IMPLICATIONS Knowing the peaks and troughs of demand for an entire hospital will enable management to (a) plan ahead effectively; (b) ensure necessary resources are in place (e.g., beds and staff); (c) better manage budgets, ensuring enough cash is available; and (d) reduce risk.
Affiliations
- Muhammed Ordu: University of Hertfordshire, Hertfordshire Business School, Hatfield, UK
- Eren Demir: University of Hertfordshire, Hertfordshire Business School, Hatfield, UK
- Chris Tofallis: University of Hertfordshire, Hertfordshire Business School, Hatfield, UK
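Of the forecasting methods this study compares, simple exponential smoothing is the most compact to illustrate. A minimal sketch on made-up weekly A&E attendance counts follows; the smoothing constant (alpha=0.3) and the data are illustrative only, whereas the paper fits and selects models per specialty and per forecast period:

```python
def exponential_smoothing(series, alpha=0.3):
    """Simple exponential smoothing: each smoothed level blends the newest
    observation with the previous level. The final level serves as a flat
    one-step-ahead forecast."""
    level = series[0]
    levels = [level]
    for y in series[1:]:
        level = alpha * y + (1 - alpha) * level
        levels.append(level)
    return levels

# Hypothetical weekly A&E attendance counts.
weekly_attendances = [310, 295, 330, 342, 318, 305, 351]
fitted = exponential_smoothing(weekly_attendances, alpha=0.3)
forecast = fitted[-1]  # flat one-step-ahead forecast
print(round(forecast, 1))
```

ARIMA and STL-based methods extend this idea with autocorrelation structure and an explicit trend/seasonal/remainder decomposition, which is why, as the abstract notes, different specialties end up best served by different methods and periods.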
5. Helgheim BI, Maia R, Ferreira JC, Martins AL. Merging data diversity of clinical medical records to improve effectiveness. Int J Environ Res Public Health 2019;16(5):769. PMID: 30832447; PMCID: PMC6427263; DOI: 10.3390/ijerph16050769.
Abstract
Medicine is a knowledge area continuously experiencing change. Every day, discoveries and procedures are tested with the goal of providing improved service and quality of life to patients. With the evolution of computer science, multiple areas have seen productivity increase through the implementation of new technical solutions, and medicine is no exception. Providing healthcare services in the future will involve the storage and manipulation of large volumes of data (big data) from medical records, requiring the integration of different data sources for a multitude of purposes, such as prediction, prevention, personalization, participation, and becoming digital. Data integration and data sharing will be essential to achieve these goals. Our work focuses on the development of a framework process for the integration of data from different sources to increase its usability potential. We integrated data from an internal hospital database, external data, and structured data resulting from natural language processing (NLP) applied to electronic medical records. An extract-transform-load (ETL) process was used to merge the different data sources into a single one, allowing more effective use of these data and, eventually, contributing to more efficient use of the available resources.
Affiliations
- Berit I Helgheim: Logistics, Molde University College, NO-6410 Molde, Norway
- Rui Maia: DEI, Instituto Superior Técnico, Lisboa 1049-001, Portugal
- Joao C Ferreira: Instituto Universitário de Lisboa (ISCTE-IUL), ISTAR-IUL, Lisbon 1649-026, Portugal
- Ana Lucia Martins: Instituto Universitário de Lisboa (ISCTE-IUL), BRU-IUL, Lisbon 1649-026, Portugal
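The extract-transform-load step this abstract describes can be sketched in a few lines. All source records, field names, and the patient_id join key below are hypothetical, not the authors' actual schema:

```python
# Minimal extract-transform-load sketch: pull records from two hypothetical
# sources (a hospital database and structured NLP output), align them on a
# shared patient_id key, and load them into one merged store.

hospital_db = [
    {"patient_id": "P1", "age": 64, "ward": "cardiology"},
    {"patient_id": "P2", "age": 51, "ward": "surgery"},
]
nlp_output = [  # structured facts extracted from free-text notes
    {"patient_id": "P1", "finding": "chest pain"},
]

def etl_merge(*sources):
    """Merge records sharing a patient_id into single dictionaries."""
    merged = {}
    for source in sources:
        for record in source:
            merged.setdefault(record["patient_id"], {}).update(record)
    return merged

store = etl_merge(hospital_db, nlp_output)
print(store["P1"])  # {'patient_id': 'P1', 'age': 64, 'ward': 'cardiology', 'finding': 'chest pain'}
```

A production ETL pipeline adds the parts elided here: source-specific extraction, schema validation, conflict resolution when sources disagree, and loading into a persistent store rather than an in-memory dictionary.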
6. An electronic dashboard to monitor patient flow at the Johns Hopkins Hospital: communication of key performance indicators using the Donabedian model. J Med Syst 2018;42:133. PMID: 29915933; DOI: 10.1007/s10916-018-0988-4.
Abstract
Efforts to monitor and manage hospital capacity depend on the ability to extract relevant time-stamped data from electronic medical records and other information technologies. However, the various characterizations of patient flow, cohort decisions, sub-processes, and the diverse stakeholders requiring data visibility add further complexity. We use the Donabedian model to prioritize patient flow metrics and build an electronic dashboard for enabling communication. Ten metrics were identified as key indicators, including outcome metrics (length of stay, 30-day readmission, operating room exit delays, capacity-related diversions), process metrics (timely inpatient unit discharge, emergency department disposition), and structural metrics (occupancy, discharge volume, boarding, bed assignment duration). Dashboard users provided real-life examples of how the tool is assisting capacity improvement efforts, and user traffic data revealed an upward trend in dashboard utilization from May to October 2017 (26 to 148 views per month, respectively). Our main contributions are twofold: first, the results and methods for selecting key performance indicators for a unit, a department, and the entire hospital (i.e., separating signal from noise); second, an electronic dashboard deployed and used at The Johns Hopkins Hospital to visualize these ten metrics and communicate them systematically to hospital stakeholders. Integration of diverse information technology may create further opportunities for improved hospital capacity.
7. Gabriel RA, Kuo TT, McAuley J, Hsu CN. Identifying and characterizing highly similar notes in big clinical note datasets. J Biomed Inform 2018;82:63-69. PMID: 29679685; DOI: 10.1016/j.jbi.2018.04.009.
Abstract
BACKGROUND Big clinical note datasets found in electronic health records (EHR) present substantial opportunities to train accurate statistical models that identify patterns in patient diagnosis and outcomes. However, near-to-exact duplication in note texts is a common issue in many clinical note datasets. We aimed to use a scalable algorithm to de-duplicate notes and further characterize the sources of duplication. METHODS We use an approximation algorithm to minimize pairwise comparisons consisting of three phases: (1) Minhashing with Locality Sensitive Hashing; (2) a clustering method using tree-structured disjoint sets; and (3) classification of near-duplicates (exact copies, common machine output notes, or similar notes) via pairwise comparison of notes in each cluster. We use the Jaccard Similarity (JS) to measure similarity between two documents. We analyzed two big clinical note datasets: our institutional dataset and MIMIC-III. RESULTS There were 1,528,940 notes analyzed from our institution. The de-duplication algorithm completed in 36.3 h. When the JS threshold was set at 0.7, the total number of clusters was 82,371 (total notes = 304,418). Among all JS thresholds, no clusters contained pairs of notes that were incorrectly clustered. When the JS threshold was set at 0.9 or 1.0, the de-duplication algorithm captured 100% of all random pairs with their JS at least as high as the set thresholds from the validation set. Similar performance was noted when analyzing the MIMIC-III dataset. CONCLUSIONS We showed that among the EHR from our institution and from the publicly-available MIMIC-III dataset, there were a significant number of near-to-exact duplicated notes.
Affiliations
- Rodney A Gabriel: UCSD Health Department of Biomedical Informatics, University of California, San Diego, 9500 Gilman Dr, La Jolla, CA 92093, USA; Department of Anesthesiology, University of California, San Diego, 200 West Arbor Dr, San Diego, CA 92103, USA
- Tsung-Ting Kuo: UCSD Health Department of Biomedical Informatics, University of California, San Diego, 9500 Gilman Dr, La Jolla, CA 92093, USA
- Julian McAuley: Department of Computer Science and Engineering, University of California, San Diego, 9500 Gilman Dr, La Jolla, CA 92093, USA
- Chun-Nan Hsu: UCSD Health Department of Biomedical Informatics, University of California, San Diego, 9500 Gilman Dr, La Jolla, CA 92093, USA
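The Jaccard similarity this paper measures between notes, and the MinHash signatures that approximate it without exhaustive pairwise comparison, can be sketched as follows. The shingle size, the number of hash functions, and the example notes are arbitrary choices for illustration, not the paper's parameters:

```python
import random

def shingles(text, k=5):
    """Set of overlapping character k-grams representing a note."""
    return {text[i:i + k] for i in range(len(text) - k + 1)}

def jaccard(a, b):
    """Exact Jaccard similarity: intersection size over union size."""
    return len(a & b) / len(a | b)

def minhash_signature(items, num_hashes=64, seed=0):
    """MinHash signature: the minimum of each salted hash over the set.
    The fraction of matching slots between two signatures estimates
    their Jaccard similarity, enabling cheap near-duplicate detection."""
    rng = random.Random(seed)
    salts = [rng.getrandbits(32) for _ in range(num_hashes)]
    return [min(hash((salt, x)) for x in items) for salt in salts]

note_a = "Patient stable overnight, no acute events."
note_b = "Patient stable overnight, no acute events noted."
exact = jaccard(shingles(note_a), shingles(note_b))
sig_a = minhash_signature(shingles(note_a))
sig_b = minhash_signature(shingles(note_b))
estimate = sum(x == y for x, y in zip(sig_a, sig_b)) / len(sig_a)
print(round(exact, 2), round(estimate, 2))
```

The paper's pipeline goes further: Locality Sensitive Hashing bands the signatures so that only notes landing in a shared bucket are compared, which is what keeps millions of notes tractable.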
8. Névéol A, Zweigenbaum P. Making sense of big textual data for health care: findings from the section on clinical natural language processing. Yearb Med Inform 2017;26:228-234. PMID: 29063569; PMCID: PMC6239234; DOI: 10.15265/iy-2017-027.
Abstract
Objectives: To summarize recent research and present a selection of the best papers published in 2016 in the field of clinical natural language processing (NLP). Method: A survey of the literature was performed by the two section editors of the IMIA Yearbook NLP section. Bibliographic databases were searched for papers with a focus on NLP efforts applied to clinical texts or aimed at a clinical outcome. Papers were automatically ranked and then manually reviewed based on titles and abstracts. A shortlist of candidate best papers was first selected by the section editors before being peer-reviewed by independent external reviewers. Results: The five clinical NLP best papers provide contributions that range from emerging foundational methods to the transition of solid, established research results into practical clinical settings. They offer a framework for abbreviation disambiguation and coreference resolution, a classification method to identify clinically useful sentences, an analysis of counseling conversations to improve support for patients with mental disorders, and a treatment of the grounding of gradable adjectives. Conclusions: Clinical NLP continued to thrive in 2016, with an increasing number of contributions towards applications compared to fundamental methods. Fundamental work addresses increasingly complex problems such as lexical semantics, coreference resolution, and discourse analysis. Research results translate into freely available tools, mainly for English.
Affiliations
- A. Névéol: LIMSI, CNRS, Université Paris Saclay, Orsay, France