1. Bahl A, Johnson S, Mielke N, Blaivas M, Blaivas L. Anticipating impending peripheral intravenous catheter failure: A diagnostic accuracy observational study combining ultrasound and artificial intelligence to improve clinical care. J Vasc Access 2025:11297298241307055. PMID: 39831402. DOI: 10.1177/11297298241307055.
Abstract
OBJECTIVE: Peripheral intravenous catheter (PIVC) failure occurs in approximately 50% of insertions. Unexpected PIVC failure leads to treatment delays, longer hospitalizations, and increased risk of patient harm. In current practice there is no method to predict PIVC failure before a grossly obvious complication has occurred. The aim of this study is to demonstrate the diagnostic accuracy of a predictive model for PIVC failure based on artificial intelligence (AI).
METHODS: This study evaluated the capabilities of a novel machine learning algorithm. The algorithm was trained on real-world ultrasound videos of PIVC sites with the goal of predicting which PIVCs would fail within the following day. After training, the AI models were validated on a separate, unseen collection of real-world ultrasound videos of PIVC sites.
RESULTS: 2133 ultrasound videos (361 failure and 1772 non-failure) were used for algorithm development. When the algorithm was tasked with predicting failure in the unseen collection of videos, the best achieved results were an accuracy of 0.93, sensitivity of 0.77, specificity of 0.98, positive predictive value of 0.91, negative predictive value of 0.93, and area under the curve of 0.87.
CONCLUSIONS: This proprietary and novel machine learning algorithm can accurately and reliably predict PIVC failure 1 day before clinically evident failure. Implementing this technology in the patient care setting would give clinicians timely information to plan for and manage impending device failure. Future research on the use of AI technology with PIVCs should focus on improving catheter function and longevity while limiting complication rates.
Affiliation(s)
- Amit Bahl: Department of Emergency Medicine, Beaumont Hospital, Royal Oak, MI, USA
- Steven Johnson: Department of Anesthesia Critical Care, University of Southern California, Los Angeles, CA, USA
- Nicholas Mielke: Department of Medicine, Creighton University School of Medicine, Omaha, NE, USA
- Michael Blaivas: Department of Medicine, University of South Carolina School of Medicine, Columbia, SC, USA
- Laura Blaivas: Department of Environmental Sciences, Michigan State University, Lansing, MI, USA
2. Wu D, Smith D, VanBerlo B, Roshankar A, Lee H, Li B, Ali F, Rahman M, Basmaji J, Tschirhart J, Ford A, VanBerlo B, Durvasula A, Vannelli C, Dave C, Deglint J, Ho J, Chaudhary R, Clausdorff H, Prager R, Millington S, Shah S, Buchanan B, Arntfield R. Improving the Generalizability and Performance of an Ultrasound Deep Learning Model Using Limited Multicenter Data for Lung Sliding Artifact Identification. Diagnostics (Basel) 2024; 14:1081. PMID: 38893608. PMCID: PMC11172006. DOI: 10.3390/diagnostics14111081.
Abstract
Deep learning (DL) models for medical image classification frequently struggle to generalize to data from outside institutions. Additional clinical data are also rarely collected to comprehensively assess and understand model performance amongst subgroups. Following the development of a single-center model to identify the lung sliding artifact on lung ultrasound (LUS), we pursued a validation strategy using external LUS data. As annotated LUS data are relatively scarce compared with other medical imaging data, we adopted a novel technique to optimize the use of limited external data to improve model generalizability. Externally acquired LUS data from three tertiary care centers, totaling 641 clips from 238 patients, were used to assess the baseline generalizability of our lung sliding model. We then employed our novel Threshold-Aware Accumulative Fine-Tuning (TAAFT) method to fine-tune the baseline model and determine the minimum amount of data required to achieve predefined performance goals. A subgroup analysis was also performed and Grad-CAM++ explanations were examined. The final model was fine-tuned on one-third of the external dataset to achieve 0.917 sensitivity, 0.817 specificity, and 0.920 area under the receiver operating characteristic curve (AUC) on the external validation dataset, exceeding our predefined performance goals. Subgroup analyses identified the LUS characteristics that most challenged the model's performance. Grad-CAM++ saliency maps highlighted clinically relevant regions on M-mode images. We report a multicenter study that exploits limited available external data to improve the generalizability and performance of our lung sliding model while identifying poorly performing subgroups to inform future iterative improvements. This approach may contribute to efficiencies for DL researchers working with smaller quantities of external validation data.
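TAAFT itself is not publicly described in detail here. As a rough illustration of the general idea (growing the external fine-tuning subset in fixed increments until predefined performance goals are met on a validation set), a sketch might look like the following. The `fine_tune` and `evaluate` callables are hypothetical placeholders, not the paper's code:

```python
# Sketch of threshold-aware accumulative fine-tuning: enlarge the external
# training subset in fixed increments, fine-tuning and re-evaluating each
# time, and stop once predefined performance goals are met.

def accumulative_fine_tune(model, external_data, fine_tune, evaluate,
                           goals=None, increment=0.1):
    """fine_tune(model, subset) -> model; evaluate(model) -> metric dict."""
    if goals is None:
        goals = {"sensitivity": 0.90, "specificity": 0.80}
    fraction, metrics = 0.0, {}
    while fraction < 1.0:
        fraction = min(1.0, fraction + increment)
        subset = external_data[: int(len(external_data) * fraction)]
        model = fine_tune(model, subset)
        metrics = evaluate(model)
        if all(metrics[k] >= v for k, v in goals.items()):
            break  # goals met with this fraction of the external data
    return model, fraction, metrics
```

The returned `fraction` is the smallest tested share of external data that satisfied the goals, mirroring the paper's finding that one-third of the external dataset sufficed.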
Affiliation(s)
- Derek Wu: Department of Medicine, Western University, London, ON N6A 5C1, Canada
- Delaney Smith: Faculty of Mathematics, University of Waterloo, Waterloo, ON N2L 3G1, Canada
- Blake VanBerlo: Faculty of Mathematics, University of Waterloo, Waterloo, ON N2L 3G1, Canada
- Amir Roshankar: Faculty of Engineering, University of Waterloo, Waterloo, ON N2L 3G1, Canada
- Hoseok Lee: Faculty of Mathematics, University of Waterloo, Waterloo, ON N2L 3G1, Canada
- Brian Li: Faculty of Engineering, University of Waterloo, Waterloo, ON N2L 3G1, Canada
- Faraz Ali: Faculty of Engineering, University of Waterloo, Waterloo, ON N2L 3G1, Canada
- Marwan Rahman: Faculty of Engineering, University of Waterloo, Waterloo, ON N2L 3G1, Canada
- John Basmaji: Division of Critical Care Medicine, Western University, London, ON N6A 5C1, Canada
- Jared Tschirhart: Schulich School of Medicine and Dentistry, Western University, London, ON N6A 5C1, Canada
- Alex Ford: Independent Researcher, London, ON N6A 1L8, Canada
- Bennett VanBerlo: Faculty of Engineering, Western University, London, ON N6A 5C1, Canada
- Ashritha Durvasula: Schulich School of Medicine and Dentistry, Western University, London, ON N6A 5C1, Canada
- Claire Vannelli: Schulich School of Medicine and Dentistry, Western University, London, ON N6A 5C1, Canada
- Chintan Dave: Division of Critical Care Medicine, Western University, London, ON N6A 5C1, Canada
- Jason Deglint: Faculty of Engineering, University of Waterloo, Waterloo, ON N2L 3G1, Canada
- Jordan Ho: Department of Family Medicine, Western University, London, ON N6A 5C1, Canada
- Rushil Chaudhary: Department of Medicine, Western University, London, ON N6A 5C1, Canada
- Hans Clausdorff: Departamento de Medicina de Urgencia, Pontificia Universidad Católica de Chile, Santiago 8331150, Chile
- Ross Prager: Division of Critical Care Medicine, Western University, London, ON N6A 5C1, Canada
- Scott Millington: Department of Critical Care Medicine, University of Ottawa, Ottawa, ON K1N 6N5, Canada
- Samveg Shah: Department of Medicine, University of Alberta, Edmonton, AB T6G 2R3, Canada
- Brian Buchanan: Department of Critical Care, University of Alberta, Edmonton, AB T6G 2R3, Canada
- Robert Arntfield: Division of Critical Care Medicine, Western University, London, ON N6A 5C1, Canada
3. Guo H, Somayajula SA, Hosseini R, Xie P. Improving image classification of gastrointestinal endoscopy using curriculum self-supervised learning. Sci Rep 2024; 14:6100. PMID: 38480815. PMCID: PMC10937990. DOI: 10.1038/s41598-024-53955-8.
Abstract
Endoscopy, a widely used medical procedure for examining the gastrointestinal (GI) tract to detect potential disorders, poses challenges in manual diagnosis due to non-specific symptoms and difficulties in accessing affected areas. While supervised machine learning models have proven effective in assisting clinical diagnosis of GI disorders, their applicability is limited by the scarcity of image-label pairs created by medical experts. To address these limitations, we propose a curriculum self-supervised learning framework inspired by human curriculum learning. Our approach leverages the HyperKvasir dataset, which comprises 100k unlabeled GI images for pre-training and 10k labeled GI images for fine-tuning. With the proposed method, we achieved a top-1 accuracy of 88.92% and an F1 score of 73.39%, an increase of 2.1% in top-1 accuracy and 1.9% in F1 score over vanilla SimSiam. The combination of self-supervised learning and a curriculum-based approach demonstrates the efficacy of our framework in advancing the diagnosis of GI disorders. Our study highlights the potential of curriculum self-supervised learning in utilizing unlabeled GI tract images to improve the diagnosis of GI disorders, paving the way for more accurate and efficient diagnosis in GI endoscopy.
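The abstract does not specify the curriculum schedule. One common formulation orders training samples from easy to hard by a difficulty score and widens the training pool each epoch; a generic sketch follows, where the `difficulty` scoring function is a hypothetical stand-in (e.g., a pretext-task loss), not the authors' method:

```python
# Generic curriculum schedule: rank samples by an assumed difficulty score
# and expose the learner to an expanding easy-to-hard subset each epoch.

def curriculum_batches(samples, difficulty, epochs):
    ranked = sorted(samples, key=difficulty)              # easiest first
    for epoch in range(1, epochs + 1):
        cutoff = max(1, round(len(ranked) * epoch / epochs))
        yield ranked[:cutoff]                             # pool grows per epoch
```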
Affiliation(s)
- Han Guo: Department of Electrical and Computer Engineering, University of California, San Diego, San Diego, 92093, USA
- Sai Ashish Somayajula: Department of Electrical and Computer Engineering, University of California, San Diego, San Diego, 92093, USA
- Ramtin Hosseini: Department of Electrical and Computer Engineering, University of California, San Diego, San Diego, 92093, USA
- Pengtao Xie: Department of Electrical and Computer Engineering, University of California, San Diego, San Diego, 92093, USA
4. Lien WC, Chang YC, Chou HH, Lin LC, Liu YP, Liu L, Chan YT, Kuan FS. Detecting Hydronephrosis Through Ultrasound Images Using State-of-the-Art Deep Learning Models. Ultrasound Med Biol 2023; 49:723-733. PMID: 36509616. DOI: 10.1016/j.ultrasmedbio.2022.10.001.
Abstract
The goal of this study was to assess the feasibility of three models for detecting hydronephrosis in ultrasound images using state-of-the-art deep learning algorithms. The diagnosis of hydronephrosis is challenging because of varying and non-specific presentations. With its ready accessibility, lack of radiation exposure, and suitability for repeated assessments, point-of-care ultrasound has become a complementary diagnostic tool for hydronephrosis; however, inter-observer variability persists even after time-consuming training. Artificial intelligence has the potential to overcome these human limitations. A total of 3462 ultrasound frames from 97 patients with hydronephrosis confirmed by expert nephrologists were included, along with 1628 ultrasound frames extracted from 265 controls who had normal renal ultrasonography. We built three deep learning models based on U-Net, Res-UNet, and UNet++ and compared their performance. We applied pre-processing techniques, including background removal with YOLOv4 to lessen interference and standardization of image sizes, as well as post-processing techniques such as filtering out small effusion areas. The Res-UNet algorithm had the best performance in detecting moderate/severe hydronephrosis, with an accuracy of 94.6% and substantial recall, specificity, precision, F1 measure, and intersection over union. It could decrease variability among sonographers and improve efficiency under clinical conditions.
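The "filtering out small effusion areas" post-processing step can be illustrated generically as removing connected components below a minimum area from a binary segmentation mask. This is a sketch of the standard technique, not the authors' implementation:

```python
# Remove connected components smaller than `min_area` from a binary mask
# (list of 0/1 rows), using 4-connectivity and a breadth-first flood fill.
from collections import deque

def filter_small_regions(mask, min_area):
    h, w = len(mask), len(mask[0])
    out = [[0] * w for _ in range(h)]
    seen = [[False] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            if mask[i][j] and not seen[i][j]:
                comp, q = [], deque([(i, j)])   # flood-fill one component
                seen[i][j] = True
                while q:
                    y, x = q.popleft()
                    comp.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            q.append((ny, nx))
                if len(comp) >= min_area:       # keep only large-enough regions
                    for y, x in comp:
                        out[y][x] = 1
    return out
```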
Affiliation(s)
- Wan-Ching Lien: Department of Emergency Medicine, National Taiwan University Hospital, Taipei, Taiwan; Department of Emergency Medicine, College of Medicine, National Taiwan University, Taipei, Taiwan
- Yi-Chung Chang: Department of Computer Science and Engineering, National Chi Nan University, Nantou, Taiwan
- Hsin-Hung Chou: Department of Computer Science and Engineering, National Chi Nan University, Nantou, Taiwan
- Lung-Chun Lin: Department of Emergency Medicine, National Taiwan University Hospital, Taipei, Taiwan; Department of Emergency Medicine, College of Medicine, National Taiwan University, Taipei, Taiwan
- Yueh-Ping Liu: Department of Emergency Medicine, National Taiwan University Hospital, Taipei, Taiwan; Department of Medical Affairs, Ministry of Health and Welfare, Taipei, Taiwan
- Li Liu: Show Chwan Health Care System, Taipei, Taiwan
- Yen-Ting Chan: Department of Research Planning, Omni Health Group Inc., Taipei, Taiwan
- Feng-Sen Kuan: Department of Business Development, Huasin H. T. Limited, Taipei, Taiwan
5. Chung YW, Choi IY. Detection of abnormal extraocular muscles in small datasets of computed tomography images using a three-dimensional variational autoencoder. Sci Rep 2023; 13:1765. PMID: 36720904. PMCID: PMC9889739. DOI: 10.1038/s41598-023-28082-5.
Abstract
We sought to establish an unsupervised algorithm with a three-dimensional (3D) variational autoencoder (VAE) model for the detection of abnormal extraocular muscles in small datasets of orbital computed tomography (CT) images. A total of 334 CT images of normal orbits and 96 of abnormal orbits diagnosed as thyroid eye disease were used for training and validation; 24 normal and 11 abnormal orbits were used for testing. A 3D VAE was developed and trained. All images were preprocessed to emphasize extraocular muscles and to suppress background noise (e.g., high signal intensity from bones). The optimal cut-off value was identified through receiver operating characteristic (ROC) curve analysis. The model's ability to detect muscles of abnormal size was assessed by visualization. The model achieved a sensitivity of 79.2%, specificity of 72.7%, accuracy of 77.1%, F1-score of 0.667, and AUROC of 0.801. Abnormal CT images correctly identified by the model showed differences in the reconstruction of extraocular muscles. The proposed model showed potential to detect abnormalities in extraocular muscles using a small dataset, similar to the diagnostic approach used by physicians. Unsupervised learning could serve as an alternative detection method for medical imaging studies in which annotation is difficult or impossible.
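The optimal cut-off in ROC analyses of this kind is often the threshold maximizing Youden's J (sensitivity + specificity - 1). A minimal sketch, assuming per-image anomaly scores such as VAE reconstruction errors; this is a generic illustration, not the authors' code:

```python
# Pick the anomaly-score threshold that maximizes Youden's J.
# scores: per-image anomaly scores (higher = more abnormal);
# labels: 1 = abnormal, 0 = normal.

def best_cutoff(scores, labels):
    best_t, best_j = None, -1.0
    for t in sorted(set(scores)):
        # classify "abnormal" when the score is >= t
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
        fn = sum(1 for s, y in zip(scores, labels) if s < t and y == 1)
        tn = sum(1 for s, y in zip(scores, labels) if s < t and y == 0)
        fp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 0)
        sens = tp / (tp + fn) if tp + fn else 0.0
        spec = tn / (tn + fp) if tn + fp else 0.0
        j = sens + spec - 1.0
        if j > best_j:
            best_t, best_j = t, j
    return best_t, best_j
```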
Affiliation(s)
- Yeon Woong Chung: Department of Ophthalmology and Visual Science, St. Vincent's Hospital, College of Medicine, The Catholic University of Korea, Seoul, Republic of Korea; Department of Medical Informatics, College of Medicine, The Catholic University of Korea, Banpo Dae-Ro 222, Seoul, 06591, Republic of Korea
- In Young Choi: Department of Medical Informatics, College of Medicine, The Catholic University of Korea, Banpo Dae-Ro 222, Seoul, 06591, Republic of Korea
6. Taye M, Morrow D, Cull J, Smith DH, Hagan M. Deep Learning for FAST Quality Assessment. J Ultrasound Med 2023; 42:71-79. PMID: 35770928. DOI: 10.1002/jum.16045.
Abstract
OBJECTIVES: To determine the feasibility of using a deep learning (DL) algorithm to assess the quality of focused assessment with sonography in trauma (FAST) exams.
METHODS: Our dataset consists of 441 FAST exams, classified as good-quality or poor-quality, with 3161 videos. We first used convolutional neural networks (CNNs), pretrained on the ImageNet dataset and fine-tuned on the FAST dataset. Second, we trained a CNN autoencoder to compress FAST images at a 20:1 compression ratio. The compressed codes were input to a two-layer classifier network. To train the networks, each video was labeled with the quality of the exam, and the frames were labeled with the quality of the video. For inference, a video was classified as poor-quality if half its frames were classified as poor-quality by the network, and an exam was classified as poor-quality if half its videos were classified as poor-quality.
RESULTS: The encoder-classifier networks performed much better than transfer learning with pretrained CNNs, primarily because the ImageNet dataset is not a good match for the ultrasound quality assessment problem. The DL models produced video sensitivities and specificities of 99% and 98% on held-out test sets.
CONCLUSIONS: Using an autoencoder to compress FAST images is a very effective way to obtain features that can be used to predict exam quality. These features are more suitable than those obtained from CNNs pretrained on ImageNet.
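The frame-to-video-to-exam voting rule described above can be sketched as follows; reading "half" as "at least half" is our assumption:

```python
# Hierarchical majority vote: a video is poor-quality when at least half of
# its frame predictions are poor, and an exam is poor-quality when at least
# half of its videos are poor. frame_preds is a list of videos, each a list
# of per-frame booleans (True = frame predicted poor-quality).

def video_is_poor(frame_flags):
    return sum(frame_flags) >= len(frame_flags) / 2

def exam_is_poor(frame_preds):
    video_flags = [video_is_poor(v) for v in frame_preds]
    return sum(video_flags) >= len(video_flags) / 2
```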
Affiliation(s)
- Mesfin Taye: School of Electrical and Computer Engineering, Oklahoma State University, Stillwater, OK, USA; IBM Cloud, IBM, Armonk, New York, USA
- Dustin Morrow: Department of Emergency Medicine (Division Chief of Emergency Ultrasound), Prisma Health, University of South Carolina School of Medicine Greenville, Greenville, SC, USA
- John Cull: Prisma Health, University of South Carolina School of Medicine-Greenville, Greenville, SC, USA
- Dane Hudson Smith: Holcombe Department of Electrical Engineering, Watt Family Innovation Center, Clemson University, Clemson, SC, USA
- Martin Hagan: School of Electrical and Computer Engineering, Oklahoma State University, Stillwater, OK, USA
7. Kimura BJ, Resnikoff PM, Tran EM, Bonagiri PR, Spierling Bagsic SR. Simplified Lung Ultrasound Examination and Telehealth Feasibility in Early SARS-CoV-2 Infection. J Am Soc Echocardiogr 2022; 35:1047-1054. PMID: 35691456. PMCID: PMC9183238. DOI: 10.1016/j.echo.2022.05.015.
Abstract
BACKGROUND: In COVID-19, inpatient studies have demonstrated that lung ultrasound B-lines relate to disease severity and mortality and can occur in apical regions that can be imaged by patients themselves. However, as illness begins in an ambulatory setting, the aim of this study was to determine the prevalence of apical B-lines in early outpatient infection and then test the accuracy of their detection using telehealth and automated methods.
METHODS: Consecutive adult patients (N = 201) with positive results for SARS-CoV-2, at least one clinical risk factor, and mild to moderate disease were prospectively enrolled at a monoclonal antibody infusion clinic. Physician imaging of the lung apices for three B-lines (ultrasound lung comet [ULC]) using 3-MHz ultrasound was performed on all patients for prevalence data and served as the standard for a nested subset (n = 50) to test the accuracy of telehealth methods, including patient self-imaging and automated B-line detection. Patient characteristics, vaccination data, and hospitalizations were analyzed for associations with the presence of ULC.
RESULTS: Patients' mean age was 54 ± 15 years, and all lacked hypoxemia or fever. ULC was present in 55 of 201 patients (27%) at a median of 7 symptomatic days (interquartile range, 5-8 days) and in four of five patients who were later hospitalized (P = .03). Presence of ULC was associated with unvaccinated status (odds ratio [OR], 4.11; 95% CI, 1.85-9.33; P = .001), diabetes (OR, 2.56; 95% CI, 1.08-6.05; P = .03), male sex (OR, 2.14; 95% CI, 1.07-4.37; P = .03), and hypertension or cardiovascular disease (OR, 2.06; 95% CI, 1.02-4.23; P = .04), while adjusting for body mass index > 25 kg/m2. Telehealth and automated B-line detection had 84% and 82% accuracy, respectively.
CONCLUSIONS: In high-risk outpatients, B-lines in the upper lungs were common in early SARS-CoV-2 infection, were related to subsequent hospitalization, and could be detected by telehealth and automated methods.
Affiliation(s)
- Bruce J Kimura: Department of Medicine, Scripps Mercy Hospital, San Diego, California
- Eric M Tran: Department of Medicine, Scripps Mercy Hospital, San Diego, California
- Pranay R Bonagiri: Department of Medicine, Scripps Mercy Hospital, San Diego, California
- Samantha R Spierling Bagsic: Department of Medicine, Scripps Mercy Hospital, San Diego, California; Scripps Whittier Diabetes Institute, Scripps Health, San Diego, California
8. Analysis of facial ultrasonography images based on deep learning. Sci Rep 2022; 12:16480. PMID: 36182939. PMCID: PMC9526737. DOI: 10.1038/s41598-022-20969-z.
Abstract
Transfer learning with a model pre-trained on the ImageNet database is frequently used when obtaining large datasets in the medical imaging field is challenging. We estimated the value of deep learning for facial ultrasound (US) images by assessing the classification performance of current representative deep learning models on facial US images through transfer learning and analyzing their classification criteria. For this clinical study, we recruited 86 individuals from whom we acquired ultrasound images of nine facial regions. To classify these facial regions, 15 deep learning models were trained using augmented or non-augmented datasets, and their performance was evaluated. The average F-measure score across all models was about 93% regardless of augmentation of the dataset, and the best-performing models were the classic VGG networks. The models treated the contours of skin and bones, rather than muscles and blood vessels, as the distinguishing features of regions in facial US images. The results of this study can serve as reference data for future deep learning research on facial US images and content development.
9. Blaivas M, Blaivas LN, Campbell K, Thomas J, Shah S, Yadav K, Liu YT. Making Artificial Intelligence Lemonade Out of Data Lemons: Adaptation of a Public Apical Echo Database for Creation of a Subxiphoid Visual Estimation Automatic Ejection Fraction Machine Learning Algorithm. J Ultrasound Med 2022; 41:2059-2069. PMID: 34820867. DOI: 10.1002/jum.15889.
Abstract
OBJECTIVES: A paucity of point-of-care ultrasound (POCUS) databases limits machine learning (ML). We assessed the feasibility of training ML algorithms to visually estimate left ventricular ejection fraction (EF) from a subxiphoid (SX) window using only apical 4-chamber (A4C) images.
METHODS: Researchers used a long short-term memory (LSTM) algorithm for image analysis. Using the Stanford EchoNet-Dynamic database of 10,036 A4C videos with exact calculated EF, researchers tested three ML training permutations: first, training on unaltered Stanford A4C videos; then on unaltered plus 90° clockwise (CW) rotated videos; and finally on unaltered, 90° CW rotated, and horizontally flipped videos. As a real-world test, we obtained 615 SX videos from Harbor-UCLA (HUCLA) with EF calculations in 5% ranges. Researchers performed 1000 randomizations of EF point estimation within the HUCLA EF ranges to compensate for the mismatch between ML point estimates and HUCLA ranges, obtaining a mean absolute error (MAE) for comparison, and performed Bland-Altman analyses.
RESULTS: The ML algorithm's mean MAE was 23.0 (range 22.8-23.3) with unaltered A4C training video, 16.7 (range 16.5-16.9) with unaltered plus 90° CW rotated video, and 16.6 (range 16.3-16.8) with unaltered, 90° CW rotated, and horizontally flipped video. Bland-Altman analysis showed the weakest agreement at 40-45% EF.
CONCLUSIONS: Researchers successfully adapted unrelated ultrasound window data to train a POCUS ML algorithm with fair MAE, using data manipulation to simulate a different ultrasound examination. This may be important for future POCUS algorithm design to help overcome the paucity of POCUS databases.
Affiliation(s)
- Michael Blaivas: Department of Medicine, University of South Carolina School of Medicine, Columbia, SC, USA; Department of Emergency Medicine, St. Francis Hospital, Columbus, GA, USA
- Kendra Campbell: Department of Emergency Medicine, Harbor-UCLA Medical Center, Torrance, CA, USA
- Joseph Thomas: Department of Cardiology, Harbor-UCLA Medical Center, Torrance, CA, USA; David Geffen School of Medicine at UCLA, Los Angeles, CA, USA
- Sonia Shah: Department of Cardiology, Harbor-UCLA Medical Center, Torrance, CA, USA; David Geffen School of Medicine at UCLA, Los Angeles, CA, USA
- Kabir Yadav: Department of Emergency Medicine, Harbor-UCLA Medical Center, Torrance, CA, USA; David Geffen School of Medicine at UCLA, Los Angeles, CA, USA
- Yiju Teresa Liu: Department of Emergency Medicine, Harbor-UCLA Medical Center, Torrance, CA, USA; David Geffen School of Medicine at UCLA, Los Angeles, CA, USA
10. Jin G, Jiao Y, Wang J, Ma M, Song Q. Improving the performance of deep learning-based classification when a sample has various appearances. J Exp Theor Artif Intell 2022. DOI: 10.1080/0952813x.2022.2092558.
Affiliation(s)
- Guanghao Jin: School of Telecommunication Engineering, Beijing Polytechnic, Beijing, China
- Yuming Jiao: School of Computer Science and Technology, Tiangong University, Tianjin, China
- Jianming Wang: School of Computer Science and Technology, Tiangong University, Tianjin, China
- Ming Ma: School of Computer Science and Technology, Tiangong University, Tianjin, China
- Qingzeng Song: School of Computer Science and Technology, Tiangong University, Tianjin, China