1
Freitas J, Gomes-Fonseca J, Tonelli AC, Correia-Pinto J, Fonseca JC, Queirós S. Automatic multi-view pose estimation in focused cardiac ultrasound. Med Image Anal 2024; 94:103146. [PMID: 38537416] [DOI: 10.1016/j.media.2024.103146]
Abstract
Focused cardiac ultrasound (FoCUS) is a valuable point-of-care method for evaluating cardiovascular structures and function, but its scope is limited by equipment and operator's experience, resulting in primarily qualitative 2D exams. This study presents a novel framework to automatically estimate the 3D spatial relationship between standard FoCUS views. The proposed framework uses a multi-view U-Net-like fully convolutional neural network to regress line-based heatmaps representing the most likely areas of intersection between input images. The lines that best fit the regressed heatmaps are then extracted, and a system of nonlinear equations based on the intersection between view triplets is created and solved to determine the relative 3D pose between all input images. The feasibility and accuracy of the proposed pipeline were validated using a novel realistic in silico FoCUS dataset, demonstrating promising results. Interestingly, as shown in preliminary experiments, the estimation of the 2D images' relative poses enables the application of 3D image analysis methods and paves the way for 3D quantitative assessments in FoCUS examinations.
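As a sanity check on the geometry this framework exploits — two non-parallel image planes always meet along a 3D line — the forward model can be sketched in a few lines of Python (an illustrative helper, not the authors' implementation; plane poses are assumed known here, whereas the paper estimates them from the regressed intersection lines):

```python
import math

def cross(a, b):
    """Cross product of two 3-vectors."""
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def plane_intersection(p1, n1, p2, n2):
    """Intersection line of two planes, each given as (point, unit normal).
    Returns (point_on_line, unit_direction), or None if the planes are parallel."""
    d = cross(n1, n2)                       # line direction
    norm = math.sqrt(dot(d, d))
    if norm < 1e-9:
        return None                         # parallel planes: no unique line
    # Closed form for a point satisfying both plane equations n.x = c.
    c1, c2 = dot(n1, p1), dot(n2, p2)
    n1n2 = dot(n1, n2)
    det = 1.0 - n1n2 ** 2
    k1 = (c1 - c2 * n1n2) / det
    k2 = (c2 - c1 * n1n2) / det
    point = tuple(k1 * n1[i] + k2 * n2[i] for i in range(3))
    return point, tuple(x / norm for x in d)
```

Inverting this relation over all view triplets is what the paper's system of nonlinear equations does.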
Affiliation(s)
- João Freitas
- Life and Health Sciences Research Institute (ICVS), School of Medicine, University of Minho, Braga, Portugal; ICVS/3B's - PT Government Associate Laboratory, Braga/Guimarães, Portugal; Algoritmi Center, School of Engineering, University of Minho, Guimarães, Portugal
- João Gomes-Fonseca
- Life and Health Sciences Research Institute (ICVS), School of Medicine, University of Minho, Braga, Portugal; ICVS/3B's - PT Government Associate Laboratory, Braga/Guimarães, Portugal
- Jorge Correia-Pinto
- Life and Health Sciences Research Institute (ICVS), School of Medicine, University of Minho, Braga, Portugal; ICVS/3B's - PT Government Associate Laboratory, Braga/Guimarães, Portugal; Department of Pediatric Surgery, Hospital de Braga, Braga, Portugal
- Jaime C Fonseca
- Algoritmi Center, School of Engineering, University of Minho, Guimarães, Portugal
- Sandro Queirós
- Life and Health Sciences Research Institute (ICVS), School of Medicine, University of Minho, Braga, Portugal; ICVS/3B's - PT Government Associate Laboratory, Braga/Guimarães, Portugal
2
Olaisen S, Smistad E, Espeland T, Hu J, Pasdeloup D, Østvik A, Aakhus S, Rösner A, Malm S, Stylidis M, Holte E, Grenne B, Løvstakken L, Dalen H. Automatic measurements of left ventricular volumes and ejection fraction by artificial intelligence: clinical validation in real time and large databases. Eur Heart J Cardiovasc Imaging 2024; 25:383-395. [PMID: 37883712] [PMCID: PMC11024810] [DOI: 10.1093/ehjci/jead280]
Abstract
AIMS Echocardiography is a cornerstone in cardiac imaging, and left ventricular (LV) ejection fraction (EF) is a key parameter for patient management. Recent advances in artificial intelligence (AI) have enabled fully automatic measurements of LV volumes and EF both during scanning and in stored recordings. The aim of this study was to evaluate the impact of implementing AI measurements on acquisition and processing time and test-retest reproducibility compared with standard clinical workflow, as well as to study the agreement with reference in large internal and external databases. METHODS AND RESULTS Fully automatic measurements of LV volumes and EF by a novel AI software were compared with manual measurements in the following clinical scenarios: (i) in real time use during scanning of 50 consecutive patients, (ii) in 40 subjects with repeated echocardiographic examinations and manual measurements by 4 readers, and (iii) in large internal and external research databases of 1881 and 849 subjects, respectively. Real-time AI measurements significantly reduced the total acquisition and processing time by 77% (median 5.3 min, P < 0.001) compared with standard clinical workflow. Test-retest reproducibility of AI measurements was superior in inter-observer scenarios and non-inferior in intra-observer scenarios. AI measurements showed good agreement with reference measurements both in real time and in large research databases. CONCLUSION The software reduced the time taken to perform and volumetrically analyse routine echocardiograms without a decrease in accuracy compared with experts.
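Agreement and bias figures of the kind reported here (AI vs. reference measurements, test-retest) are conventionally summarized with Bland-Altman statistics; a minimal sketch, not taken from the study's analysis code:

```python
from statistics import mean, stdev

def bland_altman(a, b):
    """Bias (mean difference) and 95% limits of agreement between
    two paired measurement series, e.g. AI EF vs. manual EF."""
    diffs = [x - y for x, y in zip(a, b)]
    bias = mean(diffs)
    sd = stdev(diffs)                       # sample standard deviation
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)
```

The limits of agreement bound where ~95% of paired differences are expected to fall if the differences are approximately normal.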
Affiliation(s)
- Sindre Olaisen
- Centre for Innovative Ultrasound Solutions, Department of Circulation and Medical Imaging, Norwegian University of Science and Technology, Prinsesse Kristinas Gate 3, 7030 Trondheim, Norway
- Erik Smistad
- Centre for Innovative Ultrasound Solutions, Department of Circulation and Medical Imaging, Norwegian University of Science and Technology, Prinsesse Kristinas Gate 3, 7030 Trondheim, Norway
- Medical Image Analysis, Health Research, SINTEF Digital, Trondheim, Norway
- Torvald Espeland
- Centre for Innovative Ultrasound Solutions, Department of Circulation and Medical Imaging, Norwegian University of Science and Technology, Prinsesse Kristinas Gate 3, 7030 Trondheim, Norway
- Clinic of Cardiology, St. Olavs Hospital, Trondheim University Hospital, Prinsesse Kristinas Gate 3, 7030 Trondheim, Norway
- Jieyu Hu
- Centre for Innovative Ultrasound Solutions, Department of Circulation and Medical Imaging, Norwegian University of Science and Technology, Prinsesse Kristinas Gate 3, 7030 Trondheim, Norway
- David Pasdeloup
- Centre for Innovative Ultrasound Solutions, Department of Circulation and Medical Imaging, Norwegian University of Science and Technology, Prinsesse Kristinas Gate 3, 7030 Trondheim, Norway
- Andreas Østvik
- Centre for Innovative Ultrasound Solutions, Department of Circulation and Medical Imaging, Norwegian University of Science and Technology, Prinsesse Kristinas Gate 3, 7030 Trondheim, Norway
- Medical Image Analysis, Health Research, SINTEF Digital, Trondheim, Norway
- Svend Aakhus
- Centre for Innovative Ultrasound Solutions, Department of Circulation and Medical Imaging, Norwegian University of Science and Technology, Prinsesse Kristinas Gate 3, 7030 Trondheim, Norway
- Clinic of Cardiology, St. Olavs Hospital, Trondheim University Hospital, Prinsesse Kristinas Gate 3, 7030 Trondheim, Norway
- Assami Rösner
- Department of Cardiology, University Hospital of North Norway, Tromsø, Norway
- Institute for Clinical Medicine, UiT, The Arctic University of Norway, Tromsø, Norway
- Siri Malm
- Institute for Clinical Medicine, UiT, The Arctic University of Norway, Tromsø, Norway
- Department of Cardiology, University Hospital of North Norway, UNN Harstad, Tromsø, Norway
- Michael Stylidis
- Department of Cardiology, University Hospital of North Norway, Tromsø, Norway
- Department of Community Medicine, UiT, The Arctic University of Norway, Tromsø, Norway
- Espen Holte
- Centre for Innovative Ultrasound Solutions, Department of Circulation and Medical Imaging, Norwegian University of Science and Technology, Prinsesse Kristinas Gate 3, 7030 Trondheim, Norway
- Clinic of Cardiology, St. Olavs Hospital, Trondheim University Hospital, Prinsesse Kristinas Gate 3, 7030 Trondheim, Norway
- Bjørnar Grenne
- Centre for Innovative Ultrasound Solutions, Department of Circulation and Medical Imaging, Norwegian University of Science and Technology, Prinsesse Kristinas Gate 3, 7030 Trondheim, Norway
- Clinic of Cardiology, St. Olavs Hospital, Trondheim University Hospital, Prinsesse Kristinas Gate 3, 7030 Trondheim, Norway
- Lasse Løvstakken
- Centre for Innovative Ultrasound Solutions, Department of Circulation and Medical Imaging, Norwegian University of Science and Technology, Prinsesse Kristinas Gate 3, 7030 Trondheim, Norway
- Håvard Dalen
- Centre for Innovative Ultrasound Solutions, Department of Circulation and Medical Imaging, Norwegian University of Science and Technology, Prinsesse Kristinas Gate 3, 7030 Trondheim, Norway
- Clinic of Cardiology, St. Olavs Hospital, Trondheim University Hospital, Prinsesse Kristinas Gate 3, 7030 Trondheim, Norway
- Department of Medicine, Levanger Hospital, Nord-Trøndelag Hospital Trust, Kirkegata 2, 7600 Levanger, Norway
3
Fazlalizadeh H, Khan MS, Fox ER, Douglas PS, Adams D, Blaha MJ, Daubert MA, Dunn G, van den Heuvel E, Kelsey MD, Martin RP, Thomas JD, Thomas Y, Judd SE, Vasan RS, Budoff MJ, Bloomfield GS. Closing the Last Mile Gap in Access to Multimodality Imaging in Rural Settings: Design of the Imaging Core of the Risk Underlying Rural Areas Longitudinal Study. Circ Cardiovasc Imaging 2024; 17:e015496. [PMID: 38377236] [PMCID: PMC10883604] [DOI: 10.1161/circimaging.123.015496]
Abstract
Achieving optimal cardiovascular health in rural populations can be challenging for several reasons including decreased access to care with limited availability of imaging modalities, specialist physicians, and other important health care team members. Therefore, innovative solutions are needed to optimize health care and address cardiovascular health disparities in rural areas. Mobile examination units can bring imaging technology to underserved or remote communities with limited access to health care services. Mobile examination units can be equipped with a wide array of assessment tools and multiple imaging modalities such as computed tomography scanning and echocardiography. The detailed structural assessment of cardiovascular and lung pathology, as well as the detection of extracardiac pathology afforded by computed tomography imaging combined with the functional and hemodynamic assessments acquired by echocardiography, yield deep phenotyping of heart and lung disease for populations historically underrepresented in epidemiological studies. Moreover, by bringing the mobile examination unit to local communities, innovative approaches are now possible including engagement with local professionals to perform these imaging assessments, thereby augmenting local expertise and experience. However, several challenges exist before mobile examination unit-based examinations can be effectively integrated into the rural health care setting including standardizing acquisition protocols, maintaining consistent image quality, and addressing ethical and privacy considerations. Herein, we discuss the potential importance of cardiac multimodality imaging to improve cardiovascular health in rural regions, outline the emerging experience in this field, highlight important current challenges, and offer solutions based on our experience in the RURAL (Risk Underlying Rural Areas Longitudinal) cohort study.
Affiliation(s)
- Hooman Fazlalizadeh
- Lundquist Institute, Harbor-University of California Los Angeles Medical Center, Torrance (H.F., M.J.B.)
- Muhammad Shahzeb Khan
- Division of Cardiology, Department of Medicine (M.S.K., P.S.D., M.A.D., M.D.K., G.S.B.), Duke University, Durham, NC
- Ervin R Fox
- Division of Cardiology, Department of Medicine, University of Mississippi Medical Center, Jackson, MS (E.R.F.)
- Pamela S Douglas
- Division of Cardiology, Department of Medicine (M.S.K., P.S.D., M.A.D., M.D.K., G.S.B.), Duke University, Durham, NC
- Duke Clinical Research Institute (P.S.D., M.A.D., G.D., M.D.K., G.S.B.), Duke University, Durham, NC
- David Adams
- Caption Health, Inc, San Francisco, CA (D.A., R.P.M., Y.T.)
- Michael J Blaha
- Lundquist Institute, Harbor-University of California Los Angeles Medical Center, Torrance (H.F., M.J.B.)
- Melissa A Daubert
- Division of Cardiology, Department of Medicine (M.S.K., P.S.D., M.A.D., M.D.K., G.S.B.), Duke University, Durham, NC
- Duke Clinical Research Institute (P.S.D., M.A.D., G.D., M.D.K., G.S.B.), Duke University, Durham, NC
- Gary Dunn
- Duke Clinical Research Institute (P.S.D., M.A.D., G.D., M.D.K., G.S.B.), Duke University, Durham, NC
- Edwin van den Heuvel
- Department of Mathematics and Computer Science, Eindhoven University of Technology, The Netherlands (E.v.d.H.)
- Michelle D Kelsey
- Division of Cardiology, Department of Medicine (M.S.K., P.S.D., M.A.D., M.D.K., G.S.B.), Duke University, Durham, NC
- Duke Clinical Research Institute (P.S.D., M.A.D., G.D., M.D.K., G.S.B.), Duke University, Durham, NC
- James D Thomas
- Division of Cardiology, Department of Medicine, Northwestern University Feinberg School of Medicine, Chicago, IL (J.D.T.)
- Center for Artificial Intelligence, Northwestern Medicine Bluhm Cardiovascular Institute, Chicago, IL (J.D.T.)
- Yngvil Thomas
- Caption Health, Inc, San Francisco, CA (D.A., R.P.M., Y.T.)
- Suzanne E Judd
- Department of Biostatistics, University of Alabama at Birmingham (S.E.J.)
- Ramachandran S Vasan
- University of Texas Health Sciences Center, University of Texas School of Public Health, San Antonio (R.S.V.)
- Matthew J Budoff
- Division of Cardiology, Johns Hopkins University School of Medicine, Baltimore, MD (M.J.B.)
- Gerald S Bloomfield
- Division of Cardiology, Department of Medicine (M.S.K., P.S.D., M.A.D., M.D.K., G.S.B.), Duke University, Durham, NC
- Duke Clinical Research Institute (P.S.D., M.A.D., G.D., M.D.K., G.S.B.), Duke University, Durham, NC
- Duke Global Health Institute (G.S.B.), Duke University, Durham, NC
4
Zha SZ, Rogstadkjernet M, Klæboe LG, Skulstad H, Singstad BJ, Gilbert A, Edvardsen T, Samset E, Brekke PH. Deep learning for automated left ventricular outflow tract diameter measurements in 2D echocardiography. Cardiovasc Ultrasound 2023; 21:19. [PMID: 37833731] [PMCID: PMC10571406] [DOI: 10.1186/s12947-023-00317-5]
Abstract
BACKGROUND Measurement of the left ventricular outflow tract diameter (LVOTd) in echocardiography is a common source of error when used to calculate the stroke volume. The aim of this study is to assess whether a deep learning (DL) model, trained on a clinical echocardiographic dataset, can perform automatic LVOTd measurements on par with expert cardiologists. METHODS Data consisted of 649 consecutive transthoracic echocardiographic examinations of patients with coronary artery disease admitted to a university hospital. A total of 1304 LVOTd measurements in the parasternal long-axis (PLAX) and zoomed parasternal long-axis (ZPLAX) views were collected, with each patient having 1-6 measurements per examination. Data quality control was performed by an expert cardiologist, and spatial geometry data were preserved for each LVOTd measurement to convert DL predictions into metric units. A convolutional neural network based on the U-Net was used as the DL model. RESULTS The mean absolute LVOTd error was 1.04 (95% confidence interval [CI] 0.90-1.19) mm for DL predictions on the test set. The mean relative LVOTd errors across all data subgroups ranged from 3.8 to 5.1% for the test set. Generally, the DL model performed better on the ZPLAX view than on the PLAX view. DL model precision for patients with repeated LVOTd measurements had a mean coefficient of variation of 2.2 (95% CI 1.6-2.7)%, comparable to that of the clinicians on the test set. CONCLUSION DL for automatic LVOTd measurement in PLAX and ZPLAX views is feasible when trained on a limited clinical dataset. While the DL-predicted LVOTd measurements were within the expected range of clinical inter-observer variability, the robustness of the DL model requires validation on independent datasets. Future experiments using temporal information and anatomical constraints could improve valvular identification and reduce outliers, challenges that must be addressed before clinical use.
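The abstract notes that spatial geometry data were kept so pixel-space predictions could be converted to metric units; for two predicted LVOT border landmarks this conversion reduces to a pixel-spacing-scaled Euclidean distance. A hypothetical helper (the landmark format and spacing layout are assumptions, not the paper's code):

```python
import math

def lvot_diameter_mm(p1_px, p2_px, spacing_mm):
    """Distance between two predicted LVOT border points, converted from
    pixel indices (row, col) to millimetres via the image's per-axis
    pixel spacing (row_mm_per_px, col_mm_per_px)."""
    dr = (p1_px[0] - p2_px[0]) * spacing_mm[0]
    dc = (p1_px[1] - p2_px[1]) * spacing_mm[1]
    return math.hypot(dr, dc)
```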
Affiliation(s)
- Helge Skulstad
- University of Oslo, Oslo, Norway
- Oslo University Hospital, Rikshospitalet, Oslo, Norway
- Thor Edvardsen
- University of Oslo, Oslo, Norway
- Oslo University Hospital, Rikshospitalet, Oslo, Norway
- Eigil Samset
- University of Oslo, Oslo, Norway
- GE HealthCare, Oslo, Norway
5
Zhao Y, Wang X, Che T, Bao G, Li S. Multi-task deep learning for medical image computing and analysis: A review. Comput Biol Med 2023; 153:106496. [PMID: 36634599] [DOI: 10.1016/j.compbiomed.2022.106496]
Abstract
The renaissance of deep learning has provided promising solutions to various tasks. While conventional deep learning models are built for a single specific task, multi-task deep learning (MTDL), which can accomplish at least two tasks simultaneously, has attracted research attention. MTDL is a joint learning paradigm that harnesses the inherent correlation of multiple related tasks to achieve reciprocal benefits: improved performance, enhanced generalizability, and reduced overall computational cost. This review focuses on advanced applications of MTDL in medical image computing and analysis. We first summarize four popular MTDL network architectures (i.e., cascaded, parallel, interacted, and hybrid). Then, we review representative MTDL-based networks for eight application areas, including the brain, eye, chest, cardiac, abdomen, musculoskeletal, pathology, and other human body regions. While MTDL-based medical image processing has flourished and demonstrated outstanding performance in many tasks, performance gaps remain in others, and accordingly we outline the open challenges and prospective trends. For instance, in the 2018 Ischemic Stroke Lesion Segmentation challenge, the reported top Dice score of 0.51 and top recall of 0.55 achieved by a cascaded MTDL model indicate that further research is needed to improve the performance of current models.
Affiliation(s)
- Yan Zhao
- Beijing Advanced Innovation Center for Biomedical Engineering, School of Biological Science and Medical Engineering, Beihang University, Beijing, 100083, China
- Xiuying Wang
- School of Computer Science, The University of Sydney, Sydney, NSW, 2008, Australia
- Tongtong Che
- Beijing Advanced Innovation Center for Biomedical Engineering, School of Biological Science and Medical Engineering, Beihang University, Beijing, 100083, China
- Guoqing Bao
- School of Computer Science, The University of Sydney, Sydney, NSW, 2008, Australia
- Shuyu Li
- State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, 100875, China
6
Wei H, Ma J, Zhou Y, Xue W, Ni D. Co-learning of appearance and shape for precise ejection fraction estimation from echocardiographic sequences. Med Image Anal 2023; 84:102686. [PMID: 36455332] [DOI: 10.1016/j.media.2022.102686]
Abstract
Accurate estimation of ejection fraction (EF) from echocardiography is of great importance for evaluating cardiac function. EF is usually obtained by Simpson's biplane method based on segmentation of the left ventricle (LV) in two keyframes. However, obtaining accurate EF estimates from echocardiography is challenging due to (1) the noisy appearance of ultrasound images, (2) the temporal dynamic movement of the myocardium, (3) sparse annotation of the full sequence, and (4) potential quality degradation during scanning. In this paper, we propose a multi-task semi-supervised framework, denoted MCLAS, for precise EF estimation from echocardiographic sequences of two cardiac views. Specifically, we first propose a co-learning mechanism that iteratively explores the mutual benefits of cardiac segmentation and myocardium tracking at the appearance and shape levels, thereby alleviating the noisy appearance and enforcing temporal consistency of the segmentation results. This temporal consistency, as shown in our work, is critical for precise EF estimation. We then propose two auxiliary tasks for the encoder: (1) view classification, to help extract discriminative features for each view and automate the whole EF estimation pipeline in clinical practice, and (2) EF regression, to help regularize the spatiotemporal embedding of the echocardiographic sequence. Both auxiliary tasks improve segmentation-based EF prediction, especially for sequences of poor quality. Our method automates the whole pipeline of EF estimation, from view identification and cardiac structure segmentation to EF calculation. Its effectiveness is validated in terms of segmentation, tracking, consistency analysis, and clinical parameter estimation. Compared with existing methods, our method shows clear superiority for LV volumes at the ED and ES phases and for EF estimation, with Pearson correlations of 0.975, 0.983, and 0.946, respectively. This is a significant improvement for echocardiography-based EF estimation and increases the potential of automated EF estimation in clinical practice. Moreover, our method obtains accurate and temporally consistent segmentations for the in-between frames, enabling evaluation of dynamic cardiac function.
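The pipeline described above ends in a Simpson's biplane (method-of-disks) calculation. A minimal sketch of that final step, assuming per-disk diameters have already been extracted from the two-view segmentations (illustrative only, not the MCLAS code):

```python
import math

def simpson_biplane_volume(d4c, d2c, length):
    """Modified Simpson's rule: LV volume from paired orthogonal disk
    diameters (apical 4- and 2-chamber views) and the LV long-axis length.
    Each disk is an ellipse of area pi*a*b/4 with thickness length/n."""
    n = len(d4c)
    h = length / n
    return sum(math.pi * a * b / 4.0 * h for a, b in zip(d4c, d2c))

def ejection_fraction(edv, esv):
    """EF (%) from end-diastolic and end-systolic volumes."""
    return 100.0 * (edv - esv) / edv
```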
Affiliation(s)
- Hongrong Wei
- School of Biomedical Engineering, Health Science Center, Shenzhen University, China; National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, China; Medical Ultrasound Image Computing (MUSIC) Lab, Shenzhen University, China
- Junqiang Ma
- School of Biomedical Engineering, Health Science Center, Shenzhen University, China; National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, China; Medical Ultrasound Image Computing (MUSIC) Lab, Shenzhen University, China
- Yongjin Zhou
- School of Biomedical Engineering, Health Science Center, Shenzhen University, China; National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, China
- Wufeng Xue
- School of Biomedical Engineering, Health Science Center, Shenzhen University, China; National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, China; Medical Ultrasound Image Computing (MUSIC) Lab, Shenzhen University, China
- Dong Ni
- School of Biomedical Engineering, Health Science Center, Shenzhen University, China; National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, China; Medical Ultrasound Image Computing (MUSIC) Lab, Shenzhen University, China
7
Shaikh F, Kenny JE, Awan O, Markovic D, Friedman O, He T, Singh S, Yan P, Qadir N, Barjaktarevic I. Measuring the accuracy of cardiac output using POCUS: the introduction of artificial intelligence into routine care. Ultrasound J 2022; 14:47. [DOI: 10.1186/s13089-022-00301-6]
Abstract
Background
Shock management requires quick and reliable means to monitor the hemodynamic effects of fluid resuscitation. Point-of-care ultrasound (POCUS) is a relatively quick and non-invasive imaging technique capable of capturing cardiac output (CO) variations in acute settings. However, POCUS is limited by variable operator skill and interpretation. Artificial intelligence may assist healthcare professionals in obtaining more objective and precise measurements during ultrasound imaging, thus increasing usability among users with varying experience. In this feasibility study, we compared the performance of novice POCUS users measuring CO manually against a novel automation-assisted technique that provides real-time feedback to correct image acquisition for optimal aortic outflow velocity measurement.
Methods
28 junior critical care trainees with limited experience in POCUS performed manual and automation-assisted CO measurements on a single healthy volunteer. CO measurements were obtained using left ventricular outflow tract (LVOT) velocity time integral (VTI) and LVOT diameter. Measurements obtained by study subjects were compared to those taken by board-certified echocardiographers. Comparative analyses were performed using Spearman’s rank correlation and Bland–Altman matched-pairs analysis.
Results
Adequate image acquisition was 100% feasible. The correlation between manual and automated VTI values was not significant (p = 0.11), and the means of both groups underestimated the mean values obtained by board-certified echocardiographers. Automated VTI measurements in the trainee cohort showed greater reproducibility than manual measurements, with a narrower measurement range (6.2 vs. 10.3 cm) and a smaller standard deviation (1.98 vs. 2.33 cm). The coefficient of variation across raters was 11.5%, 13.6%, and 15.4% for board-certified echocardiographers, automated tracing, and manual VTI tracing, respectively.
Conclusions
Our study demonstrates that novel automation-assisted VTI is feasible and can decrease variability while increasing precision in CO measurement. These results support the use of artificial intelligence-augmented image acquisition in routine critical care ultrasound and may have a role for evaluating the response of CO to hemodynamic interventions. Further investigations into artificial intelligence-assisted ultrasound systems in clinical settings are warranted.
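The CO and coefficient-of-variation figures above follow standard Doppler formulas; a minimal sketch under the usual circular-LVOT assumption (illustrative helpers, not the study's software):

```python
import math
from statistics import mean, stdev

def cardiac_output_lpm(lvot_d_cm, vti_cm, hr_bpm):
    """CO (L/min) from the standard Doppler formula:
    stroke volume = LVOT cross-sectional area x VTI (circular LVOT assumed)."""
    area_cm2 = math.pi * (lvot_d_cm / 2.0) ** 2
    sv_ml = area_cm2 * vti_cm            # 1 cm^3 = 1 mL
    return sv_ml * hr_bpm / 1000.0       # mL/min -> L/min

def cov_percent(values):
    """Coefficient of variation across repeated measurements, in percent."""
    return 100.0 * stdev(values) / mean(values)
```

Because CO depends on the square of the LVOT diameter, small diameter errors are amplified, which is why both automated VTI tracing and careful LVOTd measurement matter.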
8
Awasthi N, Vermeer L, Fixsen LS, Lopata RGP, Pluim JPW. LVNet: Lightweight Model for Left Ventricle Segmentation for Short Axis Views in Echocardiographic Imaging. IEEE Trans Ultrason Ferroelectr Freq Control 2022; 69:2115-2128. [PMID: 35452387] [DOI: 10.1109/tuffc.2022.3169684]
Abstract
Lightweight segmentation models are becoming more popular for fast diagnosis on small, low-cost medical imaging devices. This study focuses on segmentation of the left ventricle (LV) in cardiac ultrasound (US) images. A new lightweight model, LV network (LVNet), is proposed for segmentation; it requires fewer parameters while improving segmentation performance in terms of Dice score (DS). The proposed model is compared with state-of-the-art methods such as UNet, MiniNetV2, and the fully convolutional dense dilated network (FCdDN), and comes with a post-processing pipeline that further enhances the segmentation results. In general, training is done directly using the segmentation mask as the output and the US image as the input of the model; a new training strategy for segmentation is also introduced in addition to this direct method. Compared with the UNet model, an improvement in DS as high as 5% was found for segmentation with papillary (WP) muscles included, and of 18.5% when the papillary muscles are excluded, while the proposed model requires only 5% of the memory of a UNet model. LVNet thus achieves a better trade-off between parameter count and segmentation performance than other conventional models. The code is available at https://github.com/navchetanawasthi/Left_Ventricle_Segmentation.
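The Dice score (DS) used throughout this comparison can be computed as follows (a generic sketch on binary masks, not code from LVNet):

```python
def dice_score(pred, truth):
    """Dice similarity coefficient between two binary masks
    (flattened 0/1 sequences of equal length): 2|A∩B| / (|A|+|B|)."""
    inter = sum(p & t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    # Convention: two empty masks agree perfectly.
    return 1.0 if total == 0 else 2.0 * inter / total
```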
9
Moal O, Roger E, Lamouroux A, Younes C, Bonnet G, Moal B, Lafitte S. Explicit and automatic ejection fraction assessment on 2D cardiac ultrasound with a deep learning-based approach. Comput Biol Med 2022; 146:105637. [PMID: 35617727] [DOI: 10.1016/j.compbiomed.2022.105637]
Abstract
BACKGROUND Ejection fraction (EF) is a key parameter for assessing cardiovascular function in cardiac ultrasound, but its manual assessment is time-consuming and subject to high inter- and intra-observer variability. Deep learning-based methods have the potential to perform accurate, fully automatic EF prediction but suffer from a lack of explainability and interpretability. This study proposes a fully automatic method to reliably and explicitly evaluate biplane left ventricular EF on 2D echocardiography following the recommended modified Simpson's rule. METHODS A deep learning model was trained on apical 4- and 2-chamber echocardiography to segment the left ventricle and locate the mitral valve. Predicted segmentations are then validated with a statistical shape model, which detects potential failures that could impact the EF evaluation. Finally, the end-diastolic and end-systolic frames are identified based on the areas of the remaining LV segmentations, and EF is estimated over all available cardiac cycles. RESULTS Our approach was trained on a dataset of 783 patients. Its performance was evaluated on internal and external datasets of 200 and 450 patients, respectively. On the internal dataset, EF assessment achieved a mean absolute error of 6.10% and a bias of 1.56 ± 7.58% using multiple cardiac cycles. On the external dataset, the approach evaluated EF with a mean absolute error of 5.39% and a bias of -0.74 ± 7.12%. CONCLUSION Following the recommended guidelines, we propose an end-to-end fully automatic approach that achieves state-of-the-art performance in biplane EF evaluation while giving explicit details to clinicians.
Affiliation(s)
- Guillaume Bonnet
- Hôpital Cardiologique Haut Lévêque, CHU de Bordeaux, CIC 0005, Pessac, France
- Stephane Lafitte
- Hôpital Cardiologique Haut Lévêque, CHU de Bordeaux, CIC 0005, Pessac, France
10
Hong W, Sheng Q, Dong B, Wu L, Chen L, Zhao L, Liu Y, Zhu J, Liu Y, Xie Y, Yu Y, Wang H, Yuan J, Ge T, Zhao L, Liu X, Zhang Y. Automatic Detection of Secundum Atrial Septal Defect in Children Based on Color Doppler Echocardiographic Images Using Convolutional Neural Networks. Front Cardiovasc Med 2022; 9:834285. [PMID: 35463790] [PMCID: PMC9019069] [DOI: 10.3389/fcvm.2022.834285]
Abstract
Secundum atrial septal defect (ASD) is one of the most common congenital heart diseases (CHDs). This study aims to evaluate the feasibility and accuracy of automatic detection of ASD in children based on color Doppler echocardiographic images using convolutional neural networks. In this study, we propose a fully automatic detection system for ASD, which includes three stages. The first stage identifies four target echocardiographic views (that is, the subcostal view focusing on the atrium septum, the apical four-chamber view, the low parasternal four-chamber view, and the parasternal short-axis view). These four echocardiographic views are the most useful for the clinical diagnosis of ASD. The second stage segments the target cardiac structure and detects candidates for ASD. The third stage infers the final detection by utilizing the segmentation and detection results of the second stage. The proposed ASD detection system was developed and validated using a training set of 4,031 cases containing 370,057 echocardiographic images and an independent test set of 229 cases containing 203,619 images, of which 105 cases had ASD and 124 had an intact atrial septum. Experimental results showed that the proposed ASD detection system achieved accuracy, recall, precision, specificity, and F1 score of 0.8833, 0.8545, 0.8577, 0.9136, and 0.8546, respectively, on image-level averages of the four most clinically useful echocardiographic views. The proposed system can automatically and accurately identify ASD, laying a good foundation for subsequent artificial intelligence diagnosis of CHDs.
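The image-level metrics quoted above all follow directly from the four confusion-matrix counts. As a point of reference (not the paper's evaluation code), a minimal sketch:

```python
def classification_metrics(tp, fp, tn, fn):
    """Standard binary-classification metrics from confusion-matrix counts:
    true/false positives (tp, fp) and true/false negatives (tn, fn)."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    recall = tp / (tp + fn)              # sensitivity
    precision = tp / (tp + fp)           # positive predictive value
    specificity = tn / (tn + fp)
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    return accuracy, recall, precision, specificity, f1
```

Averaging these per-image metrics over the four views would yield figures of the kind reported (0.8833 accuracy, 0.8546 F1, etc.).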
Affiliation(s)
- Wenjing Hong
- Department of Pediatric Cardiology, Shanghai Children’s Medical Center, School of Medicine, Shanghai Jiao Tong University, Shanghai, China
- Qiuyang Sheng
- Deepwise Artificial Intelligence Laboratory, Beijing, China
- Bin Dong
- Pediatric Artificial Intelligence Clinical Application and Research Center, Shanghai Children’s Medical Center, School of Medicine, Shanghai Jiao Tong University, Shanghai, China; Shanghai Engineering Research Center of Intelligence Pediatrics (SERCIP), Shanghai, China
- Lanping Wu
- Department of Pediatric Cardiology, Shanghai Children’s Medical Center, School of Medicine, Shanghai Jiao Tong University, Shanghai, China
- Lijun Chen
- Department of Pediatric Cardiology, Shanghai Children’s Medical Center, School of Medicine, Shanghai Jiao Tong University, Shanghai, China
- Leisheng Zhao
- Department of Pediatric Cardiology, Shanghai Children’s Medical Center, School of Medicine, Shanghai Jiao Tong University, Shanghai, China
- Yiqing Liu
- Department of Pediatric Cardiology, Shanghai Children’s Medical Center, School of Medicine, Shanghai Jiao Tong University, Shanghai, China
- Junxue Zhu
- Department of Pediatric Cardiology, Shanghai Children’s Medical Center, School of Medicine, Shanghai Jiao Tong University, Shanghai, China
- Yiman Liu
- Department of Pediatric Cardiology, Shanghai Children’s Medical Center, School of Medicine, Shanghai Jiao Tong University, Shanghai, China
- Yixin Xie
- Department of Pediatric Cardiology, Shanghai Children’s Medical Center, School of Medicine, Shanghai Jiao Tong University, Shanghai, China
- Yizhou Yu
- Deepwise Artificial Intelligence Laboratory, Beijing, China
- Hansong Wang
- Pediatric Artificial Intelligence Clinical Application and Research Center, Shanghai Children’s Medical Center, School of Medicine, Shanghai Jiao Tong University, Shanghai, China; Shanghai Engineering Research Center of Intelligence Pediatrics (SERCIP), Shanghai, China
- Jiajun Yuan
- Pediatric Artificial Intelligence Clinical Application and Research Center, Shanghai Children’s Medical Center, School of Medicine, Shanghai Jiao Tong University, Shanghai, China; Shanghai Engineering Research Center of Intelligence Pediatrics (SERCIP), Shanghai, China
- Tong Ge
- Pediatric Artificial Intelligence Clinical Application and Research Center, Shanghai Children’s Medical Center, School of Medicine, Shanghai Jiao Tong University, Shanghai, China; Shanghai Engineering Research Center of Intelligence Pediatrics (SERCIP), Shanghai, China
- Liebin Zhao
- Shanghai Engineering Research Center of Intelligence Pediatrics (SERCIP), Shanghai, China
- Xiaoqing Liu
- Deepwise Artificial Intelligence Laboratory, Beijing, China
- Yuqi Zhang
- Department of Pediatric Cardiology, Shanghai Children’s Medical Center, School of Medicine, Shanghai Jiao Tong University, Shanghai, China

11
Blaivas M, Blaivas L. Machine learning algorithm using publicly available echo database for simplified “visual estimation” of left ventricular ejection fraction. World J Exp Med 2022; 12:16-25. [PMID: 35433318 PMCID: PMC8968469 DOI: 10.5493/wjem.v12.i2.16] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 10/11/2021] [Revised: 12/14/2021] [Accepted: 03/07/2022] [Indexed: 02/06/2023] Open
Abstract
BACKGROUND Left ventricular ejection fraction calculation automation typically requires complex algorithms and is dependent on optimal visualization and tracing of endocardial borders. This significantly limits usability in bedside clinical applications, where ultrasound automation is needed most.
AIM To create a simple deep learning (DL) regression-type algorithm to visually estimate left ventricular (LV) ejection fraction (EF) from a public database of actual patient echo examinations and compare results to echocardiography laboratory EF calculations.
METHODS A simple DL architecture previously proven to perform well on ultrasound image analysis, VGG16, was utilized as the base architecture, running within a long short-term memory network for sequential image (video) analysis. After obtaining permission to use the Stanford EchoNet-Dynamic database, researchers randomly removed approximately 15% of the roughly 10,036 apical 4-chamber echo videos for later performance testing. All database echo examinations were read as part of comprehensive echocardiography studies and were coupled with EF, end-systolic and end-diastolic volumes, key frames, and coordinates for LV endocardial tracing in a CSV file. To better reflect point-of-care ultrasound (POCUS) clinical settings and time pressure, the algorithm was trained on echo video correlated with calculated ejection fraction, without incorporating the additional volume, measurement, and coordinate data. Seventy percent of the original data was used for algorithm training and 15% for validation during training. The previously separated 15% (1,263 echo videos) was used for algorithm performance testing after training completion. Given the inherent variability of echo EF measurement and field standards for evaluating algorithm accuracy, mean absolute error (MAE) and root mean square error (RMSE) were calculated for algorithm EF results against the echo lab-calculated EF. A Bland-Altman analysis was also performed. MAE for skilled echocardiographers has been established to range from 4% to 5%.
RESULTS The DL algorithm's visually estimated EF had a MAE of 8.08% (95%CI: 7.60 to 8.55), suggesting good performance compared to highly skilled humans. The RMSE was 11.98 and the correlation 0.348.
CONCLUSION This experimental simplified DL algorithm showed promise and proved reasonably accurate at visually estimating LV EF from short real-time echo video clips. Less burdensome than the complex DL approaches used for EF calculation, such an approach may be better suited to POCUS settings once improved upon by future research and development.
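The MAE, RMSE, and Bland-Altman quantities used to benchmark this and the other EF studies can be computed in a few lines. The helper below is a hypothetical sketch of those standard agreement statistics, not the study's code:

```python
import numpy as np

def agreement_stats(pred, ref):
    """Agreement between predicted and reference EF values (%).

    Returns MAE, RMSE, Bland-Altman bias, and 95% limits of agreement.
    """
    pred = np.asarray(pred, dtype=float)
    ref = np.asarray(ref, dtype=float)
    diff = pred - ref
    mae = np.mean(np.abs(diff))
    rmse = np.sqrt(np.mean(diff ** 2))
    bias = diff.mean()                 # Bland-Altman mean difference
    sd = diff.std(ddof=1)              # sample SD of the differences
    loa = (bias - 1.96 * sd, bias + 1.96 * sd)  # limits of agreement
    return mae, rmse, bias, loa
```

With a reference MAE of 4-5% for skilled echocardiographers, these statistics give a direct yardstick for figures such as the 8.08% MAE reported above.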
Affiliation(s)
- Michael Blaivas
- Department of Medicine, University of South Carolina School of Medicine, Roswell, GA 30076, United States
- Laura Blaivas
- Department of Environmental Science, Michigan State University, Roswell, Georgia 30076, United States

12
Automatic morphological classification of mitral valve diseases in echocardiographic images based on explainable deep learning methods. Int J Comput Assist Radiol Surg 2021; 17:413-425. [PMID: 34897594 DOI: 10.1007/s11548-021-02542-7] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/01/2021] [Accepted: 11/30/2021] [Indexed: 10/19/2022]
Abstract
PURPOSE Carpentier's functional classification is a guide to explaining the types of mitral valve regurgitation based on morphological features. There are four types of pathological morphologies, regardless of the presence or absence of mitral regurgitation: Type I, normal; Type II, mitral valve prolapse; Type IIIa, mitral valve stenosis; and Type IIIb, restricted mitral leaflet motion. The aim of this study was to automatically classify mitral valves using echocardiographic images. METHODS In our procedure, after classification of the apical 4-chamber (A4C) and parasternal long-axis (PLA) views, we extracted the systolic/diastolic phases of the cardiac cycle by calculating the left ventricular area. Six typical pre-trained models were fine-tuned with a 4-class model for the PLA view and a 3-class model for the A4C view. As an additional contribution, to provide explainability, we applied the Gradient-weighted Class Activation Mapping (Grad-CAM) algorithm to visualize the areas of echocardiographic images where the different models generated a prediction. RESULTS This approach conferred a proper understanding of where the various networks "look" in echocardiographic images to predict the four types of pathological mitral valve morphologies. Considering the accuracy metric and Grad-CAM maps, and by applying the Inception-ResNet-v2 architecture to classify Type II in the PLA view and the ResNeXt50 architecture to classify the other three classes in the A4C view, we achieved 80% model accuracy on the test data set. CONCLUSIONS We suggest an explainable, fully automated, rule-based procedure to classify the four types of mitral valve morphologies based on Carpentier's functional classification using deep learning on transthoracic echocardiographic images. Our results support the feasibility of using deep learning models to prepare quick and precise assessments of mitral valve morphologies in echocardiograms. To our knowledge, this is the first study to provide a public data set for the Carpentier classification of MV pathologies.
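Grad-CAM itself reduces to a ReLU-ed, gradient-weighted sum of the last convolutional layer's feature maps. A minimal NumPy sketch of that core step, assuming the activations and class-score gradients have already been extracted from the network (function and argument names are hypothetical, not the paper's code):

```python
import numpy as np

def grad_cam(activations, gradients):
    """Grad-CAM heatmap from one convolutional layer.

    activations, gradients -- arrays of shape (K, H, W): the K feature
    maps and the gradient of the target class score w.r.t. them.
    """
    # Channel weights: global-average-pool the gradients over space
    weights = gradients.mean(axis=(1, 2))
    # Weighted sum of feature maps over the channel axis -> (H, W)
    cam = np.tensordot(weights, activations, axes=1)
    cam = np.maximum(cam, 0)           # ReLU: keep only positive evidence
    if cam.max() > 0:
        cam = cam / cam.max()          # normalize to [0, 1] for overlay
    return cam
```

Upsampled to the input resolution and overlaid on the echo frame, such a map is what lets the authors show where each network "looks" when predicting a morphology class.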
13
de Siqueira VS, Borges MM, Furtado RG, Dourado CN, da Costa RM. Artificial intelligence applied to support medical decisions for the automatic analysis of echocardiogram images: A systematic review. Artif Intell Med 2021; 120:102165. [PMID: 34629153 DOI: 10.1016/j.artmed.2021.102165] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/20/2021] [Revised: 08/07/2021] [Accepted: 08/31/2021] [Indexed: 12/16/2022]
Abstract
The echocardiogram is a test that is widely used in heart disease diagnosis. However, its analysis is largely dependent on the physician's experience. In this regard, artificial intelligence has become an essential technology to assist physicians. This study is a Systematic Literature Review (SLR) of primary state-of-the-art studies that used Artificial Intelligence (AI) techniques to automate echocardiogram analyses. Searches on the leading scientific article indexing platforms using a search string returned approximately 1,400 articles. After applying the inclusion and exclusion criteria, 118 articles were selected to compose the detailed SLR. This SLR presents a thorough investigation of AI applied to support medical decisions for the main types of echocardiogram (transthoracic, transesophageal, Doppler, stress, and fetal). The articles' data extraction indicated that the primary research interest of the studies comprised four groups: 1) improvement of image quality; 2) identification of the cardiac window vision plane; 3) quantification and analysis of cardiac functions; and 4) detection and classification of cardiac diseases. The articles were categorized and grouped to show the main contributions of the literature to each type of echocardiogram. The results indicate that Deep Learning (DL) methods presented the best results for the detection and segmentation of the heart walls, right and left atria and ventricles, and classification of heart diseases using images/videos obtained by echocardiography. Models that used a Convolutional Neural Network (CNN) and its variations showed the best results for all groups. The evidence produced by the tabulated studies indicates that DL has contributed significantly to advances in automated echocardiogram analysis. Although several solutions have been presented for automated echocardiogram analysis, this area of research still has great potential for further studies to improve the accuracy of results already known in the literature.
Affiliation(s)
- Vilson Soares de Siqueira
- Federal Institute of Tocantins, Av. Bernado Sayão, S/N, Santa Maria, Colinas do Tocantins, TO, Brazil; Federal University of Goias, Alameda Palmeiras, Quadra D, Câmpus Samambaia, Goiânia, GO, Brazil.
- Moisés Marcos Borges
- Diagnostic Imaging Center - CDI, Av. Portugal, 1155, St. Marista, Goiânia, GO, Brazil
- Rogério Gomes Furtado
- Diagnostic Imaging Center - CDI, Av. Portugal, 1155, St. Marista, Goiânia, GO, Brazil
- Colandy Nunes Dourado
- Diagnostic Imaging Center - CDI, Av. Portugal, 1155, St. Marista, Goiânia, GO, Brazil. http://www.cdigoias.com.br
- Ronaldo Martins da Costa
- Federal University of Goias, Alameda Palmeiras, Quadra D, Câmpus Samambaia, Goiânia, GO, Brazil.

14
Dziri H, Cherni MA, Ben-Sellem D. New Hybrid Method for Left Ventricular Ejection Fraction Assessment from Radionuclide Ventriculography Images. Curr Med Imaging 2021; 17:623-633. [PMID: 33213328 DOI: 10.2174/1573405616666201118122509] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/13/2020] [Revised: 09/22/2020] [Accepted: 10/14/2020] [Indexed: 11/22/2022]
Abstract
BACKGROUND In this paper, we propose a new, efficient method of radionuclide ventriculography image segmentation to estimate the left ventricular ejection fraction. This parameter is an important prognostic factor for diagnosing abnormal cardiac function. METHODS The proposed method combines the Chan-Vese and mathematical morphology algorithms. It was applied to diastolic and systolic images obtained from the Nuclear Medicine Department of the Salah AZAIEZ Institute. To validate the proposed method, we compared the obtained results to those of two methods from the literature: the first based on mathematical morphology, the second using the basic Chan-Vese algorithm. To evaluate segmentation quality, we computed accuracy, positive predictive value, and area under the ROC curve. We also compared the left ventricular ejection fraction estimated by our method to the reference given by the gamma-camera software and validated by the expert, using Pearson's correlation coefficient, an ANOVA test, and linear regression. RESULTS Statistical results show that the proposed method is very efficient for detection of the left ventricle. The accuracy was 98.60%, higher than that of the other two methods (95.52% and 98.50%). Likewise, the positive predictive value was the highest (86.40% vs. 83.63% and 71.82%), and the area under the ROC curve was the largest (0.998 vs. 0.926 and 0.919). CONCLUSION Pearson's correlation coefficient was the highest (99% vs. 98% and 37%), and the correlation was significantly positive (p<0.001).
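For context, once the LV region has been segmented on the end-diastolic and end-systolic frames, count-based EF in equilibrium radionuclide ventriculography is conventionally computed from background-corrected counts. A hedged sketch of that standard formula (names are hypothetical; this is not the authors' pipeline):

```python
def count_based_lvef(ed_counts, es_counts, bg_counts):
    """Count-based LVEF (%) for radionuclide ventriculography.

    ed_counts, es_counts -- raw counts inside the segmented LV region of
    interest at end-diastole and end-systole.
    bg_counts -- background counts (scaled to the same ROI size).
    """
    net_ed = ed_counts - bg_counts     # background-corrected ED counts
    net_es = es_counts - bg_counts     # background-corrected ES counts
    return 100.0 * (net_ed - net_es) / net_ed
```

Because counts are proportional to blood volume in the ROI, this avoids the geometric assumptions of image-based volume formulas; the segmentation quality evaluated in the paper directly determines how reliable these ROI counts are.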
Affiliation(s)
- Halima Dziri
- Universite de Tunis El Manar, Laboratoire de recherche en Biophysique et Technologies Medicales (LRBTM), Tunis, Tunisia
- Dorra Ben-Sellem
- Universite de Tunis El Manar, Laboratoire de recherche en Biophysique et Technologies Medicales (LRBTM), Tunis, Tunisia

15
Ulloa Cerna AE, Jing L, Good CW, vanMaanen DP, Raghunath S, Suever JD, Nevius CD, Wehner GJ, Hartzel DN, Leader JB, Alsaid A, Patel AA, Kirchner HL, Pfeifer JM, Carry BJ, Pattichis MS, Haggerty CM, Fornwalt BK. Deep-learning-assisted analysis of echocardiographic videos improves predictions of all-cause mortality. Nat Biomed Eng 2021; 5:546-554. [PMID: 33558735 DOI: 10.1038/s41551-020-00667-9] [Citation(s) in RCA: 20] [Impact Index Per Article: 6.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/16/2020] [Accepted: 11/24/2020] [Indexed: 01/30/2023]
Abstract
Machine learning promises to assist physicians with predictions of mortality and of other future clinical events by learning complex patterns from historical data, such as longitudinal electronic health records. Here we show that a convolutional neural network trained on raw pixel data in 812,278 echocardiographic videos from 34,362 individuals provides superior predictions of one-year all-cause mortality. The model's predictions outperformed the widely used pooled cohort equations, the Seattle Heart Failure score (measured in an independent dataset of 2,404 patients with heart failure who underwent 3,384 echocardiograms), and a machine learning model involving 58 human-derived variables from echocardiograms and 100 clinical variables derived from electronic health records. We also show that cardiologists assisted by the model substantially improved the sensitivity of their predictions of one-year all-cause mortality by 13% while maintaining prediction specificity. Large unstructured datasets may enable deep learning to improve a wide range of clinical prediction models.
Affiliation(s)
- Alvaro E Ulloa Cerna
- Department of Translational Data Science and Informatics, Geisinger, Danville, PA, USA; Electrical and Computer Engineering Department, University of New Mexico, Albuquerque, NM, USA
- Linyuan Jing
- Department of Translational Data Science and Informatics, Geisinger, Danville, PA, USA
- David P vanMaanen
- Department of Translational Data Science and Informatics, Geisinger, Danville, PA, USA
- Sushravya Raghunath
- Department of Translational Data Science and Informatics, Geisinger, Danville, PA, USA
- Jonathan D Suever
- Department of Translational Data Science and Informatics, Geisinger, Danville, PA, USA
- Christopher D Nevius
- Department of Translational Data Science and Informatics, Geisinger, Danville, PA, USA
- Gregory J Wehner
- Department of Biomedical Engineering, University of Kentucky, Lexington, KY, USA
- Dustin N Hartzel
- Phenomic Analytics and Clinical Data Core, Geisinger, Danville, PA, USA
- Joseph B Leader
- Phenomic Analytics and Clinical Data Core, Geisinger, Danville, PA, USA
- Amro Alsaid
- Heart Institute, Geisinger, Danville, PA, USA
- H Lester Kirchner
- Department of Population Health Sciences, Geisinger, Danville, PA, USA
- John M Pfeifer
- Department of Translational Data Science and Informatics, Geisinger, Danville, PA, USA; Heart and Vascular Center, Evangelical Hospital, Lewisburg, PA, USA
- Marios S Pattichis
- Electrical and Computer Engineering Department, University of New Mexico, Albuquerque, NM, USA
- Christopher M Haggerty
- Department of Translational Data Science and Informatics, Geisinger, Danville, PA, USA; Heart Institute, Geisinger, Danville, PA, USA
- Brandon K Fornwalt
- Department of Translational Data Science and Informatics, Geisinger, Danville, PA, USA; Heart Institute, Geisinger, Danville, PA, USA; Department of Radiology, Geisinger, Danville, PA, USA.

16
Akkus Z, Aly YH, Attia IZ, Lopez-Jimenez F, Arruda-Olson AM, Pellikka PA, Pislaru SV, Kane GC, Friedman PA, Oh JK. Artificial Intelligence (AI)-Empowered Echocardiography Interpretation: A State-of-the-Art Review. J Clin Med 2021; 10:1391. [PMID: 33808513 PMCID: PMC8037652 DOI: 10.3390/jcm10071391] [Citation(s) in RCA: 22] [Impact Index Per Article: 7.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/15/2021] [Revised: 03/23/2021] [Accepted: 03/24/2021] [Indexed: 12/12/2022] Open
Abstract
Echocardiography (Echo), a widely available, noninvasive, and portable bedside imaging tool, is the most frequently used imaging modality for assessing cardiac anatomy and function in clinical practice. On the other hand, its operator dependence introduces variability in image acquisition, measurements, and interpretation. To reduce these variabilities, there is an increasing demand for an operator- and interpreter-independent Echo system empowered with artificial intelligence (AI), which has been incorporated into diverse areas of clinical medicine. Recent advances in AI applications in computer vision have enabled the identification of conceptual and complex imaging features through the self-learning ability of AI models and efficient parallel computing power. This has resulted in vast opportunities, such as AI models that are robust to variations and generalize for instantaneous image quality control, aid in acquiring optimal images and diagnosing complex diseases, and improve the clinical workflow of cardiac ultrasound. In this review, we provide a state-of-the-art overview of AI-empowered Echo applications in cardiology and future trends for AI-powered Echo technology that standardize measurements, aid physicians in diagnosing cardiac diseases, optimize Echo workflow in clinics, and ultimately reduce healthcare costs.
Affiliation(s)
- Zeynettin Akkus
- Department of Cardiovascular Medicine, Mayo Clinic, Rochester, MN 55905, USA; (Y.H.A.); (I.Z.A.); (F.L.-J.); (A.M.A.-O.); (P.A.P.); (S.V.P.); (G.C.K.); (P.A.F.); (J.K.O.)

17
Xiaoxue CMD, Shaoling YP, Qianqian HMD, Yin WP, Linyan FMD, Fengling WMD, Kun ZMD, Jing HMD. Automated Measurements of Left Ventricular Ejection Fraction and Volumes Using the EchoPAC System. Advanced Ultrasound in Diagnosis and Therapy 2021. [DOI: 10.37015/audt.2021.200072] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/06/2022] Open
18
Smistad E, Ostvik A, Salte IM, Melichova D, Nguyen TM, Haugaa K, Brunvand H, Edvardsen T, Leclerc S, Bernard O, Grenne B, Lovstakken L. Real-Time Automatic Ejection Fraction and Foreshortening Detection Using Deep Learning. IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control 2020; 67:2595-2604. [PMID: 32175861 DOI: 10.1109/tuffc.2020.2981037] [Citation(s) in RCA: 29] [Impact Index Per Article: 7.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
Volume and ejection fraction (EF) measurements of the left ventricle (LV) in 2-D echocardiography are associated with high uncertainty, not only due to interobserver variability of the manual measurement, but also due to ultrasound acquisition errors such as apical foreshortening. In this work, a real-time and fully automated EF measurement and foreshortening detection method is proposed. The method uses several deep learning components, such as view classification, cardiac cycle timing, segmentation, and landmark extraction, to measure the amount of foreshortening, LV volume, and EF. A data set of 500 patients from an outpatient clinic was used to train the deep neural networks, while a separate data set of 100 patients from another clinic was used for evaluation, where LV volume and EF were measured by an expert using clinical protocols and software. A quantitative analysis using 3-D ultrasound showed that EF is considerably affected by apical foreshortening, and that the proposed method can detect and quantify the amount of apical foreshortening. The bias and standard deviation of the automatic EF measurements were -3.6 ± 8.1%, while the mean absolute difference was 7.2%; these values are all within interobserver variability and comparable with related studies. The proposed real-time pipeline allows for a continuous acquisition and measurement workflow without user interaction, and has the potential to significantly reduce analysis time and measurement error due to foreshortening, while providing quantitative volume measurements in the everyday echo lab.
19
Yi J, Kang HK, Kwon JH, Kim KS, Park MH, Seong YK, Kim DW, Ahn B, Ha K, Lee J, Hah Z, Bang WC. Technology trends and applications of deep learning in ultrasonography: image quality enhancement, diagnostic support, and improving workflow efficiency. Ultrasonography 2020; 40:7-22. [PMID: 33152846 PMCID: PMC7758107 DOI: 10.14366/usg.20102] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/03/2020] [Accepted: 09/14/2020] [Indexed: 12/12/2022] Open
Abstract
In this review of the most recent applications of deep learning to ultrasound imaging, the architectures of deep learning networks are briefly explained for the medical imaging tasks of classification, detection, segmentation, and generation. Ultrasonography applications for image processing and diagnosis are then reviewed and summarized, along with representative imaging studies of the breast, thyroid, heart, kidney, liver, and fetal head. Efforts toward workflow enhancement are also reviewed, with an emphasis on view recognition, scanning guidance, image quality assessment, and quantification and measurement. Finally, some future prospects are presented regarding image quality enhancement, diagnostic support, and improvements in workflow efficiency, along with remarks on hurdles, benefits, and necessary collaborations.
Affiliation(s)
- Jonghyon Yi
- Ultrasound R&D Group, Health & Medical Equipment Business, Samsung Electronics Co., Ltd., Seongnam, Korea
- Ho Kyung Kang
- Ultrasound R&D Group, Health & Medical Equipment Business, Samsung Electronics Co., Ltd., Seongnam, Korea
- Jae-Hyun Kwon
- DR Imaging R&D Lab, Health & Medical Equipment Business, Samsung Electronics Co., Ltd., Seongnam, Korea
- Kang-Sik Kim
- Ultrasound R&D Group, Health & Medical Equipment Business, Samsung Electronics Co., Ltd., Seongnam, Korea
- Moon Ho Park
- Ultrasound R&D Group, Health & Medical Equipment Business, Samsung Electronics Co., Ltd., Seongnam, Korea
- Yeong Kyeong Seong
- Ultrasound R&D Group, Health & Medical Equipment Business, Samsung Electronics Co., Ltd., Seongnam, Korea
- Dong Woo Kim
- Product Strategy Group, Samsung Medison Co., Ltd., Seongnam, Korea
- Byungeun Ahn
- Product Strategy Group, Samsung Medison Co., Ltd., Seongnam, Korea
- Kilsu Ha
- Product Strategy Group, Samsung Medison Co., Ltd., Seongnam, Korea
- Jinyong Lee
- System R&D Group, Samsung Medison Co., Ltd., Seongnam, Korea
- Zaegyoo Hah
- System R&D Group, Samsung Medison Co., Ltd., Seongnam, Korea
- Won-Chul Bang
- Health & Medical Equipment Business, Samsung Electronics Co., Ltd., Seoul, Korea; Product Strategy Team, Samsung Medison Co., Ltd., Seoul, Korea

20
Arafati A, Morisawa D, Avendi MR, Amini MR, Assadi RA, Jafarkhani H, Kheradvar A. Generalizable fully automated multi-label segmentation of four-chamber view echocardiograms based on deep convolutional adversarial networks. J R Soc Interface 2020; 17:20200267. [PMID: 32811299 PMCID: PMC7482559 DOI: 10.1098/rsif.2020.0267] [Citation(s) in RCA: 12] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/19/2020] [Accepted: 07/27/2020] [Indexed: 11/12/2022] Open
Abstract
A major issue in translating artificial intelligence platforms for automatic segmentation of echocardiograms to the clinic is their generalizability. The present study introduces and verifies a novel, generalizable, and efficient fully automatic multi-label segmentation method for four-chamber view echocardiograms based on deep fully convolutional networks (FCNs) and adversarial training. For the first time, we used generative adversarial networks for pixel-classification training, a method not previously used for cardiac imaging, to overcome the generalization problem. The method's performance was validated against manual segmentations as the ground truth. Furthermore, to verify our method's generalizability in comparison with other existing techniques, we compared its performance with a state-of-the-art method on our dataset, in addition to an independent dataset of 450 patients from the CAMUS (cardiac acquisitions for multi-structure ultrasound segmentation) challenge. On our test dataset, automatic segmentation of all four chambers achieved Dice metrics of 92.1%, 86.3%, 89.6% and 91.4% for the LV, RV, LA and RA, respectively. Correlations between automatic and manual LV volumes were 0.94 and 0.93 for end-diastolic and end-systolic volume, respectively. Excellent agreement with the chambers' reference contours and significant improvement over previous FCN-based methods suggest that generative adversarial networks for pixel-classification training can effectively yield generalizable fully automatic FCN-based networks for four-chamber segmentation of echocardiograms, even with a limited amount of training data.
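The per-chamber Dice metrics quoted above are a simple overlap ratio between the predicted and manual label masks. A minimal NumPy sketch for a multi-label segmentation map (not the paper's code; the empty-mask convention is an assumption):

```python
import numpy as np

def dice(pred, target, label):
    """Dice coefficient for one label of a multi-label segmentation map.

    pred, target -- integer label maps of the same shape (e.g. 0=background,
    1=LV, 2=RV, ...).
    """
    p = (pred == label)
    t = (target == label)
    denom = p.sum() + t.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(p, t).sum() / denom
```

Computing `dice` once per chamber label and averaging over the test set reproduces the kind of per-chamber figures reported (92.1% LV, 86.3% RV, etc.).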
Affiliation(s)
- Arghavan Arafati
- The Edwards Lifesciences Center for Advanced Cardiovascular Technology, University of California, 2410 Engineering Hall, Irvine, CA 92697-2730, USA
- Daisuke Morisawa
- The Edwards Lifesciences Center for Advanced Cardiovascular Technology, University of California, 2410 Engineering Hall, Irvine, CA 92697-2730, USA
- Michael R. Avendi
- The Edwards Lifesciences Center for Advanced Cardiovascular Technology, University of California, 2410 Engineering Hall, Irvine, CA 92697-2730, USA; Center for Pervasive Communications and Computing, University of California, 4217 Engineering Hall, Irvine, CA 92697-2700, USA
- M. Reza Amini
- Loma Linda University Medical Center, Loma Linda, CA 92354, USA
- Ramin A. Assadi
- Division of Cardiology, David Geffen School of Medicine at UCLA, Los Angeles, CA 90095, USA
- Hamid Jafarkhani
- Center for Pervasive Communications and Computing, University of California, 4217 Engineering Hall, Irvine, CA 92697-2700, USA
- Arash Kheradvar
- The Edwards Lifesciences Center for Advanced Cardiovascular Technology, University of California, 2410 Engineering Hall, Irvine, CA 92697-2730, USA

21
Ge R, Yang G, Chen Y, Luo L, Feng C, Ma H, Ren J, Li S. K-Net: Integrate Left Ventricle Segmentation and Direct Quantification of Paired Echo Sequence. IEEE Transactions on Medical Imaging 2020; 39:1690-1702. [PMID: 31765307 DOI: 10.1109/tmi.2019.2955436] [Citation(s) in RCA: 36] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
The integration of segmentation and direct quantification of the left ventricle (LV) from the paired apical views (i.e., apical 4-chamber and 2-chamber together) of an echo sequence clinically achieves comprehensive cardiac assessment: multi-view segmentation for anatomical morphology, and multidimensional quantification for contractile function. Direct quantification of the LV, i.e., automatically quantifying multiple LV indices directly from the image via task-aware feature representation and regression, avoids accumulative error from inter-step targets. This integration sequentially makes a stereoscopic reflection of cardiac activity jointly from the paired orthogonal cross-view sequences, overcoming the limited observation of a single plane. We propose a K-shaped Unified Network (K-Net), the first end-to-end framework to simultaneously segment the LV from apical 4-chamber and 2-chamber views, and directly quantify the LV in terms of major- and minor-axis dimensions (1D), area (2D), and volume (3D), in sequence.
It works via four components: 1) the K-Net architecture with the Attention Junction enables heterogeneous tasks learning of segmentation task of pixel-wise classification, and direct quantification task of image-wise regression, by interactively introducing the information from segmentation to jointly promote spatial attention map to guide quantification focusing on LV-related region, and transferring quantification feedback to make global constraint on segmentation; 2) the Bi-ResLSTMs distributed in K-Net layer-by-layer hierarchically extract spatial-temporal information in echo sequence, with bidirectional recurrent and short-cut connection to model spatial-temporal information among all frames; 3) the Information Valve tailing the Bi-ResLSTMs selectively exchanges information among multiple views, by stimulating complementary information and suppressing redundant information to make the efficient cross-flow for each view; 4) the Evolution Loss comprehensively guides sequential data learning, with static constraint for frame values, and dynamic constraint for inter-frame value changes. The experiments show that our K-Net gains high performance with a Dice coefficient up to 91.44% and a mean absolute error of the major-axis dimension down to 2.74mm, which reveal its clinical potential.
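The Attention Junction idea in component 1, a segmentation-derived spatial attention map gating the quantification features, can be sketched as follows. This is a minimal NumPy illustration of the gating mechanism only, not the authors' implementation; the function names are hypothetical.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def attention_junction(seg_logits, quant_features):
    # Spatial attention derived from the segmentation branch gates the
    # quantification features, so regression focuses on the LV region.
    attn = sigmoid(seg_logits)                # (H, W), values in [0, 1]
    return quant_features * attn[None, :, :]  # broadcast over channels

# Toy example: a 4x4 grid whose central 2x2 block stands in for the LV.
seg_logits = np.full((4, 4), -10.0)  # background -> attention ~ 0
seg_logits[1:3, 1:3] = 10.0          # LV region  -> attention ~ 1
feats = np.ones((2, 4, 4))           # 2-channel quantification features
gated = attention_junction(seg_logits, feats)
```

After gating, features inside the LV region pass through nearly unchanged while background features are suppressed toward zero, which is the intended "focus on the LV-related region" behavior.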
22
Jafari MH, Girgis H, Van Woudenberg N, Moulson N, Luong C, Fung A, Balthazaar S, Jue J, Tsang M, Nair P, Gin K, Rohling R, Abolmaesumi P, Tsang T. Cardiac point-of-care to cart-based ultrasound translation using constrained CycleGAN. Int J Comput Assist Radiol Surg 2020; 15:877-886. [PMID: 32314226 DOI: 10.1007/s11548-020-02141-y] [Citation(s) in RCA: 13] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/18/2019] [Accepted: 03/25/2020] [Indexed: 12/17/2022]
Abstract
PURPOSE The market for handheld cardiac ultrasound (US) devices is growing rapidly. Despite their accessibility and lower cost, a gap in image quality remains between echocardiography (echo) data captured by point-of-care ultrasound (POCUS) and conventional cart-based US, which limits wider adoption of POCUS. In this work, we present a machine learning solution based on recent advances in adversarial training to investigate the feasibility of translating POCUS echo images to the quality level of high-end cart-based US systems. METHODS We propose a constrained cycle-consistent generative adversarial architecture for unpaired translation of cardiac POCUS to cart-based US data. We impose a structured shape-wise regularization via a critic segmentation network to preserve the underlying shape of the heart during quality translation. The proposed deep transfer model is constrained to the anatomy of the left ventricle (LV) in apical two-chamber (AP2) echo views. RESULTS A total of 1089 echo studies from 841 patients are used in this study. The AP2 frames are captured by POCUS (Philips Lumify and Clarius) and cart-based (Philips iE33 and Vivid E9) US machines. The quality-translation dataset comprises 441 echo studies from 395 patients; data from both POCUS and cart-based systems of the same patient were available in 122 cases. The deep quality-transfer model is integrated into a pipeline for an automated cardiac evaluation task, namely segmentation of the LV in AP2 views. By transferring the low-quality POCUS data toward cart-based US quality, significant average improvements of 30% in the LV segmentation Dice score and 34 mm in the Hausdorff distance are obtained. CONCLUSION This paper demonstrates the feasibility of a machine learning solution to transform the image quality of POCUS data to that of high-end cart-based systems.
The experiments show that by leveraging the quality translation through the proposed constrained adversarial training, the accuracy of automatic segmentation with POCUS data can be improved.
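For reference, the two segmentation metrics reported above, the Dice score and the Hausdorff distance, can be computed from binary masks as in this minimal NumPy sketch. This is not the authors' evaluation code; distances here are in pixel units and would be multiplied by the pixel spacing to obtain millimeters.

```python
import numpy as np

def dice_score(a, b):
    # Dice overlap between two binary masks: 2|A∩B| / (|A| + |B|).
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def hausdorff_distance(a, b):
    # Symmetric Hausdorff distance between the foreground pixel sets:
    # the largest nearest-neighbor distance in either direction.
    pa, pb = np.argwhere(a), np.argwhere(b)
    d = np.linalg.norm(pa[:, None, :] - pb[None, :, :], axis=-1)
    return max(d.min(axis=1).max(), d.min(axis=0).max())

# Toy masks: a square "LV" and a prediction shifted right by one pixel.
gt = np.zeros((8, 8), dtype=bool)
gt[2:6, 2:6] = True
pred = np.zeros((8, 8), dtype=bool)
pred[2:6, 3:7] = True
```

On these toy masks the one-pixel shift yields a Dice score of 0.75 and a Hausdorff distance of 1 pixel, illustrating why Dice rewards overlap while Hausdorff penalizes the worst boundary error.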
Affiliation(s)
- Hany Girgis
- The University of British Columbia, Vancouver, Canada
- Vancouver General Hospital, Vancouver, Canada
- Nathaniel Moulson
- The University of British Columbia, Vancouver, Canada
- Vancouver General Hospital, Vancouver, Canada
- Christina Luong
- The University of British Columbia, Vancouver, Canada
- Vancouver General Hospital, Vancouver, Canada
- Andrea Fung
- The University of British Columbia, Vancouver, Canada
- Vancouver General Hospital, Vancouver, Canada
- Shane Balthazaar
- The University of British Columbia, Vancouver, Canada
- Vancouver General Hospital, Vancouver, Canada
- John Jue
- The University of British Columbia, Vancouver, Canada
- Vancouver General Hospital, Vancouver, Canada
- Micheal Tsang
- The University of British Columbia, Vancouver, Canada
- Vancouver General Hospital, Vancouver, Canada
- Parvathy Nair
- The University of British Columbia, Vancouver, Canada
- Vancouver General Hospital, Vancouver, Canada
- Ken Gin
- The University of British Columbia, Vancouver, Canada
- Vancouver General Hospital, Vancouver, Canada
- Teresa Tsang
- The University of British Columbia, Vancouver, Canada
- Vancouver General Hospital, Vancouver, Canada
23
Chen C, Qin C, Qiu H, Tarroni G, Duan J, Bai W, Rueckert D. Deep Learning for Cardiac Image Segmentation: A Review. Front Cardiovasc Med 2020; 7:25. [PMID: 32195270 PMCID: PMC7066212 DOI: 10.3389/fcvm.2020.00025] [Citation(s) in RCA: 303] [Impact Index Per Article: 75.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/30/2019] [Accepted: 02/17/2020] [Indexed: 12/15/2022] Open
Abstract
Deep learning has become the most widely used approach for cardiac image segmentation in recent years. In this paper, we provide a review of over 100 deep learning-based cardiac image segmentation papers, covering common imaging modalities, including magnetic resonance imaging (MRI), computed tomography (CT), and ultrasound, and the major anatomical structures of interest (ventricles, atria, and vessels). In addition, a summary of publicly available cardiac image datasets and code repositories is included to provide a basis for reproducible research. Finally, we discuss the challenges and limitations of current deep learning-based approaches (scarcity of labels, model generalizability across domains, interpretability) and suggest potential directions for future research.
Affiliation(s)
- Chen Chen
- Biomedical Image Analysis Group, Department of Computing, Imperial College London, London, United Kingdom
- Chen Qin
- Biomedical Image Analysis Group, Department of Computing, Imperial College London, London, United Kingdom
- Huaqi Qiu
- Biomedical Image Analysis Group, Department of Computing, Imperial College London, London, United Kingdom
- Giacomo Tarroni
- Biomedical Image Analysis Group, Department of Computing, Imperial College London, London, United Kingdom
- CitAI Research Centre, Department of Computer Science, City University of London, London, United Kingdom
- Jinming Duan
- School of Computer Science, University of Birmingham, Birmingham, United Kingdom
- Wenjia Bai
- Data Science Institute, Imperial College London, London, United Kingdom
- Department of Brain Sciences, Faculty of Medicine, Imperial College London, London, United Kingdom
- Daniel Rueckert
- Biomedical Image Analysis Group, Department of Computing, Imperial College London, London, United Kingdom