1. Feng Y, Yang J, Li M, Tang L, Sun S, Wang Y. A Bayesian network for simultaneous keyframe and landmark detection in ultrasonic cine. Med Image Anal 2024;97:103228. PMID: 38850623. DOI: 10.1016/j.media.2024.103228.
Abstract
Accurate landmark detection in medical imaging is essential for quantifying various anatomical structures and assisting in diagnosis and treatment planning. In ultrasound cine, landmark detection is often associated with identifying keyframes, which represent the occurrence of specific events, such as measuring target dimensions at specific temporal phases. Existing methods predominantly treat landmark and keyframe detection as separate tasks without harnessing their underlying correlations. Additionally, owing to the intrinsic characteristics of ultrasound imaging, both tasks are constrained by inter-observer variability, leading to potentially higher levels of uncertainty. In this paper, we propose a Bayesian network to achieve simultaneous keyframe and landmark detection in ultrasonic cine, especially under highly sparse training data conditions. We follow a coarse-to-fine landmark detection architecture and propose an adaptive Bayesian hypergraph for coordinate refinement on the results of heatmap-based regression. In addition, we propose Order Loss for training bi-directional Gated Recurrent Unit to identify keyframes based on the relative likelihoods within the sequence. Furthermore, to exploit the underlying correlation between the two tasks, we use a shared encoder to extract features for both tasks and enhance the detection accuracy through the interaction of temporal and motion information. Experiments on two in-house datasets (multi-view transesophageal and transthoracic echocardiography) and one public dataset (transthoracic echocardiography) demonstrate that our method outperforms state-of-the-art approaches. The mean absolute errors for dimension measurements of the left atrial appendage, aortic annulus, and left ventricle are 2.40 mm, 0.83 mm, and 1.63 mm, respectively. The source code is available at github.com/warmestwind/ABHG.
Affiliation(s)
- Yong Feng
- Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, China; School of Computer Science and Engineering, Northeastern University, Shenyang, China
- Jinzhu Yang
- Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, China; School of Computer Science and Engineering, Northeastern University, Shenyang, China; National Frontiers Science Center for Industrial Intelligence and Systems Optimization, Shenyang, China
- Meng Li
- Department of Cardiovascular Ultrasound, The First Hospital of China Medical University, Shenyang, China
- Lingzhi Tang
- Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, China; School of Computer Science and Engineering, Northeastern University, Shenyang, China
- Song Sun
- Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, China; School of Computer Science and Engineering, Northeastern University, Shenyang, China
- Yonghuai Wang
- Department of Cardiovascular Ultrasound, The First Hospital of China Medical University, Shenyang, China
2. Pu B, Li K, Chen J, Lu Y, Zeng Q, Yang J, Li S. HFSCCD: A Hybrid Neural Network for Fetal Standard Cardiac Cycle Detection in Ultrasound Videos. IEEE J Biomed Health Inform 2024;28:2943-2954. PMID: 38412077. DOI: 10.1109/jbhi.2024.3370507.
Abstract
In the fetal cardiac ultrasound examination, standard cardiac cycle (SCC) recognition is the essential foundation for diagnosing congenital heart disease. Previous studies have mostly focused on detecting adult cardiac cycles, which may not be applicable to the fetus. In clinical practice, localizing SCCs requires accurately recognizing end-systole (ES) and end-diastole (ED) frames while ensuring that every frame in the cycle is a standard view. Most existing methods are not based on detecting key anatomical structures, so they may fail to reject irrelevant views and background frames, produce results containing non-standard frames, or even break down entirely in clinical practice. We propose an end-to-end hybrid neural network based on an object detector to detect SCCs from fetal ultrasound videos efficiently. It consists of 3 modules: Anatomical Structure Detection (ASD), Cardiac Cycle Localization (CCL), and Standard Plane Recognition (SPR). Specifically, ASD uses an object detector to identify 9 key anatomical structures, 3 cardiac motion phases, and the corresponding confidence scores from fetal ultrasound videos. On this basis, we propose a joint probability method in the CCL module to learn the cardiac motion cycle from the 3 cardiac motion phases. In SPR, to reduce the impact of structure detection errors on the accuracy of standard plane recognition, we use the XGBoost algorithm to learn relational knowledge among the detected anatomical structures. We evaluate our method on test fetal ultrasound video datasets and clinical examination cases and achieve remarkable results. This study may pave the way for clinical practice.
3. Alajrami E, Ng T, Jevsikov J, Naidoo P, Fernandes P, Azarmehr N, Dinmohammadi F, Shun-Shin MJ, Dadashi Serej N, Francis DP, Zolgharni M. Active learning for left ventricle segmentation in echocardiography. Comput Methods Programs Biomed 2024;248:108111. PMID: 38479147. DOI: 10.1016/j.cmpb.2024.108111.
Abstract
BACKGROUND AND OBJECTIVE Training deep learning models for medical image segmentation requires large annotated datasets, which are expensive and time-consuming to create. Active learning is a promising approach to reduce this burden by strategically selecting the most informative samples for annotation. This study investigates the use of active learning for efficient left ventricle segmentation in echocardiography with sparse expert annotations. METHODS We adapt and evaluate various sampling techniques, demonstrating their effectiveness in judiciously selecting samples for annotation. Additionally, we introduce a novel strategy, Optimised Representativeness Sampling, which combines feature-based outliers with the most representative samples to enhance annotation efficiency. RESULTS Our findings demonstrate a substantial reduction in annotation costs, achieving 99% of the upper-bound performance while utilising only 20% of the labelled data. This equates to 1680 fewer images needing annotation within our dataset. When applied to a publicly available dataset, our approach yielded a 70% reduction in required annotation effort, a significant advancement over baseline active learning strategies, which achieved only a 50% reduction. Our experiments highlight the nuanced performance of diverse sampling strategies across datasets within the same domain. CONCLUSIONS The study provides a cost-effective approach to tackling the challenge of limited expert annotations in echocardiography. By introducing a distinct dataset, made publicly available for research purposes, our work contributes to the field's understanding of efficient annotation strategies in medical image segmentation.
Affiliation(s)
- Eman Alajrami
- Intelligent Sensing and Vision, University of West London, London, UK.
- Tiffany Ng
- National Heart and Lung Institute, Imperial College London, London, UK
- Jevgeni Jevsikov
- Intelligent Sensing and Vision, University of West London, London, UK; National Heart and Lung Institute, Imperial College London, London, UK
- Preshen Naidoo
- Intelligent Sensing and Vision, University of West London, London, UK
- Neda Azarmehr
- Intelligent Sensing and Vision, University of West London, London, UK
- Darrel P Francis
- National Heart and Lung Institute, Imperial College London, London, UK
- Massoud Zolgharni
- Intelligent Sensing and Vision, University of West London, London, UK; National Heart and Lung Institute, Imperial College London, London, UK
4. Sanjeevi G, Gopalakrishnan U, Parthinarupothi RK, Madathil T. Deep learning supported echocardiogram analysis: A comprehensive review. Artif Intell Med 2024;151:102866. PMID: 38593684. DOI: 10.1016/j.artmed.2024.102866.
Abstract
An echocardiogram is a sophisticated ultrasound imaging technique employed to diagnose heart conditions. The transthoracic echocardiogram, one of the most prevalent types, is instrumental in evaluating significant cardiac diseases. However, interpreting its results heavily relies on the clinician's expertise. In this context, artificial intelligence has emerged as a vital tool for helping clinicians. This study critically analyzes key state-of-the-art research that uses deep learning techniques to automate transthoracic echocardiogram analysis and support clinical judgments. We have systematically organized and categorized articles that proffer solutions for view classification, enhancement of image quality and dataset, segmentation and identification of cardiac structures, detection of cardiac function abnormalities, and quantification of cardiac functions. We compared the performance of various deep learning approaches within each category, identifying the most promising methods. Additionally, we highlight limitations in current research and explore promising avenues for future exploration. These include addressing generalizability issues, incorporating novel AI approaches, and tackling the analysis of rare cardiac diseases.
Affiliation(s)
- Sanjeevi G
- Center for Wireless Networks & Applications (WNA), Amrita Vishwa Vidyapeetham, Amritapuri, India
- Uma Gopalakrishnan
- Center for Wireless Networks & Applications (WNA), Amrita Vishwa Vidyapeetham, Amritapuri, India
- Thushara Madathil
- Department of Cardiac Anesthesiology, Amrita Institute of Medical Sciences and Research Center, Kochi, India
5. Fermann BS, Nyberg J, Remme EW, Grue JF, Grue H, Haland R, Lovstakken L, Dalen H, Grenne B, Aase SA, Snare SR, Ostvik A. Cardiac Valve Event Timing in Echocardiography Using Deep Learning and Triplane Recordings. IEEE J Biomed Health Inform 2024;28:2759-2768. PMID: 38442058. DOI: 10.1109/jbhi.2024.3373124.
Abstract
Cardiac valve event timing plays a crucial role when conducting clinical measurements using echocardiography. However, established automated approaches are limited by the need for external electrocardiogram sensors, and manual measurements often rely on timing from different cardiac cycles. Recent methods have applied deep learning to cardiac timing, but they have mainly been restricted to detecting only two key time points, namely end-diastole (ED) and end-systole (ES). In this work, we propose a deep learning approach that leverages triplane recordings to enhance detection of valve events in echocardiography. Our method demonstrates improved performance in detecting six different events, including the valve events conventionally associated with ED and ES. Across all events, the average absolute frame difference (aFD) ranges from at most 1.4 frames (29 ms) for start of diastasis down to 0.6 frames (12 ms) for mitral valve opening, in a ten-fold cross-validation with test splits on triplane data from 240 patients. On an external independent test set consisting of apical long-axis data from 180 other patients, the worst-performing event detection had an aFD of 1.8 frames (30 ms). The proposed approach has the potential to significantly impact clinical practice by enabling more accurate, rapid, and comprehensive event detection, leading to improved clinical measurements.
6. Chernyshov A, Grue JF, Nyberg J, Grenne B, Dalen H, Aase SA, Østvik A, Lovstakken L. Automated Segmentation and Quantification of the Right Ventricle in 2-D Echocardiography. Ultrasound Med Biol 2024;50:540-548. PMID: 38290912. DOI: 10.1016/j.ultrasmedbio.2023.12.018.
Abstract
OBJECTIVE The right ventricle receives less attention than its left counterpart in echocardiography research, practice, and the development of automated solutions. In the work described here, we sought to determine whether deep learning methods for automated segmentation of the left ventricle in 2-D echocardiograms are also valid for the right ventricle. Additionally, we describe and explore a keypoint detection approach to segmentation that guards against the erratic behavior often displayed by segmentation models. METHODS We used a dataset of echo images focused on the right ventricle from 250 participants to train and evaluate several deep learning models for segmentation and keypoint detection. We propose a compact architecture (U-Net KP) employing the latter approach, designed to balance high speed with accuracy and robustness. RESULTS All featured models achieved segmentation accuracy close to the inter-observer variability. When computing the metrics of right ventricular systolic function from contour predictions of U-Net KP, we obtained a bias and 95% limits of agreement of 0.8 ± 10.8% for the right ventricular fractional area change measurements, -0.04 ± 0.54 cm for the tricuspid annular plane systolic excursion measurements, and 0.2 ± 6.6% for the right ventricular free wall strain measurements. These results were comparable to the semi-automatically derived inter-observer discrepancies of 0.4 ± 11.8%, -0.37 ± 0.58 cm, and -1.0 ± 7.7% for the aforementioned metrics, respectively. CONCLUSION Given the appropriate data, automated segmentation and quantification of the right ventricle in 2-D echocardiography are feasible with existing methods. However, keypoint detection architectures may offer higher robustness and information density for the same computational cost.
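The bias and 95% limits of agreement quoted above follow the standard Bland-Altman construction: the mean of the paired differences, bracketed by ±1.96 sample standard deviations of those differences. A minimal NumPy sketch (not the authors' code; the paired measurement values below are hypothetical):

```python
import numpy as np

def bland_altman(a, b):
    """Return (bias, lower LoA, upper LoA) for paired measurements a and b.

    Bias is the mean of the differences; the 95% limits of agreement (LoA)
    are bias +/- 1.96 * sample standard deviation of the differences.
    """
    diff = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    bias = diff.mean()
    sd = diff.std(ddof=1)  # ddof=1 -> sample standard deviation
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical paired fractional-area-change measurements (%):
auto = np.array([38.0, 42.5, 35.1, 40.2, 44.0])
manual = np.array([37.2, 43.0, 34.0, 41.0, 43.1])
bias, lo, hi = bland_altman(auto, manual)
```

A result would typically be reported as "bias ± 1.96 SD", matching the "0.8 ± 10.8%" style used in the abstract.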
Affiliation(s)
- Artem Chernyshov
- Department of Circulation and Medical Imaging, Faculty of Medicine and Health Sciences, Norwegian University of Science and Technology, Trondheim, Norway.
- Jahn Frederik Grue
- Department of Circulation and Medical Imaging, Faculty of Medicine and Health Sciences, Norwegian University of Science and Technology, Trondheim, Norway
- John Nyberg
- Department of Circulation and Medical Imaging, Faculty of Medicine and Health Sciences, Norwegian University of Science and Technology, Trondheim, Norway
- Bjørnar Grenne
- Department of Circulation and Medical Imaging, Faculty of Medicine and Health Sciences, Norwegian University of Science and Technology, Trondheim, Norway; Clinic of Cardiology, St. Olav's Hospital, Trondheim, Norway
- Håvard Dalen
- Department of Circulation and Medical Imaging, Faculty of Medicine and Health Sciences, Norwegian University of Science and Technology, Trondheim, Norway; Clinic of Cardiology, St. Olav's Hospital, Trondheim, Norway
- Andreas Østvik
- Department of Circulation and Medical Imaging, Faculty of Medicine and Health Sciences, Norwegian University of Science and Technology, Trondheim, Norway; Department of Health Research, SINTEF Digital, Trondheim, Norway
- Lasse Lovstakken
- Department of Circulation and Medical Imaging, Faculty of Medicine and Health Sciences, Norwegian University of Science and Technology, Trondheim, Norway
7. Li Y, Li H, Wu F, Luo J. Semi-supervised learning improves the performance of cardiac event detection in echocardiography. Ultrasonics 2023;134:107058. PMID: 37295222. DOI: 10.1016/j.ultras.2023.107058.
Abstract
Detection of end-diastole (ED) and end-systole (ES) frames in echocardiography videos is a critical step in the assessment of cardiac function. A recently released large public dataset, EchoNet-Dynamic, can be used as a benchmark for cardiac event detection. However, only one pair of ED and ES frames is annotated in each echocardiography video, and the annotated ED precedes the ES in most cases. This means that only a few frames during systole in each video are usable for training, which makes it challenging to train a cardiac event detection model on the dataset. Semi-supervised learning (SSL) can alleviate these problems. An architecture combining a convolutional neural network (CNN), a recurrent neural network (RNN), and fully connected layers (FC) is adopted. Experimental results indicate that SSL brings at least three benefits: a faster convergence rate, improved performance, and more reasonable volume curves. The best mean absolute errors (MAEs) for ED and ES detection are 40.2 ms (2.1 frames) and 32.6 ms (1.7 frames), respectively. In addition, the results show that models trained on the apical four-chamber (A4C) view work well on other standard views, such as other apical views and parasternal short-axis (PSAX) views.
Affiliation(s)
- Yongshuai Li
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, China
- He Li
- Research Institute, VINNO Technology Co., Ltd., Suzhou, Jiangsu, China
- Fanggang Wu
- Research Institute, VINNO Technology Co., Ltd., Suzhou, Jiangsu, China
- Jianwen Luo
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, China
8. Farhad M, Masud MM, Beg A, Ahmad A, Ahmed L, Memon S. Cardiac phase detection in echocardiography using convolutional neural networks. Sci Rep 2023;13:8908. PMID: 37264094. DOI: 10.1038/s41598-023-36047-x.
Abstract
Echocardiography is a commonly used and cost-effective test for assessing heart conditions. During the test, cardiologists and technicians observe two cardiac phases, end-systolic (ES) and end-diastolic (ED), which are critical for calculating heart chamber size and ejection fraction. However, non-essential frames, called Non-ESED frames, may appear between these phases. Currently, technicians or cardiologists detect the phases manually, which is time-consuming and prone to errors. To address this, an automated and efficient technique is needed to accurately detect cardiac phases and minimize diagnostic errors. In this paper, we propose a deep learning model called DeepPhase to assist cardiology personnel. Our convolutional neural network (CNN) learns from echocardiography images to identify the ES, ED, and Non-ESED phases without requiring left ventricle segmentation or electrocardiograms. We evaluate our model on three echocardiography image datasets: the CAMUS dataset, the EchoNet-Dynamic dataset, and a new dataset we collected from a cardiac hospital (CardiacPhase). Our model outperforms existing techniques, achieving 0.96 and 0.82 area under the curve (AUC) on the CAMUS and CardiacPhase datasets, respectively. We also propose a novel cropping technique that enhances the model's performance and ensures its relevance to real-world scenarios for ES, ED, and Non-ESED classification.
Affiliation(s)
- Moomal Farhad
- College of Information Technology, United Arab Emirates University, Al Ain, P.O. Box 15551, United Arab Emirates
- Mohammad Mehedy Masud
- College of Information Technology, United Arab Emirates University, Al Ain, P.O. Box 15551, United Arab Emirates
- Azam Beg
- College of Information Technology, United Arab Emirates University, Al Ain, P.O. Box 15551, United Arab Emirates
- Amir Ahmad
- College of Information Technology, United Arab Emirates University, Al Ain, P.O. Box 15551, United Arab Emirates
- Luai Ahmed
- Institute of Public Health, College of Medicine and Health Sciences, United Arab Emirates University, Al Ain, United Arab Emirates
9. Lane ES, Jevsikov J, Shun-Shin MJ, Dhutia N, Matoorian N, Cole GD, Francis DP, Zolgharni M. Automated multi-beat tissue Doppler echocardiography analysis using deep neural networks. Med Biol Eng Comput 2023;61:911-926. PMID: 36631666. DOI: 10.1007/s11517-022-02753-3.
Abstract
Tissue Doppler imaging is an essential echocardiographic technique for the non-invasive assessment of myocardial blood velocity. Image acquisition and interpretation are performed by trained operators who visually localise landmarks representing Doppler peak velocities. Current clinical guidelines recommend averaging measurements over several heartbeats. However, this manual process is both time-consuming and disruptive to workflow, so an automated system for accurate beat isolation and landmark identification would be highly desirable. A dataset of tissue Doppler images was annotated by three expert cardiologists, providing a gold standard and allowing for observer-variability comparisons. Deep neural networks were trained for fully automated predictions on multiple heartbeats and tested on tissue Doppler strips of arbitrary length. Automated measurements of peak Doppler velocities show good Bland-Altman agreement with consensus expert values (average standard deviation of 0.40 cm/s), below the inter-observer variability (0.65 cm/s) and on par with individual experts (standard deviations of 0.40 to 0.75 cm/s). Our approach allows more than 26 times as many heartbeats to be analysed compared to a manual approach. The proposed automated models can accurately and reliably make measurements on tissue Doppler images spanning several heartbeats, with performance indistinguishable from that of human experts but with significantly shorter processing time. HIGHLIGHTS: • A novel approach successfully identifies heartbeats from tissue Doppler images • Accurately measures peak velocities on several heartbeats • The framework is fast and can make predictions on arbitrary-length images • The patient dataset and models are made public for future benchmark studies.
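As a generic illustration of the multi-beat averaging the guidelines recommend (this is not the authors' pipeline; the trace, beat boundaries, and function name are all hypothetical), the peak velocity can be taken per isolated heartbeat and averaged:

```python
import numpy as np

def mean_peak_velocity(trace, beat_bounds):
    """Average the peak velocity over isolated heartbeats.

    trace: 1-D array of velocity samples (cm/s);
    beat_bounds: list of (start, end) index pairs, one per detected beat.
    """
    peaks = [np.max(trace[s:e]) for s, e in beat_bounds]
    return float(np.mean(peaks))

# Hypothetical three-beat trace: each beat is a half-sine ramp to a distinct peak.
trace = np.concatenate(
    [v * np.sin(np.linspace(0.0, np.pi, 50)) for v in (10.0, 11.0, 10.4)]
)
bounds = [(0, 50), (50, 100), (100, 150)]
velocity = mean_peak_velocity(trace, bounds)  # close to the mean of the three peaks
```

In practice the beat boundaries would come from the beat-isolation model rather than being given by hand.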
Affiliation(s)
- Elisabeth S Lane
- School of Computing and Engineering, University of West London, St Mary's Rd, Ealing, London, W5 5RF, UK.
- Jevgeni Jevsikov
- School of Computing and Engineering, University of West London, St Mary's Rd, Ealing, London, W5 5RF, UK
- Niti Dhutia
- New York University Abu Dhabi, Saadiyat Island, Abu Dhabi, United Arab Emirates
- Nasser Matoorian
- School of Computing and Engineering, University of West London, St Mary's Rd, Ealing, London, W5 5RF, UK
- Graham D Cole
- National Heart and Lung Institute, Imperial College, London, UK
- Massoud Zolgharni
- School of Computing and Engineering, University of West London, St Mary's Rd, Ealing, London, W5 5RF, UK; National Heart and Lung Institute, Imperial College, London, UK
10. Govil S, Crabb BT, Deng Y, Dal Toso L, Puyol-Antón E, Pushparajah K, Hegde S, Perry JC, Omens JH, Hsiao A, Young AA, McCulloch AD. A deep learning approach for fully automated cardiac shape modeling in tetralogy of Fallot. J Cardiovasc Magn Reson 2023;25:15. PMID: 36849960. PMCID: PMC9969707. DOI: 10.1186/s12968-023-00924-1.
Abstract
BACKGROUND Cardiac shape modeling is a useful computational tool that has provided quantitative insights into the mechanisms underlying dysfunction in heart disease. The manual input and time required to make cardiac shape models, however, limit their clinical utility. Here we present an end-to-end pipeline that uses deep learning for automated view classification, slice selection, phase selection, anatomical landmark localization, and myocardial image segmentation for the automated generation of three-dimensional, biventricular shape models. With this approach, we aim to make cardiac shape modeling a more robust and broadly applicable tool with processing times consistent with clinical workflows. METHODS Cardiovascular magnetic resonance (CMR) images from a cohort of 123 patients with repaired tetralogy of Fallot (rTOF) from two internal sites were used to train and validate each step in the automated pipeline. The complete automated pipeline was tested using CMR images from a cohort of 12 rTOF patients from an internal site and 18 rTOF patients from an external site. Manually and automatically generated shape models from the test set were compared using Euclidean projection distances, global ventricular measurements, and atlas-based shape mode scores. RESULTS The mean absolute error (MAE) between manually and automatically generated shape models in the test set was similar to the voxel resolution of the original CMR images for end-diastolic models (MAE = 1.9 ± 0.5 mm) and end-systolic models (MAE = 2.1 ± 0.7 mm). Global ventricular measurements computed from automated models were in good agreement with those computed from manual models. The average mean absolute difference in shape mode Z-score between manually and automatically generated models was 0.5 standard deviations for the first 20 modes of a reference statistical shape atlas. CONCLUSIONS Using deep learning, accurate three-dimensional, biventricular shape models can be reliably created. This fully automated end-to-end approach dramatically reduces the manual input required to create shape models, thereby enabling the rapid analysis of large-scale datasets and the potential to deploy statistical atlas-based analyses in point-of-care clinical settings. Training data and networks are available from cardiacatlas.org.
Affiliation(s)
- Sachin Govil
- Department of Bioengineering, University of California San Diego, 9500 Gilman Drive, MC 0412, La Jolla, CA 92093-0412 USA
- Brendan T. Crabb
- Department of Bioengineering, University of California San Diego, 9500 Gilman Drive, MC 0412, La Jolla, CA 92093-0412 USA
- Yu Deng
- Department of Biomedical Engineering, King’s College London, London, UK
- Laura Dal Toso
- Department of Biomedical Engineering, King’s College London, London, UK
- Sanjeet Hegde
- Department of Pediatrics, University of California San Diego, La Jolla, CA USA; Division of Cardiology, Rady Children’s Hospital San Diego, San Diego, CA USA
- James C. Perry
- Department of Pediatrics, University of California San Diego, La Jolla, CA USA; Division of Cardiology, Rady Children’s Hospital San Diego, San Diego, CA USA
- Jeffrey H. Omens
- Department of Bioengineering, University of California San Diego, 9500 Gilman Drive, MC 0412, La Jolla, CA 92093-0412 USA
- Albert Hsiao
- Department of Radiology, University of California San Diego, La Jolla, CA USA
- Alistair A. Young
- Department of Biomedical Engineering, King’s College London, London, UK
- Andrew D. McCulloch
- Department of Bioengineering, University of California San Diego, 9500 Gilman Drive, MC 0412, La Jolla, CA 92093-0412 USA
11. Zeng Y, Tsui PH, Pang K, Bin G, Li J, Lv K, Wu X, Wu S, Zhou Z. MAEF-Net: Multi-attention efficient feature fusion network for left ventricular segmentation and quantitative analysis in two-dimensional echocardiography. Ultrasonics 2023;127:106855. PMID: 36206610. DOI: 10.1016/j.ultras.2022.106855.
Abstract
The segmentation of cardiac chambers and the quantification of clinical functional metrics in dynamic echocardiography are key to the clinical diagnosis of heart disease. Identifying the end-diastolic frames (EDFs) and end-systolic frames (ESFs) and manually segmenting the left ventricle across the echocardiographic cardiac cycle before obtaining the left ventricular ejection fraction (LVEF) is a time-consuming and tedious task for clinicians. In this work, we propose a deep learning-based, fully automated echocardiographic analysis method. We propose a multi-attention efficient feature fusion network (MAEF-Net) to automatically segment the left ventricle. EDFs and ESFs in all cardiac cycles are then automatically detected to compute the LVEF. MAEF-Net uses a multi-attention mechanism to guide the network to capture heartbeat features effectively while suppressing noise, and incorporates a deep supervision mechanism and spatial pyramid feature fusion to enhance feature extraction. The proposed method was validated on the public EchoNet-Dynamic dataset (n = 1226). The Dice similarity coefficient (DSC) of the left ventricular segmentation reached (93.10 ± 2.22)%, and the mean absolute error (MAE) of cardiac phase detection was (2.36 ± 2.23) frames. The MAE for predicting LVEF was 6.29%. The proposed method was also validated on a private clinical dataset (n = 22). The DSC of the left ventricular segmentation reached (92.81 ± 2.85)%, and the MAE of cardiac phase detection was (2.25 ± 2.27) frames. The MAE for predicting LVEF was 5.91%, and the Pearson correlation coefficient r reached 0.96. The proposed method may be used as a new method for automatic left ventricular segmentation and quantitative analysis in two-dimensional echocardiography. Our code and trained models will be made available publicly at https://github.com/xiaojinmao-code/MAEF-Net.
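The Dice similarity coefficient used above is the standard overlap measure for binary masks, 2|A∩B| / (|A| + |B|). A minimal NumPy sketch (not the authors' implementation; the toy masks are hypothetical):

```python
import numpy as np

def dice(pred, target, eps=1e-8):
    """Dice similarity coefficient between two binary masks: 2|A∩B| / (|A|+|B|)."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    inter = np.logical_and(pred, target).sum()
    return float(2.0 * inter / (pred.sum() + target.sum() + eps))

a = np.zeros((4, 4), dtype=int); a[1:3, 1:3] = 1  # 4-pixel square
b = np.zeros((4, 4), dtype=int); b[1:3, 1:4] = 1  # 6-pixel rectangle
score = dice(a, b)  # 2*4 / (4+6) = 0.8
```

The `eps` term is a common guard against division by zero when both masks are empty.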
Affiliation(s)
- Yan Zeng
- Department of Biomedical Engineering, Faculty of Environment and Life, Beijing University of Technology, Beijing 100124, China
- Po-Hsiang Tsui
- Department of Medical Imaging and Radiological Sciences, College of Medicine, Chang Gung University, Taoyuan 333323, Taiwan; Institute for Radiological Research, Chang Gung University, Taoyuan 333323, Taiwan; Division of Pediatric Gastroenterology, Department of Pediatrics, Chang Gung Memorial Hospital at Linkou, Taoyuan 333423, Taiwan
- Kunjing Pang
- Department of Echocardiography, State Key Laboratory of Cardiovascular Disease, Fuwai Hospital, National Center for Cardiovascular Diseases, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100037, China
- Guangyu Bin
- Department of Biomedical Engineering, Faculty of Environment and Life, Beijing University of Technology, Beijing 100124, China
- Jiehui Li
- Department of Biomedical Engineering, Faculty of Environment and Life, Beijing University of Technology, Beijing 100124, China; Department of Cardiac Surgery, State Key Laboratory of Cardiovascular Disease, Fuwai Hospital, and National Center for Cardiovascular Diseases, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100037, China
- Ke Lv
- Department of Ultrasound, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100730, China
- Xining Wu
- Department of Ultrasound, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100730, China
- Shuicai Wu
- Department of Biomedical Engineering, Faculty of Environment and Life, Beijing University of Technology, Beijing 100124, China
- Zhuhuang Zhou
- Department of Biomedical Engineering, Faculty of Environment and Life, Beijing University of Technology, Beijing 100124, China
|
12
|
Moal O, Roger E, Lamouroux A, Younes C, Bonnet G, Moal B, Lafitte S. Explicit and automatic ejection fraction assessment on 2D cardiac ultrasound with a deep learning-based approach. Comput Biol Med 2022; 146:105637. [PMID: 35617727 DOI: 10.1016/j.compbiomed.2022.105637] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/28/2021] [Revised: 03/01/2022] [Accepted: 04/29/2022] [Indexed: 12/16/2022]
Abstract
BACKGROUND Ejection fraction (EF) is a key parameter for assessing cardiovascular function in cardiac ultrasound, but its manual assessment is time-consuming and subject to high inter- and intra-observer variability. Deep learning-based methods have the potential to produce accurate, fully automatic EF predictions but suffer from a lack of explainability and interpretability. This study proposes a fully automatic method to reliably and explicitly evaluate biplane left ventricular EF on 2D echocardiography following the recommended modified Simpson's rule. METHODS A deep learning model was trained on apical 4- and 2-chamber echocardiography to segment the left ventricle and locate the mitral valve. Predicted segmentations are then validated with a statistical shape model, which detects potential failures that could impact the EF evaluation. Finally, the end-diastolic and end-systolic frames are identified from the areas of the remaining LV segmentations, and EF is estimated over all available cardiac cycles. RESULTS Our approach was trained on a dataset of 783 patients, and its performance was evaluated on an internal and an external dataset of 200 and 450 patients, respectively. On the internal dataset, EF assessment achieved a mean absolute error of 6.10% and a bias of 1.56 ± 7.58% using multiple cardiac cycles; on the external dataset, the approach evaluated EF with a mean absolute error of 5.39% and a bias of -0.74 ± 7.12%. CONCLUSION Following the recommended guidelines, we propose an end-to-end, fully automatic approach that achieves state-of-the-art performance in biplane EF evaluation while giving explicit details to clinicians.
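For reference, the modified Simpson's rule this abstract follows computes LV volume by the biplane method of disks, V = (π/4) · Σ aᵢbᵢ · (L/n), with disk diameters aᵢ, bᵢ measured in the apical 4- and 2-chamber views and L the LV long-axis length. A minimal sketch with hypothetical, uniform diameters (not the authors' code):

```python
import math

def simpson_biplane_volume(d4c, d2c, length_cm, n=20):
    """Biplane method of disks: V = (pi/4) * sum(a_i * b_i) * (L / n).
    d4c, d2c: per-disk diameters (cm) from the apical 4- and 2-chamber
    views; length_cm: LV long-axis length. Returns volume in mL."""
    assert len(d4c) == len(d2c) == n
    return math.pi / 4.0 * sum(a * b for a, b in zip(d4c, d2c)) * (length_cm / n)

def ejection_fraction(edv_ml, esv_ml):
    """EF (%) from end-diastolic and end-systolic volumes."""
    return (edv_ml - esv_ml) / edv_ml * 100.0

# Hypothetical uniform diameters, for illustration only.
edv = simpson_biplane_volume([4.0] * 20, [4.0] * 20, length_cm=8.0)
esv = simpson_biplane_volume([3.0] * 20, [3.0] * 20, length_cm=7.0)
print(round(edv, 1), round(esv, 1), round(ejection_fraction(edv, esv), 1))
# → 100.5 49.5 50.8
```

In the paper's pipeline the diameters come from the predicted LV segmentations at the end-diastolic and end-systolic frames; the formula itself is the standard guideline computation.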
Affiliation(s)
- Guillaume Bonnet
- Hôpital Cardiologique Haut Lévêque, CHU de Bordeaux, CIC 0005, Pessac, France.
- Stephane Lafitte
- Hôpital Cardiologique Haut Lévêque, CHU de Bordeaux, CIC 0005, Pessac, France.
|
13
|
Master Frame Extraction of Fetal Cardiac Images Using B Mode Ultrasound Images. Journal of Biomimetics, Biomaterials and Biomedical Engineering 2022. [DOI: 10.4028/www.scientific.net/jbbbe.54.51] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/21/2022]
Abstract
Fetal echocardiography is used for monitoring the fetal heart and for detecting congenital heart disease (CHD). The fetal cardiac four-chamber view is widely used in the preliminary examination for CHD detection. The end-diastole frame, manually picked by the clinician during examination/screening, is generally used for analysing the fetal cardiac chambers; this manual selection is subject to intra- and inter-observer errors and is time-consuming. The proposed study aims to automate this process by determining a frame, referred to as the master frame, from the cine loop sequences that can be used for the analysis of the fetal heart chambers instead of the clinically chosen diastole frame. The proposed framework determines the correlation between the reference (first) frame and the successive frames to identify one cardiac cycle. The master frame is then formed by superimposing all the frames belonging to that cycle, and is compared with the clinically chosen diastole frame in terms of fidelity metrics such as the Dice coefficient, Hausdorff distance, mean square error, and structural similarity index. For the dataset used in this study, the average values of these metrics (0.73 for Dice, 13.94 for Hausdorff distance, 0.99 for structural similarity index, and 0.035 for mean square error) confirm the suitability of the proposed master frame extraction, thereby avoiding manual intervention by the clinician.
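The correlation-based cycle identification and frame superimposition described above can be sketched as follows. This is a simplified illustration on synthetic frames, not the authors' implementation; the `skip` parameter is an assumption to avoid the trivially high correlation of the first few frames with frame 0:

```python
import numpy as np

def cycle_end(frames, skip=2):
    """Index of the frame most similar (Pearson correlation) to the
    first frame, i.e. roughly one cardiac cycle later."""
    ref = frames[0].ravel().astype(float)
    corrs = [np.corrcoef(ref, f.ravel().astype(float))[0, 1]
             for f in frames[skip:]]
    return skip + int(np.argmax(corrs))

def master_frame(frames):
    """Superimpose (pixel-wise average) all frames of one cycle."""
    end = cycle_end(frames)
    return np.asarray(frames[:end + 1], dtype=float).mean(axis=0)

# Synthetic periodic sequence: a pattern that repeats every 8 frames.
rng = np.random.default_rng(0)
img = rng.standard_normal((8, 8))
frames = [np.roll(img, t % 8, axis=1) for t in range(12)]
print(cycle_end(frames))  # → 8
```

On real cine loops the correlation curve is noisier, and the superimposed master frame is what gets compared against the clinically chosen diastole frame via the fidelity metrics listed in the abstract.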
|
14
|
Azarmehr N, Ye X, Howard JP, Lane ES, Labs R, Shun-Shin MJ, Cole GD, Bidaut L, Francis DP, Zolgharni M. Neural architecture search of echocardiography view classifiers. J Med Imaging (Bellingham) 2021; 8:034002. [PMID: 34179218 PMCID: PMC8217960 DOI: 10.1117/1.jmi.8.3.034002] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/28/2020] [Accepted: 06/04/2021] [Indexed: 11/14/2022] Open
Abstract
Purpose: Echocardiography is the most commonly used modality for assessing the heart in clinical practice. In an echocardiographic exam, an ultrasound probe samples the heart from different orientations and positions, creating different viewpoints for assessing cardiac function. Determining the probe viewpoint is an essential step in automatic echocardiographic image analysis. Approach: In this study, convolutional neural networks are used for the automated identification of 14 different anatomical echocardiographic views (more than any previous study) in a dataset of 8732 videos acquired from 374 patients. A differentiable architecture search approach was used to design small neural network architectures for rapid inference while maintaining high accuracy. The impact of image quality and resolution, training dataset size, and number of echocardiographic view classes on the efficacy of the models was also investigated. Results: In contrast to the deeper classification architectures, the proposed models had a significantly lower number of trainable parameters (up to 99.9% reduction), achieved comparable classification performance (accuracy 88.4% to 96%, precision 87.8% to 95.2%, recall 87.1% to 95.1%), and delivered real-time performance with an inference time per image of 3.6 to 12.6 ms. Conclusion: Compared with standard classification neural network architectures, the proposed models are faster, achieve comparable classification performance, and require less training data. Such models can be used for real-time detection of the standard views.
Affiliation(s)
- Neda Azarmehr
- University of Lincoln, School of Computer Science, Lincoln, United Kingdom
- Xujiong Ye
- University of Lincoln, School of Computer Science, Lincoln, United Kingdom
- James P. Howard
- Imperial College London, National Heart and Lung Institute, London, United Kingdom
- Elisabeth S. Lane
- University of West London, School of Computing and Engineering, London, United Kingdom
- Robert Labs
- University of West London, School of Computing and Engineering, London, United Kingdom
- Matthew J. Shun-Shin
- Imperial College London, National Heart and Lung Institute, London, United Kingdom
- Graham D. Cole
- Imperial College London, National Heart and Lung Institute, London, United Kingdom
- Luc Bidaut
- University of Lincoln, School of Computer Science, Lincoln, United Kingdom
- Darrel P. Francis
- Imperial College London, National Heart and Lung Institute, London, United Kingdom
- Massoud Zolgharni
- Imperial College London, National Heart and Lung Institute, London, United Kingdom
- University of West London, School of Computing and Engineering, London, United Kingdom
|