1. Jani VP, Ostovaneh M, Chamera E, Kato Y, Lima JAC, Ambale-Venkatesh B. Deep learning for automatic volumetric segmentation of left ventricular myocardium and ischaemic scar from multi-slice late gadolinium enhancement cardiovascular magnetic resonance. Eur Heart J Cardiovasc Imaging 2024;25:829-838. PMID: 38244222. DOI: 10.1093/ehjci/jeae022.
Abstract
AIMS This study describes the application of deep learning to automatic volumetric segmentation of left ventricular (LV) myocardium and scar and to automated quantification of myocardial ischaemic scar burden from late gadolinium enhancement cardiovascular magnetic resonance (LGE-CMR). METHODS AND RESULTS We included 501 images and manual segmentations of short-axis LGE-CMR from over 20 multinational sites; 377 studies were used for training and 124 studies from unique participants for internal validation. A third set of 52 images was used for external evaluation. Three models, U-Net, Cascaded U-Net, and U-Net++, were trained with a novel adaptive weighted categorical cross-entropy loss function. Model performance was evaluated using concordance correlation coefficients (CCCs) for LV mass and per cent myocardial scar burden. Cascaded U-Net was the best model for quantification of LV mass and scar percentage, with mean differences of -5 ± 23 g for LV mass, -0.4 ± 11.2 g for scar mass, and -0.8 ± 7% for per cent scar. CCCs were 0.87, 0.77, and 0.78 for LV mass, scar mass, and per cent scar burden, respectively, in the internal validation set and 0.75, 0.71, and 0.69, respectively, in the external test set. For segmental scar mass, CCC was 0.74 for apical scar, 0.91 for mid-ventricular scar, and 0.73 for basal scar, demonstrating moderate to strong agreement. CONCLUSION We successfully trained a convolutional neural network for volumetric segmentation and analysis of LV scar burden from LGE-CMR images in a large, multinational cohort of participants with ischaemic scar.
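The evaluation statistic used above, Lin's concordance correlation coefficient, is a standard agreement measure rather than anything specific to this paper. A minimal plain-Python sketch (function and variable names are illustrative, not from the authors' code):

```python
def concordance_cc(x, y):
    """Lin's concordance correlation coefficient between two paired
    measurement lists (e.g. manual vs. automated LV mass in grams).
    Returns 1.0 for perfect agreement."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    # population (biased) variances and covariance
    vx = sum((a - mx) ** 2 for a in x) / n
    vy = sum((b - my) ** 2 for b in y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    # CCC penalises both poor correlation and location/scale shifts
    return 2 * cov / (vx + vy + (mx - my) ** 2)
```

Unlike Pearson's r, the (mx - my)² term in the denominator penalises systematic bias between the two readers, which is why CCC is a stricter agreement measure than plain correlation.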
Affiliation(s)
- Vivek P Jani
- Division of Cardiology, Johns Hopkins University School of Medicine, 600 N Wolfe St, Blalock 524, Baltimore, MD 21297-0409, USA
- Department of Biomedical Engineering, Johns Hopkins University School of Medicine, Baltimore, MD 21205, USA
- Mohammad Ostovaneh
- Division of Cardiology, Johns Hopkins University School of Medicine, 600 N Wolfe St, Blalock 524, Baltimore, MD 21297-0409, USA
- Elzbieta Chamera
- Division of Cardiology, Johns Hopkins University School of Medicine, 600 N Wolfe St, Blalock 524, Baltimore, MD 21297-0409, USA
- Yoko Kato
- Division of Cardiology, Johns Hopkins University School of Medicine, 600 N Wolfe St, Blalock 524, Baltimore, MD 21297-0409, USA
- Joao A C Lima
- Division of Cardiology, Johns Hopkins University School of Medicine, 600 N Wolfe St, Blalock 524, Baltimore, MD 21297-0409, USA
2. Ding W, Li L, Qiu J, Wang S, Huang L, Chen Y, Yang S, Zhuang X. Aligning Multi-Sequence CMR Towards Fully Automated Myocardial Pathology Segmentation. IEEE Trans Med Imaging 2023;42:3474-3486. PMID: 37347625. DOI: 10.1109/tmi.2023.3288046.
Abstract
Myocardial pathology segmentation (MyoPS) is critical for the risk stratification and treatment planning of myocardial infarction (MI). Multi-sequence cardiac magnetic resonance (MS-CMR) images can provide valuable information. For instance, balanced steady-state free precession cine sequences present clear anatomical boundaries, while late gadolinium enhancement and T2-weighted CMR sequences visualize myocardial scar and edema of MI, respectively. Existing methods usually fuse anatomical and pathological information from different CMR sequences for MyoPS but assume that these images have been spatially aligned. However, MS-CMR images are usually unaligned due to respiratory motion in clinical practice, which poses additional challenges for MyoPS. This work presents an automatic MyoPS framework for unaligned MS-CMR images. Specifically, we design a combined computing model for simultaneous image registration and information fusion, which aggregates multi-sequence features into a common space to extract anatomical structures (i.e., myocardium). Consequently, we can highlight the informative regions in the common space via the extracted myocardium to improve MyoPS performance, exploiting the spatial relationship between myocardial pathologies and myocardium. Experiments on a private MS-CMR dataset and a public dataset from the MYOPS2020 challenge show that our framework achieves promising performance for fully automatic MyoPS.
3. Sun S, Wang Y, Yang J, Feng Y, Tang L, Liu S, Ning H. Topology-sensitive weighting model for myocardial segmentation. Comput Biol Med 2023;165:107286. PMID: 37633088. DOI: 10.1016/j.compbiomed.2023.107286.
Abstract
Accurate myocardial segmentation is crucial for the diagnosis of various heart diseases. However, segmentation results often suffer from topological errors, such as broken connections and holes, especially in cases of poor image quality; such errors are unacceptable in clinical diagnosis. We propose a Topology-Sensitive Weight (TSW) model that preserves both pixel-wise accuracy and topological correctness. Specifically, a Position Weighting Update (PWU) strategy with a Boundary-Sensitive Topology (BST) module guides the model to focus on positions where topological features are sensitive to pixel values, and a Myocardial Integrity Topology (MIT) module serves as a guide for maintaining myocardial integrity. We evaluate the TSW model on the CAMUS dataset and a private echocardiography myocardial segmentation dataset. Qualitative and quantitative results show that the TSW model significantly enhances topological accuracy while maintaining pixel-wise precision.
Affiliation(s)
- Song Sun
- Computer Science and Engineering, Northeastern University, Shenyang, China; Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, China; National Frontiers Science Center for Industrial Intelligence and Systems Optimization, Shenyang, China
- Yonghuai Wang
- Department of Cardiovascular Ultrasound, The First Hospital of China Medical University, Shenyang, China
- Jinzhu Yang
- Computer Science and Engineering, Northeastern University, Shenyang, China; Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, China
- Yong Feng
- Computer Science and Engineering, Northeastern University, Shenyang, China; Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, China
- Lingzhi Tang
- Computer Science and Engineering, Northeastern University, Shenyang, China; Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, China
- Shuo Liu
- Department of Cardiovascular Ultrasound, The First Hospital of China Medical University, Shenyang, China
- Hongxia Ning
- Department of Cardiovascular Ultrasound, The First Hospital of China Medical University, Shenyang, China
4. Fallahdizcheh A, Laroia S, Wang C. Sequential Active Contour Based on Morphological-Driven Thresholding for Ultrasound Image Segmentation of Ascites. IEEE J Biomed Health Inform 2023;27:4305-4316. PMID: 37335794. DOI: 10.1109/jbhi.2023.3286869.
Abstract
Paracentesis is a routine, high-demand procedure that would benefit greatly from semi-autonomous execution. One of the most important techniques for enabling semi-autonomous paracentesis is accurate and efficient segmentation of ascites from ultrasound images. Ascites, however, varies significantly in shape and noise characteristics across patients, and its shape and size change dynamically during paracentesis. This makes most existing image segmentation methods either too slow or too inaccurate for separating ascites from its background. In this article, we propose a two-stage active contour method for accurate and efficient segmentation of ascites. First, a morphological-driven thresholding method locates the initial contour of the ascites automatically. The identified initial contour is then fed into a novel sequential active contour algorithm that segments the ascites from the background. The proposed method is tested and compared with state-of-the-art active contour methods on over 100 real ultrasound images of ascites, and the results show the superiority of our method in both accuracy and time efficiency.
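The abstract does not disclose the details of the morphological-driven thresholding, so the following is only a generic illustration of the kind of automatic threshold selection (here, Otsu's method) that such contour-initialization schemes commonly build on; all names are hypothetical:

```python
def otsu_threshold(pixels):
    """Pick the intensity threshold t that maximises between-class
    variance; pixels <= t form one class (e.g. the dark fluid region),
    pixels > t the other."""
    values = sorted(set(pixels))
    n = len(pixels)
    best_t, best_var = values[0], -1.0
    for t in values[:-1]:  # the largest value would leave one class empty
        low = [p for p in pixels if p <= t]
        high = [p for p in pixels if p > t]
        w0, w1 = len(low) / n, len(high) / n
        m0 = sum(low) / len(low)
        m1 = sum(high) / len(high)
        var_between = w0 * w1 * (m0 - m1) ** 2
        if var_between > best_var:
            best_t, best_var = t, var_between
    return best_t
```

On a real ultrasound frame one would run this on the intensity histogram and then apply morphological opening/closing to the resulting binary mask before extracting the initial contour.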
5. Mamalakis M, Garg P, Nelson T, Lee J, Swift AJ, Wild JM, Clayton RH. Artificial Intelligence framework with traditional computer vision and deep learning approaches for optimal automatic segmentation of left ventricle with scar. Artif Intell Med 2023;143:102610. PMID: 37673578. DOI: 10.1016/j.artmed.2023.102610.
Abstract
Automatic segmentation of the cardiac left ventricle with scar remains a challenging and clinically significant task, as it is essential for patient diagnosis and treatment pathways. This study aimed to develop a novel framework and cost function for optimal automatic segmentation of the left ventricle with scar from LGE-MRI images. To ensure generalization, an unbiased validation protocol was established using out-of-distribution (OOD) internal and external validation cohorts and intra-observer and inter-observer variability ground truths. The framework combines traditional computer vision techniques with deep learning to achieve optimal segmentation results. The traditional approach uses multi-atlas techniques, active contours, and k-means methods, while the deep learning approach draws on various deep learning techniques and networks. The study found that the traditional computer vision techniques delivered more accurate results than deep learning, except in cases with breath-hold misalignment errors. The optimal solution of the framework achieved robust and generalized results, with Dice scores of 82.8 ± 6.4% and 72.1 ± 4.6% in the internal and external OOD cohorts, respectively. The developed framework offers a high-performance solution for automatic segmentation of the left ventricle with scar from LGE-MRI. Unlike existing state-of-the-art approaches, it achieves unbiased results across different hospitals and vendors without training or tuning on hospital cohorts. The framework offers experts a valuable tool for fully automatic segmentation of the left ventricle with scar from a single-modality cardiac scan.
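The Dice scores reported above are the standard overlap metric for segmentation evaluation, not something specific to this framework. A self-contained sketch (representing masks as flat 0/1 lists is an illustrative simplification):

```python
def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks,
    given as equal-length flat lists of 0s and 1s."""
    inter = sum(a * b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    if total == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2 * inter / total
```

A Dice score of ~0.83, as in the internal cohort above, means the predicted and ground-truth masks overlap in roughly 83% of their combined area.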
Affiliation(s)
- Michail Mamalakis
- Insigneo Institute for in-silico Medicine, University of Sheffield, Sheffield, S1 4DP, UK; Department of Computer Science, University of Sheffield, Regent Court, Sheffield, S1 4DP, UK
- Pankaj Garg
- Department of Cardiology, Sheffield Teaching Hospitals, Sheffield S5 7AU, UK
- Tom Nelson
- Department of Cardiology, Sheffield Teaching Hospitals, Sheffield S5 7AU, UK
- Justin Lee
- Department of Cardiology, Sheffield Teaching Hospitals, Sheffield S5 7AU, UK
- Andrew J Swift
- Department of Computer Science, University of Sheffield, Regent Court, Sheffield, S1 4DP, UK; Department of Infection, Immunity & Cardiovascular Disease, University of Sheffield, Sheffield, UK
- James M Wild
- Insigneo Institute for in-silico Medicine, University of Sheffield, Sheffield, S1 4DP, UK; Polaris, Imaging Sciences, Department of Infection, Immunity and Cardiovascular Disease, University of Sheffield, Sheffield, UK
- Richard H Clayton
- Insigneo Institute for in-silico Medicine, University of Sheffield, Sheffield, S1 4DP, UK; Department of Computer Science, University of Sheffield, Regent Court, Sheffield, S1 4DP, UK
6. Liu Y, Xing W, Zhao M, Lin M. A new classification method for diagnosing COVID-19 pneumonia based on joint CNN features of chest X-ray images and parallel pyramid MLP-mixer module. Neural Comput Appl 2023;35:1-13. PMID: 37362575. PMCID: PMC10147369. DOI: 10.1007/s00521-023-08604-y.
Abstract
During the past three years, coronavirus disease 2019 (COVID-19) has swept the world. Rapid and accurate recognition of COVID-19 pneumonia is, therefore, of great importance. To address this problem, we propose a new deep learning pipeline for diagnosing COVID-19 pneumonia from chest X-ray images of normal, COVID-19, and other pneumonia patients. In detail, a self-trained YOLO-v4 network first locates and segments the thoracic region, and the output images are scaled to a common size. A pre-trained convolutional neural network then extracts features from 13 convolutional layers, which are fused with the original image to form a 14-dimensional image matrix. This matrix is fed into three parallel pyramid multi-layer perceptron (MLP)-Mixer modules for comprehensive feature extraction through spatial and channel fusion at different scales, capturing more extensive feature correlations. Finally, combining all image features from the 14-channel output, classification is performed by two fully connected layers and a Softmax classifier. Extensive experiments on a total of 4099 chest X-ray images verify the effectiveness of the proposed method. Experimental results indicate that the proposed method achieves the best performance in almost all cases, making it well suited for auxiliary diagnosis of COVID-19, with great potential for clinical application.
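The final stage described above, a Softmax classifier over fully connected outputs, follows the standard formulation. A numerically stable plain-Python sketch (not the authors' code; the max-shift trick is a common implementation detail they may or may not use):

```python
import math

def softmax(logits):
    """Map a list of class logits to a probability distribution.
    Subtracting the max logit first avoids overflow in exp()."""
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]
```

The predicted class (normal, COVID-19, or other pneumonia) is then simply the index of the largest probability.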
Affiliation(s)
- Yiwen Liu
- College of Information Science and Technology, Donghua University, Shanghai, People’s Republic of China
- Wenyu Xing
- School of Information Science and Technology, Fudan University, Shanghai, People’s Republic of China
- Mingbo Zhao
- College of Information Science and Technology, Donghua University, Shanghai, People’s Republic of China
- Department of Electrical Engineering, City University of Hong Kong, Kowloon Tong, Hong Kong, People’s Republic of China
- Mingquan Lin
- Department of Electrical Engineering, City University of Hong Kong, Kowloon Tong, Hong Kong, People’s Republic of China
7. Primary Open-Angle Glaucoma Diagnosis From Optic Disc Photographs Using a Siamese Network. Ophthalmol Sci 2022;2:100209. PMID: 36531584. PMCID: PMC9754976. DOI: 10.1016/j.xops.2022.100209.
Abstract
Purpose Primary open-angle glaucoma (POAG) is one of the leading causes of irreversible blindness in the United States and worldwide. Although deep learning methods have been proposed to diagnose POAG, these methods all use a single image as input. In contrast, glaucoma specialists typically compare the follow-up image with the baseline image to diagnose incident glaucoma. To simulate this process, we proposed a Siamese neural network, POAGNet, to detect POAG from optic disc photographs. Design POAGNet, an algorithm for glaucoma diagnosis, is developed using optic disc photographs. Participants POAGNet was trained and evaluated on 2 data sets: (1) 37,339 optic disc photographs from 1636 Ocular Hypertension Treatment Study (OHTS) participants and (2) 3684 optic disc photographs from the Sequential fundus Images for Glaucoma (SIG) data set. Gold standard labels were obtained using reading center grades. Methods We proposed a Siamese network model, POAGNet, to simulate the clinical process of identifying POAG from optic disc photographs. POAGNet consists of 2 side outputs for deep supervision and uses convolution to measure the similarity between the 2 networks. Main Outcome Measures The main outcome measures were the area under the receiver operating characteristic curve, accuracy, sensitivity, and specificity. Results In POAG diagnosis, extensive experiments show that POAGNet performed better than the best state-of-the-art model on the OHTS test set (area under the curve [AUC] 0.9587 versus 0.8750). It also outperformed the baseline models on the SIG test set (AUC 0.7518 versus 0.6434). To assess transferability, we also validated the impact of cross-data set variability: the model trained on OHTS achieved an AUC of 0.7490 on SIG, comparable to the previous model trained on the same data set. When trained on the combination of SIG and OHTS, our model achieved a higher AUC than the single-data set model (0.8165 versus 0.7518). These results demonstrate the relative generalizability of POAGNet. Conclusions By simulating the clinical grading process, POAGNet demonstrated high accuracy in POAG diagnosis. These results highlight the potential of deep learning to assist and enhance clinical POAG diagnosis. POAGNet is publicly available at https://github.com/bionlplab/poagnet.
8. Lin M, Liu L, Gorden M, Kass M, Van Tassel S, Wang F, Peng Y. Multi-scale Multi-structure Siamese Network (MMSNet) for Primary Open-Angle Glaucoma Prediction. Machine Learning in Medical Imaging (MLMI Workshop) 2022;13583:436-445. PMID: 36656619. PMCID: PMC9844668. DOI: 10.1007/978-3-031-21014-3_45.
Abstract
Primary open-angle glaucoma (POAG) is one of the leading causes of irreversible blindness in the United States and worldwide. Predicting POAG before onset plays an important role in early treatment. Although deep learning methods have been proposed to predict POAG, they mainly focus on current-status prediction, and all use a single image as input. Glaucoma specialists, on the other hand, identify a glaucomatous eye by comparing the follow-up optic nerve image with the baseline, along with supplementary clinical data. To simulate this process, we proposed a Multi-scale Multi-structure Siamese Network (MMSNet) to predict future POAG events from fundus photographs. MMSNet consists of two side-outputs for deep supervision and 2D blocks that utilize two-dimensional features to assist classification. The network was trained and evaluated on a large dataset: 37,339 fundus photographs from 1,636 Ocular Hypertension Treatment Study (OHTS) participants. Extensive experiments show that MMSNet outperforms the state of the art on two "POAG prediction before onset" tasks, with AUCs of 0.9312 and 0.9507, which are 0.2204 and 0.1490 higher than the state of the art, respectively. In addition, an ablation study was performed to check the contribution of different components. These results highlight the potential of deep learning to assist and enhance the prediction of future POAG events. The proposed network is publicly available at https://github.com/bionlplab/MMSNet.
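The AUC figures quoted above come from standard ROC analysis rather than anything MMSNet-specific. As a sketch, AUC can be computed directly from its probabilistic definition, the chance that a randomly chosen positive case outscores a randomly chosen negative one (the O(n²) pairwise form below is fine for illustration; names are mine):

```python
def roc_auc(labels, scores):
    """Area under the ROC curve via the pairwise-comparison
    definition: P(score_pos > score_neg), with ties counting half."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos) * len(neg))
```

Production code would use a rank-based O(n log n) formulation (e.g. scikit-learn's `roc_auc_score`), but the pairwise form makes the meaning of an AUC of 0.93 concrete: 93% of positive/negative pairs are ranked correctly.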
Affiliation(s)
- Lei Liu
- Washington University School of Medicine in St. Louis, St. Louis, MO, USA
- Mae Gorden
- Washington University School of Medicine in St. Louis, St. Louis, MO, USA
- Michael Kass
- Washington University School of Medicine in St. Louis, St. Louis, MO, USA
- Fei Wang
- Weill Cornell Medicine, New York, NY, USA
- Yifan Peng
- Weill Cornell Medicine, New York, NY, USA
9. Lin M, Hou B, Liu L, Gordon M, Kass M, Wang F, Van Tassel SH, Peng Y. Automated diagnosing primary open-angle glaucoma from fundus image by simulating human's grading with deep learning. Sci Rep 2022;12:14080. PMID: 35982106. PMCID: PMC9388536. DOI: 10.1038/s41598-022-17753-4.
Abstract
Primary open-angle glaucoma (POAG) is a leading cause of irreversible blindness worldwide. Although deep learning methods have been proposed to diagnose POAG, it remains challenging to develop a robust and explainable algorithm that automatically facilitates the downstream diagnostic tasks. In this study, we present an automated classification algorithm, GlaucomaNet, to identify POAG using variable fundus photographs from different populations and settings. GlaucomaNet consists of two convolutional neural networks that simulate the human grading process: learning the discriminative features and fusing the features for grading. We evaluated GlaucomaNet on two datasets: Ocular Hypertension Treatment Study (OHTS) participants and the Large-scale Attention-based Glaucoma (LAG) dataset. GlaucomaNet achieved AUCs of 0.904 and 0.997 for POAG diagnosis on the OHTS and LAG datasets, respectively. An ensemble of network architectures further improved diagnostic accuracy. By simulating the human grading process, GlaucomaNet demonstrated high accuracy with increased transparency in POAG diagnosis (comprehensiveness scores of 97% and 36%). These methods also address two well-known challenges in the field: the need for greater image data diversity and heavy reliance on perimetry for POAG diagnosis. These results highlight the potential of deep learning to assist and enhance clinical POAG diagnosis. GlaucomaNet is publicly available at https://github.com/bionlplab/GlaucomaNet.
Affiliation(s)
- Mingquan Lin
- Department of Population Health Sciences, Weill Cornell Medicine, New York, NY, USA
- Bojian Hou
- Department of Population Health Sciences, Weill Cornell Medicine, New York, NY, USA
- Lei Liu
- Institute for Public Health, Washington University School of Medicine, St. Louis, MO, USA
- Mae Gordon
- Department of Ophthalmology and Visual Sciences, Washington University School of Medicine, St. Louis, MO, USA
- Michael Kass
- Department of Ophthalmology and Visual Sciences, Washington University School of Medicine, St. Louis, MO, USA
- Fei Wang
- Department of Population Health Sciences, Weill Cornell Medicine, New York, NY, USA
- Yifan Peng
- Department of Population Health Sciences, Weill Cornell Medicine, New York, NY, USA