1
Zhong S, Wang W, Feng Q, Zhang Y, Ning Z. Cross-view discrepancy-dependency network for volumetric medical image segmentation. Med Image Anal 2025; 99:103329. PMID: 39236632. DOI: 10.1016/j.media.2024.103329.
Abstract
Limited data pose a crucial challenge for deep learning-based volumetric medical image segmentation, and many methods attempt to alleviate this issue by representing the volume through its subvolumes (i.e., multi-view slices). However, such methods generally sacrifice inter-slice spatial continuity. A promising avenue is to incorporate multi-view information into the network to enhance volume representation learning, but most existing studies overlook the discrepancy and dependency across different views, ultimately limiting the potential of multi-view representations. To this end, we propose a cross-view discrepancy-dependency network (CvDd-Net) for volumetric medical image segmentation, which exploits multi-view slice priors to assist volume representation learning and explores view discrepancy and view dependency for performance improvement. Specifically, we develop a discrepancy-aware morphology reinforcement (DaMR) module to effectively learn view-specific representations by mining morphological information (i.e., the boundary and position of the object). In addition, we design a dependency-aware information aggregation (DaIA) module to adequately harness the multi-view slice priors, enhancing the individual view representations of the volume and integrating them based on cross-view dependency. Extensive experiments on four medical image datasets (Thyroid, Cervix, Pancreas, and Glioma) demonstrate the efficacy of the proposed method on both fully-supervised and semi-supervised tasks.
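As a schematic of the multi-view idea above, the sketch below extracts the three orthogonal slice stacks of a volume and fuses per-view logits with learned weights. It is a minimal PyTorch illustration, not the paper's method: the internals of the DaMR and DaIA modules are not specified in the abstract, and the softmax-weighted sum stands in for the cross-view dependency-based integration.

```python
# Minimal sketch of multi-view slice extraction and weighted cross-view fusion.
# The softmax-weighted aggregation is a generic stand-in for the DaIA step.
import torch
import torch.nn as nn

def orthogonal_views(volume: torch.Tensor):
    """Split a (B, C, D, H, W) volume into axial/coronal/sagittal slice stacks."""
    axial    = volume                          # slices along D
    coronal  = volume.permute(0, 1, 3, 2, 4)   # slices along H
    sagittal = volume.permute(0, 1, 4, 2, 3)   # slices along W
    return axial, coronal, sagittal

class WeightedViewFusion(nn.Module):
    """Weight each view's logits by a learned scalar score."""
    def __init__(self, num_views: int = 3):
        super().__init__()
        self.view_scores = nn.Parameter(torch.zeros(num_views))

    def forward(self, view_logits):
        # view_logits: list of (B, K, D, H, W) tensors already mapped back
        # to the volume's native orientation.
        weights = torch.softmax(self.view_scores, dim=0)
        return sum(w * v for w, v in zip(weights, view_logits))
```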
Affiliation(s)
- Shengzhou Zhong
- School of Biomedical Engineering, Southern Medical University, Guangzhou, Guangdong, 510515, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Guangzhou, Guangdong, 510515, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou 510515, China
- Wenxu Wang
- School of Biomedical Engineering, Southern Medical University, Guangzhou, Guangdong, 510515, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Guangzhou, Guangdong, 510515, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou 510515, China
- Qianjin Feng
- School of Biomedical Engineering, Southern Medical University, Guangzhou, Guangdong, 510515, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Guangzhou, Guangdong, 510515, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou 510515, China
- Yu Zhang
- School of Biomedical Engineering, Southern Medical University, Guangzhou, Guangdong, 510515, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Guangzhou, Guangdong, 510515, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou 510515, China
- Zhenyuan Ning
- School of Biomedical Engineering, Southern Medical University, Guangzhou, Guangdong, 510515, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Guangzhou, Guangdong, 510515, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou 510515, China
2
Gi Y, Oh G, Jo Y, Lim H, Ko Y, Hong J, Lee E, Park S, Kwak T, Kim S, Yoon M. Study of multistep Dense U-Net-based automatic segmentation for head MRI scans. Med Phys 2024; 51:2230-2238. PMID: 37956307. DOI: 10.1002/mp.16824.
Abstract
BACKGROUND Despite extensive efforts to obtain accurate segmentation of head magnetic resonance imaging (MRI) scans, it remains challenging, primarily due to variations in intensity distribution that depend on the equipment and parameters used. PURPOSE The goal of this study is to evaluate the effectiveness of an automatic segmentation method for head MRI scans using a multistep Dense U-Net (MDU-Net) architecture. METHODS The MDU-Net-based method comprises two steps. The first step is to segment the scalp, skull, and whole brain from head MRI scans using a convolutional neural network (CNN). In this step, a hybrid network is used that combines 2.5D Dense U-Net and 3D Dense U-Net structures. The hybrid network acquires logits in three orthogonal planes (axial, coronal, and sagittal) using 2.5D Dense U-Nets and fuses them by averaging. The fused probability map, together with the head MRI scans, then serves as the input to a 3D Dense U-Net. In this process, different ratios of active contour loss and focal loss are applied. The second step is to segment the cerebrospinal fluid (CSF), white matter, and gray matter from the extracted brain MRI scans using CNNs. In this step, the histogram of the extracted brain MRI scans is standardized, and a 2.5D Dense U-Net is used to further segment the brain's specific tissues using the focal loss. A dataset of 100 head MRI scans from the OASIS-3 dataset was used for training, internal validation, and testing, with ratios of 80%, 10%, and 10%, respectively. Using the proposed approach, we segmented the head MRI scans into five areas (scalp, skull, CSF, white matter, and gray matter) and evaluated the results using the Dice similarity coefficient (DSC), Hausdorff distance (HD), and average symmetric surface distance (ASSD) as evaluation metrics. We compared these results with those obtained using the Res-U-Net, Dense U-Net, U-Net++, Swin-Unet, and H-Dense U-Net models. RESULTS The MDU-Net model showed DSC values of 0.933, 0.830, 0.833, 0.953, and 0.917 for the scalp, skull, CSF, white matter, and gray matter, respectively. The corresponding HD values were 2.37, 2.89, 2.13, 1.52, and 1.53 mm, and the ASSD values were 0.50, 1.63, 1.28, 0.26, and 0.27 mm. Compared with the other models, the MDU-Net model demonstrated the best DSC values for the scalp, CSF, white matter, and gray matter. Compared with the H-Dense U-Net model, which showed the highest performance among the other models, the MDU-Net model showed a substantial improvement in HD, particularly in the gray matter region, with a difference of approximately 9%. In terms of ASSD, the MDU-Net model outperformed the H-Dense U-Net model by approximately 7% in the white matter and approximately 9% in the gray matter. CONCLUSION Compared with existing models in terms of DSC, HD, and ASSD, the proposed MDU-Net model demonstrated the best performance on average, showing its potential to enhance the accuracy of automatic segmentation of head MRI scans.
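The first-stage fusion described above comes down to two tensor operations. The following sketch, a minimal reading of the abstract that assumes the coronal and sagittal logits have already been realigned to the axial frame, averages the per-plane 2.5D logits into a probability map and concatenates it with the scan to form the 3D Dense U-Net input.

```python
# Sketch of the first-stage fusion: per-plane 2.5D logits are averaged into
# one probability map, which is concatenated with the raw scan and passed to
# the 3D network. The networks themselves are outside this sketch.
import torch

def fuse_plane_logits(axial, coronal, sagittal):
    """Average logits from the three orthogonal 2.5D Dense U-Nets.
    Each tensor is (B, K, D, H, W), realigned to the axial frame."""
    fused = (axial + coronal + sagittal) / 3.0
    return torch.softmax(fused, dim=1)          # fused probability map

def second_stage_input(scan, prob_map):
    """Concatenate the scan (B, 1, D, H, W) with the fused probabilities
    to form the input of the 3D Dense U-Net."""
    return torch.cat([scan, prob_map], dim=1)
```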
Affiliation(s)
- Yongha Gi
- Department of Bio-medical Engineering, Korea University, Seoul, Republic of Korea
- Geon Oh
- Department of Bio-medical Engineering, Korea University, Seoul, Republic of Korea
- Yunhui Jo
- Institute of Global Health Technology (IGHT), Korea University, Seoul, Republic of Korea
- Hyeongjin Lim
- Department of Bio-medical Engineering, Korea University, Seoul, Republic of Korea
- Yousun Ko
- Department of Bio-medical Engineering, Korea University, Seoul, Republic of Korea
- Jinyoung Hong
- Department of Bio-medical Engineering, Korea University, Seoul, Republic of Korea
- Eunjun Lee
- Department of Bio-medical Engineering, Korea University, Seoul, Republic of Korea
- Sangmin Park
- Department of Bio-medical Engineering, Korea University, Seoul, Republic of Korea
- Field Cure Ltd., Seoul, Republic of Korea
- Taemin Kwak
- Department of Bio-medical Engineering, Korea University, Seoul, Republic of Korea
- Field Cure Ltd., Seoul, Republic of Korea
- Sangcheol Kim
- Department of Bio-medical Engineering, Korea University, Seoul, Republic of Korea
- Field Cure Ltd., Seoul, Republic of Korea
- Myonggeun Yoon
- Department of Bio-medical Engineering, Korea University, Seoul, Republic of Korea
- Field Cure Ltd., Seoul, Republic of Korea
3
Baheti B, Pati S, Menze B, Bakas S. Leveraging 2D deep learning ImageNet-trained models for native 3D medical image analysis. Brainlesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries. BrainLes (Workshop) 2023; 13769:68-79. PMID: 37928819. PMCID: PMC10623403. DOI: 10.1007/978-3-031-33842-7_6.
Abstract
Convolutional neural networks (CNNs) have shown promising performance in various 2D computer vision tasks owing to the availability of large amounts of 2D training data. In contrast, medical imaging deals with 3D data and usually lacks the equivalent extent and diversity of data for developing AI models. Transfer learning provides the means to use models trained for one application as a starting point for another. In this work, we leverage 2D pre-trained models as a starting point in 3D medical applications by exploring the concept of Axial-Coronal-Sagittal (ACS) convolutions. We have incorporated ACS convolutions as an alternative to native 3D convolutions in the Generally Nuanced Deep Learning Framework (GaNDLF), providing various well-established and state-of-the-art network architectures with pre-trained encoders from 2D data. Experimental evaluation on 3D MRI data of brain tumor patients for (i) tumor segmentation and (ii) radiogenomic classification shows a model size reduction of ~22% and an improvement in validation accuracy of ~33%. Our findings support the advantage of ACS convolutions in pre-trained 2D CNNs over 3D CNNs without pre-training for 3D segmentation and classification tasks, democratizing existing models trained on datasets of unprecedented size and showing promise in the field of healthcare.
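The ACS convolution referenced above can be sketched as follows: the output channels of a single 2D k×k kernel are split into three groups that convolve along the axial, coronal, and sagittal planes of the volume. This is a minimal illustration of the general ACS idea; in practice the weight would be copied from an ImageNet-trained 2D layer, and the implementation used in GaNDLF may differ in detail.

```python
# Minimal ACS convolution sketch: one 2D weight tensor, three orthogonal
# 3D convolutions over channel groups, outputs concatenated.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ACSConv(nn.Module):
    def __init__(self, in_ch, out_ch, k=3, padding=1):
        super().__init__()
        # A single 2D kernel, e.g. copied from a pretrained 2D layer.
        self.weight = nn.Parameter(torch.empty(out_ch, in_ch, k, k))
        nn.init.kaiming_uniform_(self.weight)
        self.split = [out_ch - 2 * (out_ch // 3), out_ch // 3, out_ch // 3]
        self.padding = padding

    def forward(self, x):  # x: (B, C, D, H, W)
        w_a, w_c, w_s = torch.split(self.weight, self.split, dim=0)
        p = self.padding
        # Reshape each 2D kernel group into a flat 3D kernel, one per plane.
        out_a = F.conv3d(x, w_a.unsqueeze(2), padding=(0, p, p))  # (1, k, k)
        out_c = F.conv3d(x, w_c.unsqueeze(3), padding=(p, 0, p))  # (k, 1, k)
        out_s = F.conv3d(x, w_s.unsqueeze(4), padding=(p, p, 0))  # (k, k, 1)
        return torch.cat([out_a, out_c, out_s], dim=1)
```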
Affiliation(s)
- Bhakti Baheti
- Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, USA
- Department of Pathology and Laboratory Medicine, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Sarthak Pati
- Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, USA
- Department of Pathology and Laboratory Medicine, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Department of Informatics, Technical University of Munich, Munich, Germany
- Bjoern Menze
- Department of Informatics, Technical University of Munich, Munich, Germany
- Department of Quantitative Biomedicine, University of Zurich, Zurich, Switzerland
- Spyridon Bakas
- Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, USA
- Department of Pathology and Laboratory Medicine, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
4
Extension-contraction transformation network for pancreas segmentation in abdominal CT scans. Comput Biol Med 2023; 152:106410. PMID: 36516578. DOI: 10.1016/j.compbiomed.2022.106410.
Abstract
Accurate and automatic pancreas segmentation from abdominal computed tomography (CT) scans is crucial for the diagnosis and prognosis of pancreatic diseases. However, the pancreas accounts for a relatively small portion of the scan and presents high anatomical variability and low contrast, so traditional automated segmentation methods fail to generate satisfactory results. In this paper, we propose an extension-contraction transformation network (ECTN) and deploy it in a cascaded two-stage segmentation framework for accurate pancreas segmentation. The model enhances the perception of 3D context by distinguishing and exploiting the extension and contraction transformation of the pancreas between slices. It consists of an encoder, a segmentation decoder, and an extension-contraction (EC) decoder. The EC decoder predicts the inter-slice extension and contraction transformation of the pancreas from the extension and contraction information generated by the segmentation decoder; its output is then combined with the output of the segmentation decoder to reconstruct and refine the segmentation results. Quantitative evaluation is performed on the NIH Pancreas Segmentation (Pancreas-CT) dataset using 4-fold cross-validation. We obtained an average Precision of 86.59±6.14%, Recall of 85.11±5.96%, Dice similarity coefficient (DSC) of 85.58±3.98%, and Jaccard Index (JI) of 74.99±5.86%. Our method outperforms several baseline and state-of-the-art methods.
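The encoder/dual-decoder layout described above can be summarized schematically. The sketch below is a deliberately simplified stand-in: each component is reduced to a single convolution, and the residual-style combination of the two heads is an assumption, since the abstract does not specify how the EC output refines the segmentation.

```python
# Schematic dual-decoder layout: shared encoder, segmentation decoder, and an
# EC decoder fed by the segmentation decoder's output. Simplified placeholder
# blocks, not the actual ECTN layers.
import torch
import torch.nn as nn

class DualDecoderNet(nn.Module):
    def __init__(self, ch=16):
        super().__init__()
        self.encoder = nn.Sequential(nn.Conv3d(1, ch, 3, padding=1), nn.ReLU())
        self.seg_decoder = nn.Conv3d(ch, 1, 3, padding=1)      # segmentation logits
        self.ec_decoder = nn.Conv3d(ch + 1, 1, 3, padding=1)   # EC transformation map

    def forward(self, x):
        feats = self.encoder(x)
        seg = self.seg_decoder(feats)
        # The EC decoder receives information generated by the segmentation
        # decoder, as described in the abstract.
        ec = self.ec_decoder(torch.cat([feats, seg], dim=1))
        # Refinement modeled here as a residual correction (an assumption).
        return seg + ec, ec
```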
5
Abstract
Three-dimensional (3D) medical image segmentation plays a crucial role in medical care applications. Although various two-dimensional (2D) and 3D neural network models have been applied to 3D medical image segmentation and achieved impressive results, a trade-off remains between efficiency and accuracy. To address this issue, a novel mixture convolutional network (MixConvNet) is proposed, in which traditional 2D/3D convolutional blocks are replaced with novel MixConv blocks. In the MixConv block, 3D convolution is decomposed into a mixture of 2D convolutions from different views. The MixConv block therefore exploits the advantages of 2D convolution while maintaining the learning ability of 3D convolution: it acts as a 3D convolution and can thus process volumetric input directly and learn inter-slice features, which are absent in a traditional 2D convolutional block. At the same time, the MixConv block contains only 2D convolutions; hence, it has significantly fewer trainable parameters and a smaller computation budget than a block containing 3D convolutions. Furthermore, the proposed MixConvNet is pre-trained with small input patches and fine-tuned with large input patches to further improve segmentation performance. In experiments on the Decathlon Heart dataset and the Sliver07 dataset, the proposed MixConvNet outperformed state-of-the-art methods such as UNet3D, VNet, and nnUnet.
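A MixConv-style block can be sketched by running 2D convolutions over the three orthogonal views of the volume and mixing the results. The summation used for mixing is an assumption; the abstract does not give MixConvNet's exact combination rule.

```python
# Minimal MixConv-style block: a 3D convolution replaced by 2D convolutions
# applied in three orthogonal views, mixed by summation (an assumption).
import torch
import torch.nn as nn

class MixConvBlock(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv_axial    = nn.Conv2d(in_ch, out_ch, 3, padding=1)
        self.conv_coronal  = nn.Conv2d(in_ch, out_ch, 3, padding=1)
        self.conv_sagittal = nn.Conv2d(in_ch, out_ch, 3, padding=1)

    @staticmethod
    def _apply_2d(conv, x, dim):
        # Move the chosen spatial dim next to batch, fold it in, run Conv2d,
        # then restore the (B, C, D, H, W) layout.
        perm = {2: (0, 2, 1, 3, 4), 3: (0, 3, 1, 2, 4), 4: (0, 4, 1, 2, 3)}[dim]
        xp = x.permute(*perm).contiguous()
        b, s, c, h, w = xp.shape
        y = conv(xp.view(b * s, c, h, w)).view(b, s, -1, h, w)
        inv = {2: (0, 2, 1, 3, 4), 3: (0, 2, 3, 1, 4), 4: (0, 2, 3, 4, 1)}[dim]
        return y.permute(*inv).contiguous()

    def forward(self, x):  # x: (B, C, D, H, W)
        return (self._apply_2d(self.conv_axial, x, 2)
                + self._apply_2d(self.conv_coronal, x, 3)
                + self._apply_2d(self.conv_sagittal, x, 4))
```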
Affiliation(s)
- Jianyong Wang
- Machine Intelligence Laboratory, College of Computer Science, Sichuan University, Chengdu, Sichuan, P. R. China
- Lei Zhang
- Machine Intelligence Laboratory, College of Computer Science, Sichuan University, Chengdu, Sichuan, P. R. China
- Yi Zhang
- Machine Intelligence Laboratory, College of Computer Science, Sichuan University, Chengdu, Sichuan, P. R. China
6
Shen C, Roth HR, Hayashi Y, Oda M, Miyamoto T, Sato G, Mori K. A cascaded fully convolutional network framework for dilated pancreatic duct segmentation. Int J Comput Assist Radiol Surg 2021; 17:343-354. PMID: 34951681. DOI: 10.1007/s11548-021-02530-x.
Abstract
PURPOSE Pancreatic duct dilation can be considered an early sign of pancreatic ductal adenocarcinoma (PDAC). However, there is little existing research focused on dilated pancreatic duct segmentation as a potential screening tool for people without PDAC. Dilated pancreatic duct segmentation is difficult due to the lack of readily available labeled data and the strong voxel imbalance between the pancreatic duct region and other regions. To overcome these challenges, we propose a two-step approach for dilated pancreatic duct segmentation from abdominal computed tomography (CT) volumes using fully convolutional networks (FCNs). METHODS Our framework segments the pancreatic duct in a cascaded manner. Because the pancreatic duct occupies a tiny portion of abdominal CT volumes, we first concentrate on the pancreas region: a public pancreas dataset is used to train an FCN that generates an ROI covering the pancreas, and a 3D U-Net-like FCN performs coarse pancreas segmentation. To further improve dilated pancreatic duct segmentation, we deploy skip connections on each corresponding resolution level and an attention mechanism in the bottleneck layer. Moreover, we introduce a combined loss function based on Dice loss and focal loss. Random data augmentation is adopted throughout the experiments to improve the generalizability of the model. RESULTS We manually created a dilated pancreatic duct dataset with semi-automated annotation tools. Experimental results showed that our proposed framework is practical for dilated pancreatic duct segmentation. The average Dice score and sensitivity were 49.9% and 51.9%, respectively, showing the potential of our approach as a clinical screening tool. CONCLUSIONS We investigated an automated framework for dilated pancreatic duct segmentation. The cascade strategy effectively improved the segmentation performance of the pancreatic duct. Our modifications to the FCNs, together with random data augmentation and the proposed combined loss function, facilitate automated segmentation.
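The combined loss introduced above is a weighted sum of Dice loss and focal loss. A minimal binary-segmentation sketch follows; the weighting `alpha` and focal exponent `gamma` are illustrative assumptions, as the abstract does not report the paper's exact settings.

```python
# Sketch of a combined Dice + focal loss for binary segmentation.
# alpha and gamma are illustrative, not the paper's values.
import torch
import torch.nn.functional as F

def dice_focal_loss(logits, target, alpha=0.5, gamma=2.0, eps=1e-6):
    """logits, target: (B, 1, D, H, W); target is a binary mask."""
    prob = torch.sigmoid(logits)
    inter = (prob * target).sum()
    dice = 1 - (2 * inter + eps) / (prob.sum() + target.sum() + eps)
    bce = F.binary_cross_entropy_with_logits(logits, target, reduction="none")
    p_t = prob * target + (1 - prob) * (1 - target)   # prob of the true class
    focal = ((1 - p_t) ** gamma * bce).mean()
    return alpha * dice + (1 - alpha) * focal
```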
Affiliation(s)
- Chen Shen
- Graduate School of Informatics, Nagoya University, Nagoya, Japan
- Yuichiro Hayashi
- Graduate School of Informatics, Nagoya University, Nagoya, Japan
- Masahiro Oda
- Graduate School of Informatics, Nagoya University, Nagoya, Japan
- Information Strategy Office, Information and Communications, Nagoya University, Nagoya, Japan
- Gen Sato
- Chiba Kensei Hospital, Chiba, Japan
- Kensaku Mori
- Graduate School of Informatics, Nagoya University, Nagoya, Japan
- Research Center for Medical Bigdata, National Institute of Informatics, Tokyo, Japan
7
Yang X, Chen Y, Yue X, Ma C, Yang P. Local linear embedding based interpolation neural network in pancreatic tumor segmentation. Appl Intell 2021. DOI: 10.1007/s10489-021-02847-9.
8
Li Z, Yu J, Wang Y, Zhou H, Yang H, Qiao Z. DeepVolume: brain structure and spatial connection-aware network for brain MRI super-resolution. IEEE Trans Cybern 2021; 51:3441-3454. PMID: 31484151. DOI: 10.1109/tcyb.2019.2933633.
Abstract
Thin-section magnetic resonance imaging (MRI) can provide higher-resolution anatomical structures and more precise clinical information than thick-section images. However, thin-section MRI is not always available due to imaging cost. In multicenter retrospective studies, a large proportion of the data is acquired in thick sections with varying section thicknesses. The lack of thin-section data and the differences in section thickness bring considerable difficulties to studies based on large-scale image data. In this article, we introduce DeepVolume, a two-step deep learning architecture that addresses the challenge of accurate thin-section MR image reconstruction. The first stage is a brain structure-aware network, in which thick-section MR images in the axial and sagittal planes are fused by a multitask 3D U-Net with prior knowledge of brain volume segmentation, encouraging the reconstruction to have correct brain structure. The second stage is a spatial connection-aware network, in which the preliminary reconstruction is adjusted slice by slice by a recurrent convolutional network embedding a convolutional long short-term memory (LSTM) block, enhancing the precision of the reconstruction by utilizing the previously unassessed sagittal information. We used 305 paired brain MRI samples with thicknesses of 1.0 mm and 6.5 mm. Extensive experiments illustrate that DeepVolume produces state-of-the-art reconstruction results by embedding more anatomical knowledge. Furthermore, considering DeepVolume as an intermediate step, the practical and clinical value of our method is validated by applying brain volume estimation and voxel-based morphometry. The results show that DeepVolume provides much more reliable brain volume estimation in the normalized space from thick-section MR images than traditional solutions.
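The slice-by-slice refinement in the second stage relies on a convolutional LSTM. Below is a generic ConvLSTM cell of that kind, not DeepVolume's exact block; the usage comment shows how such a cell would sweep through a slice stack.

```python
# A generic convolutional LSTM cell for slice-by-slice refinement.
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    def __init__(self, in_ch, hid_ch, k=3):
        super().__init__()
        # One convolution produces all four gates at once.
        self.gates = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, k, padding=k // 2)

    def forward(self, x, state):
        h, c = state  # hidden and cell state, each (B, hid_ch, H, W)
        i, f, o, g = torch.chunk(self.gates(torch.cat([x, h], dim=1)), 4, dim=1)
        i, f, o = torch.sigmoid(i), torch.sigmoid(f), torch.sigmoid(o)
        c = f * c + i * torch.tanh(g)
        h = o * torch.tanh(c)
        return h, (h, c)

# Sweeping a (B, C, D, H, W) feature volume slice by slice:
#   h = c = torch.zeros(B, hid_ch, H, W)
#   for feat in volume.unbind(dim=2):
#       refined, (h, c) = cell(feat, (h, c))
```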
9
Zheng H, Qian L, Qin Y, Gu Y, Yang J. Improving the slice interaction of 2.5D CNN for automatic pancreas segmentation. Med Phys 2020; 47:5543-5554. PMID: 32502278. DOI: 10.1002/mp.14303.
Abstract
PURPOSE Volumetric pancreas segmentation can be used in the diagnosis of pancreatic diseases, diabetes research, and surgical planning. Since manual delineation is time-consuming and laborious, we develop a deep learning-based framework for automatic pancreas segmentation in three-dimensional (3D) medical images. METHODS A two-stage framework is designed for automatic pancreas delineation. In the localization stage, a Square Root Dice loss is developed to handle the trade-off between sensitivity and specificity. In the refinement stage, a novel 2.5D slice interaction network with a slice correlation module is proposed to capture non-local cross-slice information at multiple feature levels. A self-supervised learning-based pre-training method, slice shuffle, is also designed to encourage inter-slice communication. To further improve accuracy and robustness, ensemble learning and a recurrent refinement process are adopted in the segmentation flow. RESULTS The segmentation technique is validated on a public dataset (NIH Pancreas-CT) with 82 abdominal contrast-enhanced 3D CT scans. Fourfold cross-validation is performed to assess the capability and robustness of our method. The Dice similarity coefficient, sensitivity, and specificity of our results are 86.21 ± 4.37%, 87.49 ± 6.38%, and 85.11 ± 6.49%, respectively, which is state-of-the-art performance on this dataset. CONCLUSIONS We proposed an automatic pancreas segmentation framework and validated it on an open dataset. We find that a 2.5D network benefits from multi-level slice interaction and that a suitable self-supervised pre-training method can boost the performance of the neural network. This technique could provide new image findings for the routine diagnosis of pancreatic disease.
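The abstract names a Square Root Dice loss for the localization stage but does not give its formula. The sketch below is one hedged reading, a soft Dice loss computed on square-rooted probabilities, which amplifies low-confidence foreground voxels and thus tilts the trade-off toward sensitivity; the paper's actual definition may differ.

```python
# Hedged sketch of a "square root" variant of soft Dice loss (an assumption,
# not necessarily the paper's exact formulation).
import torch

def sqrt_dice_loss(prob, target, eps=1e-6):
    """prob: sigmoid outputs in [0, 1]; target: binary mask, same shape."""
    p = torch.sqrt(prob.clamp_min(eps))   # boosts low-confidence foreground
    inter = (p * target).sum()
    return 1 - (2 * inter + eps) / (p.sum() + target.sum() + eps)
```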
Affiliation(s)
- Hao Zheng
- Institute of Image Processing and Pattern Recognition, Shanghai Jiao Tong University, 800 Dongchuan RD, Minhang District, Shanghai, 200240, China
- School of Biomedical Engineering, Shanghai Jiao Tong University, 800 Dongchuan RD, Minhang District, Shanghai, 200240, China
- Institute of Medical Robotics, Shanghai Jiao Tong University, 800 Dongchuan RD, Minhang District, Shanghai, 200240, China
- Lijun Qian
- Department of Radiology, Renji Hospital, Shanghai Jiaotong University School of Medicine, Shanghai, 200240, China
- Yulei Qin
- Institute of Image Processing and Pattern Recognition, Shanghai Jiao Tong University, 800 Dongchuan RD, Minhang District, Shanghai, 200240, China
- Institute of Medical Robotics, Shanghai Jiao Tong University, 800 Dongchuan RD, Minhang District, Shanghai, 200240, China
- Yun Gu
- Institute of Image Processing and Pattern Recognition, Shanghai Jiao Tong University, 800 Dongchuan RD, Minhang District, Shanghai, 200240, China
- Institute of Medical Robotics, Shanghai Jiao Tong University, 800 Dongchuan RD, Minhang District, Shanghai, 200240, China
- Jie Yang
- Institute of Image Processing and Pattern Recognition, Shanghai Jiao Tong University, 800 Dongchuan RD, Minhang District, Shanghai, 200240, China
- Institute of Medical Robotics, Shanghai Jiao Tong University, 800 Dongchuan RD, Minhang District, Shanghai, 200240, China
10
Abstract
Acute pancreatitis (AP) is acute inflammation of the pancreas and adjacent tissue and a common source of abdominal pain. The current CT and MRI evaluation of AP is mostly based on morphologic features. Recent advances in image acquisition and analysis offer the opportunity to go beyond these features. Advanced MR techniques such as diffusion-weighted imaging, as well as T1 and T2 mapping, can potentially quantify signal changes reflective of underlying tissue abnormalities. Advanced analytic techniques such as radiomics and artificial neural networks (ANNs) offer the promise of uncovering imaging biomarkers that can provide additional classification and prognostic information. The purpose of this article is to review recent advances in imaging acquisition and analytic techniques in the evaluation of AP.
11
Ma Z, Wu X, Wang X, Song Q, Yin Y, Cao K, Wang Y, Zhou J. An iterative multi-path fully convolutional neural network for automatic cardiac segmentation in cine MR images. Med Phys 2019; 46:5652-5665. PMID: 31605627. DOI: 10.1002/mp.13859.
Abstract
PURPOSE Segmentation of the left ventricle (LV) and right ventricle (RV) cavities and the myocardium (MYO) from cine cardiac magnetic resonance (MR) images is an important step for diagnosing and monitoring cardiac diseases. Spatial context information can be highly beneficial for segmentation performance. To this end, this paper proposes an iterative multi-path fully convolutional network (IMFCN) to effectively leverage spatial context for automatic cardiac segmentation in cine MR images. METHODS The proposed IMFCN explicitly models inter-slice spatial correlations using a multi-path late fusion strategy. First, the contextual inputs, including the adjacent slices and the already predicted mask of the adjacent slice above, are processed by independent feature-extraction paths. Then, an atrous spatial pyramid pooling (ASPP) module combines the extracted high-level contextual features at the feature fusion step. Finally, deep supervision (DS) and a batch-wise class re-weighting mechanism are utilized to enhance training. RESULTS The proposed IMFCN was evaluated and analyzed on the MICCAI 2017 Automatic Cardiac Diagnosis Challenge (ACDC) dataset. On the held-out portion of the training dataset reserved for testing, our method outperformed its counterparts without spatial context and with spatial context but an early fusion strategy. On the 50-subject test dataset, our method achieved Dice similarity coefficients of 0.935, 0.920, and 0.905, and Hausdorff distances of 7.66, 12.10, and 8.80 mm for the LV, RV, and MYO, respectively, which are comparable to or better than the state-of-the-art methods of the ACDC Challenge. In addition, to explore applicability to other datasets, the IMFCN was retrained on the Sunnybrook dataset for LV segmentation and also produced performance comparable to state-of-the-art methods. CONCLUSIONS We have presented an automatic end-to-end fully convolutional architecture for accurate cardiac segmentation. The proposed method provides an effective way to leverage spatial context in a two-dimensional manner and yields precise and consistent segmentation results.
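The ASPP module used at the fusion step is a standard construction: parallel atrous convolutions with different dilation rates, concatenated and projected back. The sketch below uses illustrative rates, since the abstract does not state the paper's configuration.

```python
# Minimal ASPP sketch: parallel dilated convolutions over the same input,
# concatenated and projected with a 1x1 convolution. Rates are illustrative.
import torch
import torch.nn as nn

class ASPP(nn.Module):
    def __init__(self, in_ch, out_ch, rates=(1, 6, 12, 18)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r) for r in rates
        )
        self.project = nn.Conv2d(out_ch * len(rates), out_ch, 1)

    def forward(self, x):
        return self.project(torch.cat([b(x) for b in self.branches], dim=1))
```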
Affiliation(s)
- Zongqing Ma
- College of Computer Science, Sichuan University, Chengdu, Sichuan, 610065, China
- Xi Wu
- School of Computer Science, Chengdu University of Information Technology, Chengdu, Sichuan, 610225, China
- Xin Wang
- CuraCloud Corporation, Seattle, WA, 98104, USA
- Qi Song
- CuraCloud Corporation, Seattle, WA, 98104, USA
- Youbing Yin
- CuraCloud Corporation, Seattle, WA, 98104, USA
- Kunlin Cao
- CuraCloud Corporation, Seattle, WA, 98104, USA
- Yan Wang
- College of Computer Science, Sichuan University, Chengdu, Sichuan, 610065, China
- Jiliu Zhou
- College of Computer Science, Sichuan University, Chengdu, Sichuan, 610065, China
- School of Computer Science, Chengdu University of Information Technology, Chengdu, Sichuan, 610225, China
12
Fully automated pancreas segmentation with two-stage 3D convolutional neural networks. Lecture Notes in Computer Science 2019. DOI: 10.1007/978-3-030-32245-8_23.
13
Fang C, Li G, Pan C, Li Y, Yu Y. Globally guided progressive fusion network for 3D pancreas segmentation. Lecture Notes in Computer Science 2019. DOI: 10.1007/978-3-030-32245-8_24.
14
Zhu Z, Xia Y, Xie L, Fishman EK, Yuille AL. Multi-scale coarse-to-fine segmentation for screening pancreatic ductal adenocarcinoma. Lecture Notes in Computer Science 2019. DOI: 10.1007/978-3-030-32226-7_1.
15
Chen H, Wang X, Huang Y, Wu X, Yu Y, Wang L. Harnessing 2D networks and 3D features for automated pancreas segmentation from volumetric CT images. Lecture Notes in Computer Science 2019. DOI: 10.1007/978-3-030-32226-7_38.