1
Wang B, Yang J, Zhou Y, Yang Y, Tian X, Zhang G, Zhang X. LEACS: a learnable and efficient active contour model with space-frequency pooling for medical image segmentation. Phys Med Biol 2024; 69:015026. PMID: 38048633. DOI: 10.1088/1361-6560/ad1212.
Abstract
Diseases can be diagnosed and monitored by extracting regions of interest (ROIs) from medical images. However, accurate and efficient delineation and segmentation of ROIs in medical images remain challenging due to unrefined boundaries, inhomogeneous intensity, and limited image acquisition. To overcome these problems, we propose an end-to-end learnable and efficient active contour segmentation model, which integrates a global convex segmentation (GCS) module into a lightweight encoder-decoder convolutional segmentation network with a multiscale attention module (ED-MSA). The GCS automatically obtains the initialization and corresponding parameters of the curve deformation according to the prediction map generated by the ED-MSA, while providing the refined object boundary prediction for ED-MSA optimization. To provide a precise and reliable initial contour for the GCS, we design space-frequency pooling operation layers in the encoder stage of ED-MSA, which can effectively reduce the number of iterations of the GCS. Besides, we construct ED-MSA using the depth-wise separable convolutional residual module to mitigate overfitting of the model. The effectiveness of our method is validated on four challenging medical image datasets. Code is available at https://github.com/Yang-fashion/ED-MSA_GCS.
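Depth-wise separable convolutions mitigate overfitting largely by shrinking the parameter count. A back-of-the-envelope sketch of the comparison (function names are ours for illustration, bias terms omitted; this is not the paper's code):

```python
def conv_params(k, c_in, c_out):
    # Standard 2-D convolution: one k x k kernel per (input, output) channel pair.
    return k * k * c_in * c_out

def separable_conv_params(k, c_in, c_out):
    # Depth-wise step: one k x k kernel per input channel,
    # followed by a 1 x 1 point-wise convolution that mixes channels.
    return k * k * c_in + c_in * c_out

print(conv_params(3, 64, 128))            # → 73728
print(separable_conv_params(3, 64, 128))  # → 8768
```

For a 3x3 kernel with 64 input and 128 output channels, the separable form uses roughly an eighth of the parameters, which is the usual argument for its regularizing effect on small medical datasets.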
Affiliation(s)
- Bing Wang
- College of Mathematics and Information Science, Hebei University, Baoding, 071000, Hebei, People's Republic of China
- Hebei Key Laboratory of Machine Learning and Computational Intelligence, Hebei University, Baoding, 071000, Hebei, People's Republic of China
- Jie Yang
- College of Mathematics and Information Science, Hebei University, Baoding, 071000, Hebei, People's Republic of China
- Yunlai Zhou
- College of Mathematics and Information Science, Hebei University, Baoding, 071000, Hebei, People's Republic of China
- Ying Yang
- Hebei University Affiliated Hospital, Baoding, 071000, Hebei, People's Republic of China
- Xuedong Tian
- College of Cyber Security and Computer, Hebei University, Baoding, 071000, Hebei, People's Republic of China
- Guochun Zhang
- Hebei Key Laboratory of Machine Learning and Computational Intelligence, Hebei University, Baoding, 071000, Hebei, People's Republic of China
- Xin Zhang
- College of Electronic Information Engineering, Hebei University, Baoding, 071000, Hebei, People's Republic of China
2
Chen Z, Peng C, Guo W, Xie L, Wang S, Zhuge Q, Wen C, Feng Y. Uncertainty-guided transformer for brain tumor segmentation. Med Biol Eng Comput 2023; 61:3289-3301. PMID: 37665558. DOI: 10.1007/s11517-023-02899-8.
Abstract
Multi-modal data can enhance brain tumor segmentation through the rich information it provides. However, it also introduces redundant information that interferes with the segmentation estimate, as some modalities may capture features irrelevant to the tissue of interest. Besides, the ambiguous boundaries and irregular shapes of tumors of different grades lead to low-confidence estimates of segmentation quality. Given these concerns, we exploit an uncertainty-guided U-shaped transformer with multiple heads to construct drop-out format masks for robust training. Specifically, our drop-out masks are composed of a boundary mask, a prior probability mask, and a conditional probability mask, which help our approach focus more on uncertainty regions. Extensive experimental results show that our method achieves comparable or higher results than previous state-of-the-art brain tumor segmentation methods, achieving average Dice coefficients of [Formula: see text] and a Hausdorff distance of 4.91 on the BraTS2021 dataset. Our code is freely available at https://github.com/chaineypung/BTS-UGT.
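The Dice coefficient reported here is the standard overlap metric for segmentation quality. A minimal NumPy sketch (a generic definition for illustration, not the paper's code):

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice similarity coefficient between two binary masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Toy example: 4-voxel prediction vs. 6-voxel ground truth, 4 voxels overlap.
a = np.zeros((4, 4), dtype=int); a[1:3, 1:3] = 1
b = np.zeros((4, 4), dtype=int); b[1:3, 1:4] = 1
print(round(float(dice_coefficient(a, b)), 3))  # → 0.8
```

The metric ranges from 0 (no overlap) to 1 (perfect agreement), so "average Dice coefficients" in these abstracts are averages of this quantity over scans and tumor sub-regions.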
Affiliation(s)
- Zan Chen
- College of Information Engineering, Zhejiang University of Technology, Hangzhou, 310023, China
- Chenxu Peng
- College of Information Engineering, Zhejiang University of Technology, Hangzhou, 310023, China
- Wenlong Guo
- College of Information Engineering, Zhejiang University of Technology, Hangzhou, 310023, China
- Lei Xie
- College of Information Engineering, Zhejiang University of Technology, Hangzhou, 310023, China
- Shanshan Wang
- Paul C. Lauterbur Research Center for Biomedical Imaging, SIAT, CAS, Shenzhen, 518055, China
- Qichuan Zhuge
- First Affiliated Hospital of Wenzhou Medical University, Wenzhou, 325000, China
- Caiyun Wen
- First Affiliated Hospital of Wenzhou Medical University, Wenzhou, 325000, China
- Yuanjing Feng
- College of Information Engineering, Zhejiang University of Technology, Hangzhou, 310023, China
- Zhejiang Provincial United Key Laboratory of Embedded Systems, Hangzhou, 310023, China
3
Cheng D, Gao X, Mao Y, Xiao B, You P, Gai J, Zhu M, Kang J, Zhao F, Mao N. Brain tumor feature extraction and edge enhancement algorithm based on U-Net network. Heliyon 2023; 9:e22536. PMID: 38034799. PMCID: PMC10687284. DOI: 10.1016/j.heliyon.2023.e22536.
Abstract
Background: Statistics show that each year more than 100,000 patients die from brain tumors. Due to the diverse morphology, hazy boundaries, and unbalanced categories of medical lesion data, segmentation prediction of brain tumors poses significant challenges. Purpose: In this paper, we present EAV-UNet, a system designed to accurately detect lesion regions by optimizing feature extraction, using automatic segmentation techniques to detect anomalous regions, and strengthening the network structure. We prioritize the segmentation of lesion regions, especially in cases where the tumor margins are hazy. Methods: The VGG-19 network structure replaces the standard U-Net encoder, resulting in a deeper network, and an attention mechanism module is introduced to augment the feature information. To strengthen feature details, we integrate a CBAM (Convolutional Block Attention Module) into the decoder. Additionally, an edge detection module is added to the encoder to extract edge information from the image, which is then passed to the decoder to aid in reconstructing the original image. Results: Based on a thorough analysis of the experimental data, all evaluation metrics show major improvements with the proposed EAV-UNet technique. In particular, for low-contrast images with blurry lesion edges, EAV-UNet consistently produces predictions very close to the reference segmentations. The method reduced the Hausdorff distance to 1.82, achieved an F1 score of 96.1%, and attained a precision of 93.2% on Dataset 1. On Dataset 2, it obtained an F1 score of 76.8%, a precision of 85.3%, and a Hausdorff distance of 1.31. On Dataset 3, it achieved a Hausdorff distance of 2.30, an F1 score of 86.9%, and a precision of 95.3%.
Conclusions: We conducted extensive segmentation experiments on various brain tumor datasets. We refined the network architecture by employing smaller convolutional kernels, and to further improve segmentation accuracy we integrated attention modules and an edge enhancement module to reinforce edge information and boost attention scores.
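The Hausdorff distance quoted in these results measures worst-case boundary disagreement between two masks. A small NumPy sketch of the symmetric form (a generic implementation for illustration, not the authors' code):

```python
import numpy as np

def hausdorff_distance(mask_a, mask_b):
    """Symmetric Hausdorff distance between foreground pixels of two binary masks."""
    pts_a = np.argwhere(mask_a)
    pts_b = np.argwhere(mask_b)
    # All pairwise Euclidean distances between the two point sets.
    d = np.linalg.norm(pts_a[:, None, :].astype(float) - pts_b[None, :, :], axis=-1)
    # Worst-case nearest-neighbour distance, taken in both directions.
    return max(d.min(axis=1).max(), d.min(axis=0).max())

# Toy example: two single-pixel masks three columns apart.
a = np.zeros((5, 5), dtype=int); a[2, 1] = 1
b = np.zeros((5, 5), dtype=int); b[2, 4] = 1
print(hausdorff_distance(a, b))  # → 3.0
```

Because it is a maximum over boundary errors, a lower Hausdorff distance (e.g. 1.31 vs. 2.30 above) indicates tighter agreement even at the most misaligned point of the contour.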
Affiliation(s)
- Dapeng Cheng
- School of Computer Science and Technology, Shandong Business and Technology University, No.191 Binhai Middle Road, Yantai, 264000, Shandong, China
- Shandong Co-Innovation Center of Future Intelligent Computing, No.191 Binhai Middle Road, Yantai, 264000, Shandong, China
- Xiaolian Gao
- School of Computer Science and Technology, Shandong Business and Technology University, No.191 Binhai Middle Road, Yantai, 264000, Shandong, China
- Yanyan Mao
- School of Computer Science and Technology, Shandong Business and Technology University, No.191 Binhai Middle Road, Yantai, 264000, Shandong, China
- Baozhen Xiao
- Early Spring Garden Primary School, 6452 Fushou East Street, Weifang, 261000, Shandong, China
- Panlu You
- School of Computer Science and Technology, Shandong Business and Technology University, No.191 Binhai Middle Road, Yantai, 264000, Shandong, China
- Jiale Gai
- School of Computer Science and Technology, Shandong Business and Technology University, No.191 Binhai Middle Road, Yantai, 264000, Shandong, China
- Minghui Zhu
- School of Computer Science and Technology, Shandong Business and Technology University, No.191 Binhai Middle Road, Yantai, 264000, Shandong, China
- Jialong Kang
- School of Information and Electronic Engineering, Shandong Business and Technology University, No.191 Binhai Middle Road, Yantai, 264000, Shandong, China
- Feng Zhao
- School of Computer Science and Technology, Shandong Business and Technology University, No.191 Binhai Middle Road, Yantai, 264000, Shandong, China
- Shandong Co-Innovation Center of Future Intelligent Computing, No.191 Binhai Middle Road, Yantai, 264000, Shandong, China
- Ning Mao
- Department of Radiology, Yantai Yuhuangding Hospital, No.20 Yudong Road, Yantai, 264000, Shandong, China
4
Hu Z, Li L, Sui A, Wu G, Wang Y, Yu J. An efficient R-Transformer network with dual encoders for brain glioma segmentation in MR images. Biomed Signal Process Control 2023. DOI: 10.1016/j.bspc.2022.104034.
5
Gao H, Miao Q, Ma D, Liua R. Deep Mutual Learning for Brain Tumor Segmentation with the Fusion Network. Neurocomputing 2022. DOI: 10.1016/j.neucom.2022.11.038.
6
Abstract
Brain tumor segmentation is one of the most challenging problems in medical image analysis. The goal of brain tumor segmentation is to generate accurate delineation of brain tumor regions. In recent years, deep learning methods have shown promising performance in solving various computer vision problems, such as image classification, object detection and semantic segmentation. A number of deep learning based methods have been applied to brain tumor segmentation and achieved promising results. Considering the remarkable breakthroughs made by state-of-the-art technologies, we provide this survey with a comprehensive study of recently developed deep learning based brain tumor segmentation techniques. More than 150 scientific papers are selected and discussed in this survey, extensively covering technical aspects such as network architecture design, segmentation under imbalanced conditions, and multi-modality processes. We also provide insightful discussions for future development directions.
7
HGG and LGG Brain Tumor Segmentation in Multi-Modal MRI Using Pretrained Convolutional Neural Networks of Amazon Sagemaker. Applied Sciences (Basel) 2022. DOI: 10.3390/app12073620.
Abstract
Automatic brain tumor segmentation from multimodal MRI plays a significant role in assisting the diagnosis, treatment, and surgery of glioblastoma and lower grade glioma. In this article, we propose applying several deep learning techniques implemented in the AWS SageMaker framework. The different CNN architectures are adapted and fine-tuned for our purpose of brain tumor segmentation. The experiments are evaluated and analyzed in order to obtain the best possible parameters for the models created. The selected architectures are trained on the publicly available BraTS 2017–2020 dataset. The segmentation distinguishes the background, healthy tissue, whole tumor, edema, enhanced tumor, and necrosis. Further, a random search for parameter optimization is presented to additionally improve the architectures obtained. Lastly, we also compute the detection results of an ensemble model created from the weighted average of the six models described. The goal of the ensemble is to improve the segmentation at the tumor tissue boundaries. Our results are compared to the BraTS 2020 competition leaderboard and rank in the top 25% by Dice score.
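The weighted-average ensemble described above can be sketched in a few lines: average the per-model class-probability maps with normalized weights, then take the most probable class per voxel (toy shapes and weights are ours, purely illustrative):

```python
import numpy as np

def ensemble_segmentation(prob_maps, weights):
    """Fuse per-model class-probability maps by weighted averaging,
    then pick the most probable class per voxel."""
    weights = np.asarray(weights, dtype=float)
    weights /= weights.sum()
    fused = sum(w * p for w, p in zip(weights, prob_maps))
    return fused.argmax(axis=0)

# Two toy models over 2 classes and 3 voxels (rows = classes, columns = voxels).
m1 = np.array([[0.9, 0.6, 0.2], [0.1, 0.4, 0.8]])
m2 = np.array([[0.2, 0.7, 0.3], [0.8, 0.3, 0.7]])
labels = ensemble_segmentation([m1, m2], weights=[1, 3])
print(labels.tolist())  # → [1, 0, 1]
```

Averaging probabilities before the argmax is what smooths decisions near tumor tissue boundaries, where individual models disagree most.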
8
Zhu J, Bolsterlee B, Chow BVY, Cai C, Herbert RD, Song Y, Meijering E. Deep learning methods for automatic segmentation of lower leg muscles and bones from MRI scans of children with and without cerebral palsy. NMR Biomed 2021; 34:e4609. PMID: 34545647. DOI: 10.1002/nbm.4609.
Abstract
Cerebral palsy is a neurological condition that is known to affect muscle growth. Detailed investigations of muscle growth require segmentation of muscles from MRI scans, which is typically done manually. In this study, we evaluated the performance of 2D, 3D, and hybrid deep learning models for automatic segmentation of 11 lower leg muscles and two bones from MRI scans of children with and without cerebral palsy. All six models were trained and evaluated on manually segmented T1-weighted MRI scans of the lower legs of 20 children, six of whom had cerebral palsy. The segmentation results were assessed using the median Dice similarity coefficient (DSC), average symmetric surface distance (ASSD), and volume error (VError) of all 13 labels of every scan. The best performance was achieved by H-DenseUNet, a hybrid model (DSC 0.90, ASSD 0.5 mm, VError 2.6 cm³). The performance was equivalent to the inter-rater performance of manual segmentation (DSC 0.89, ASSD 0.6 mm, VError 3.3 cm³). Models trained with the Dice loss function outperformed models trained with the cross-entropy loss function. Near-optimal performance could be attained using only 11 scans for training. Segmentation performance was similar for scans of typically developing children (DSC 0.90, ASSD 0.5 mm, VError 2.8 cm³) and children with cerebral palsy (DSC 0.85, ASSD 0.6 mm, VError 2.4 cm³). These findings demonstrate the feasibility of fully automatic segmentation of individual muscles and bones from MRI scans of children with and without cerebral palsy.
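The abstract's finding that Dice loss outperformed cross-entropy can be made concrete by comparing the two objectives. A minimal NumPy sketch of the binary forms (generic textbook definitions, not the study's code):

```python
import numpy as np

def soft_dice_loss(prob, target, eps=1e-7):
    """Soft Dice loss on predicted foreground probabilities (lower is better).
    Optimizes region overlap directly, so it is less sensitive to class imbalance."""
    intersection = (prob * target).sum()
    return 1.0 - (2.0 * intersection + eps) / (prob.sum() + target.sum() + eps)

def cross_entropy_loss(prob, target, eps=1e-7):
    """Per-voxel binary cross-entropy, averaged over the volume.
    Every voxel contributes equally, so background can dominate the gradient."""
    prob = np.clip(prob, eps, 1 - eps)
    return -np.mean(target * np.log(prob) + (1 - target) * np.log(1 - prob))

perfect = soft_dice_loss(np.array([1.0, 1.0, 0.0]), np.array([1.0, 1.0, 0.0]))  # ≈ 0.0
```

For muscle segmentation, where foreground voxels are a small fraction of the scan, the overlap-based objective is the usual explanation for the advantage reported here.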
Affiliation(s)
- Jiayi Zhu
- School of Computer Science and Engineering, University of New South Wales, Sydney, Australia
- Neuroscience Research Australia (NeuRA), Sydney, Australia
- Bart Bolsterlee
- Neuroscience Research Australia (NeuRA), Sydney, Australia
- Graduate School of Biomedical Engineering, University of New South Wales, Sydney, Australia
- Brian V Y Chow
- Neuroscience Research Australia (NeuRA), Sydney, Australia
- School of Medical Sciences, University of New South Wales, Sydney, Australia
- Chengxue Cai
- School of Computer Science and Engineering, University of New South Wales, Sydney, Australia
- Robert D Herbert
- Neuroscience Research Australia (NeuRA), Sydney, Australia
- School of Medical Sciences, University of New South Wales, Sydney, Australia
- Yang Song
- School of Computer Science and Engineering, University of New South Wales, Sydney, Australia
- Erik Meijering
- School of Computer Science and Engineering, University of New South Wales, Sydney, Australia
9
Huang YJ, Dou Q, Wang ZX, Liu LZ, Jin Y, Li CF, Wang L, Chen H, Xu RH. 3-D RoI-Aware U-Net for Accurate and Efficient Colorectal Tumor Segmentation. IEEE Trans Cybern 2021; 51:5397-5408. PMID: 32248143. DOI: 10.1109/tcyb.2020.2980145.
Abstract
Segmentation of colorectal cancerous regions from 3-D magnetic resonance (MR) images is a crucial procedure for radiotherapy. Automatic delineation from 3-D whole volumes is in urgent demand yet very challenging. Drawbacks of existing deep-learning-based methods for this task are two-fold: 1) extensive graphics processing unit (GPU) memory footprint of 3-D tensor limits the trainable volume size, shrinks effective receptive field, and therefore, degrades speed and segmentation performance and 2) in-region segmentation methods supported by region-of-interest (RoI) detection are either blind to global contexts, detail richness compromising, or too expensive for 3-D tasks. To tackle these drawbacks, we propose a novel encoder-decoder-based framework for 3-D whole volume segmentation, referred to as 3-D RoI-aware U-Net (3-D RU-Net). 3-D RU-Net fully utilizes the global contexts covering large effective receptive fields. Specifically, the proposed model consists of a global image encoder for global understanding-based RoI localization, and a local region decoder that operates on pyramid-shaped in-region global features, which is GPU memory efficient and thereby enables training and prediction with large 3-D whole volumes. To facilitate the global-to-local learning procedure and enhance contour detail richness, we designed a dice-based multitask hybrid loss function. The efficiency of the proposed framework enables an extensive model ensemble for further performance gain at acceptable extra computational costs. Over a dataset of 64 T2-weighted MR images, the experimental results of four-fold cross-validation show that our method achieved 75.5% dice similarity coefficient (DSC) in 0.61 s per volume on a GPU, which significantly outperforms competing methods in terms of accuracy and efficiency. The code is publicly available.
10
Zhang Y, Zhong P, Jie D, Wu J, Zeng S, Chu J, Liu Y, Wu EX, Tang X. Brain Tumor Segmentation From Multi-Modal MR Images via Ensembling UNets. Front Radiol 2021; 1:704888. PMID: 37492172. PMCID: PMC10365098. DOI: 10.3389/fradi.2021.704888.
Abstract
Glioma is a type of severe brain tumor, and its accurate segmentation is useful in surgery planning and progression evaluation. Based on different biological properties, the glioma can be divided into three partially-overlapping regions of interest, including whole tumor (WT), tumor core (TC), and enhancing tumor (ET). Recently, UNet has identified its effectiveness in automatically segmenting brain tumor from multi-modal magnetic resonance (MR) images. In this work, instead of network architecture, we focus on making use of prior knowledge (brain parcellation), training and testing strategy (joint 3D+2D), ensemble and post-processing to improve the brain tumor segmentation performance. We explore the accuracy of three UNets with different inputs, and then ensemble the corresponding three outputs, followed by post-processing to achieve the final segmentation. Similar to most existing works, the first UNet uses 3D patches of multi-modal MR images as the input. The second UNet uses brain parcellation as an additional input. And the third UNet is inputted by 2D slices of multi-modal MR images, brain parcellation, and probability maps of WT, TC, and ET obtained from the second UNet. Then, we sequentially unify the WT segmentation from the third UNet and the fused TC and ET segmentation from the first and the second UNets as the complete tumor segmentation. Finally, we adopt a post-processing strategy by labeling small ET as non-enhancing tumor to correct some false-positive ET segmentation. On one publicly-available challenge validation dataset (BraTS2018), the proposed segmentation pipeline yielded average Dice scores of 91.03/86.44/80.58% and average 95% Hausdorff distances of 3.76/6.73/2.51 mm for WT/TC/ET, exhibiting superior segmentation performance over other state-of-the-art methods. We then evaluated the proposed method on the BraTS2020 training data through five-fold cross validation, with similar performance having also been observed. The proposed method was finally evaluated on 10 in-house data, the effectiveness of which has been established qualitatively by professional radiologists.
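The post-processing step of relabeling small enhancing-tumor (ET) predictions as non-enhancing tumor can be sketched as a simple volume threshold. Label values and the threshold below are assumptions for illustration, not taken from the paper:

```python
import numpy as np

# Hypothetical label conventions; the paper's actual values may differ.
ET, NET = 4, 1

def suppress_small_et(seg, min_voxels=500):
    """Relabel enhancing tumor (ET) as non-enhancing tumor (NET) when the ET
    region is implausibly small, removing likely false-positive ET voxels."""
    seg = seg.copy()
    if (seg == ET).sum() < min_voxels:
        seg[seg == ET] = NET
    return seg

seg = np.zeros((10, 10, 10), dtype=int)
seg[0, 0, :5] = ET  # only 5 ET voxels: well below the threshold
cleaned = suppress_small_et(seg)
print(int((cleaned == ET).sum()))  # → 0
```

The rationale is that a genuinely enhancing tumor rarely occupies only a handful of voxels, so tiny ET regions are more likely segmentation noise than true enhancement.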
Affiliation(s)
- Yue Zhang
- Department of Electrical and Electronic Engineering, Southern University of Science and Technology, Shenzhen, China
- Laboratory of Biomedical Imaging and Signal Processing, The University of Hong Kong, Hong Kong SAR, China
- Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong SAR, China
- Tencent Music Entertainment, Shenzhen, China
- Pinyuan Zhong
- Department of Electrical and Electronic Engineering, Southern University of Science and Technology, Shenzhen, China
- Dabin Jie
- Department of Electrical and Electronic Engineering, Southern University of Science and Technology, Shenzhen, China
- Jiewei Wu
- School of Electronics and Information Technology, Sun Yat-Sen University, Guangzhou, China
- Shanmei Zeng
- Department of Radiology, The First Affiliated Hospital, Sun Yat-Sen University, Guangzhou, China
- Jianping Chu
- Department of Radiology, The First Affiliated Hospital, Sun Yat-Sen University, Guangzhou, China
- Yilong Liu
- Laboratory of Biomedical Imaging and Signal Processing, The University of Hong Kong, Hong Kong SAR, China
- Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong SAR, China
- Ed X. Wu
- Laboratory of Biomedical Imaging and Signal Processing, The University of Hong Kong, Hong Kong SAR, China
- Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong SAR, China
- Xiaoying Tang
- Department of Electrical and Electronic Engineering, Southern University of Science and Technology, Shenzhen, China
11
Liu L, Wolterink JM, Brune C, Veldhuis RNJ. Anatomy-aided deep learning for medical image segmentation: a review. Phys Med Biol 2021; 66. PMID: 33906186. DOI: 10.1088/1361-6560/abfbf4.
Abstract
Deep learning (DL) has become widely used for medical image segmentation in recent years. However, despite these advances, there are still problems for which DL-based segmentation fails. Recently, some DL approaches achieved a breakthrough by using anatomical information, which is a crucial cue in manual segmentation. In this paper, we provide a review of anatomy-aided DL for medical image segmentation, which systematically summarizes anatomical information categories and corresponding representation methods. We address known and potentially solvable challenges in anatomy-aided DL and present a categorized methodology overview on using anatomical information with DL from over 70 papers. Finally, we discuss the strengths and limitations of the current anatomy-aided DL approaches and suggest potential future work.
Affiliation(s)
- Lu Liu
- Applied Analysis, Department of Applied Mathematics, Faculty of Electrical Engineering, Mathematics and Computer Science, University of Twente, Drienerlolaan 5, 7522 NB, Enschede, The Netherlands
- Data Management and Biometrics, Department of Computer Science, Faculty of Electrical Engineering, Mathematics and Computer Science, University of Twente, Drienerlolaan 5, 7522 NB, Enschede, The Netherlands
- Jelmer M Wolterink
- Applied Analysis, Department of Applied Mathematics, Faculty of Electrical Engineering, Mathematics and Computer Science, University of Twente, Drienerlolaan 5, 7522 NB, Enschede, The Netherlands
- Christoph Brune
- Applied Analysis, Department of Applied Mathematics, Faculty of Electrical Engineering, Mathematics and Computer Science, University of Twente, Drienerlolaan 5, 7522 NB, Enschede, The Netherlands
- Raymond N J Veldhuis
- Data Management and Biometrics, Department of Computer Science, Faculty of Electrical Engineering, Mathematics and Computer Science, University of Twente, Drienerlolaan 5, 7522 NB, Enschede, The Netherlands
12
Cheng G, Ji H, He L. Correcting and reweighting false label masks in brain tumor segmentation. Med Phys 2020; 48:169-177. PMID: 32974920. DOI: 10.1002/mp.14480.
Abstract
PURPOSE Recently, brain tumor segmentation has made important progress. However, the quality of manual labels plays an important role in performance; in practice it can vary greatly, which can substantially mislead the learning process and decrease accuracy. We therefore design a mechanism that combines label correction and sample reweighting to improve the effectiveness of brain tumor segmentation. METHODS We propose a novel sample reweighting and label refinement method, and introduce a novel three-dimensional (3D) generative adversarial network (GAN) to combine these two models into a unified framework. RESULTS Extensive experiments on the BraTS19 dataset demonstrate that our approach obtains competitive results compared with other state-of-the-art approaches when handling false labels in brain tumor segmentation. CONCLUSIONS The 3D GAN-based approach is an effective way to handle false label masks by simultaneously applying label correction and sample reweighting. Our method is robust to variations in tumor shape and background clutter.
Affiliation(s)
- Guohua Cheng
- Institute of Science and Technology for Brain-Inspired Intelligence, Fudan University, Shanghai, 200433, China
- Key Laboratory of Computational Neuroscience and Brain-Inspired Intelligence (Fudan University), Ministry of Education, Shanghai, 200433, China
- Hongli Ji
- Jianpei Technology Co. Ltd, Hangzhou, 311200, China
- Linyang He
- Jianpei Technology Co. Ltd, Hangzhou, 311200, China
13
Zhou C, Ding C, Wang X, Lu Z, Tao D. One-pass Multi-task Networks with Cross-task Guided Attention for Brain Tumor Segmentation. IEEE Trans Image Process 2020; 29:4516-4529. PMID: 32086210. DOI: 10.1109/tip.2020.2973510.
Abstract
Class imbalance has emerged as one of the major challenges for medical image segmentation. The model cascade (MC) strategy, a popular scheme, significantly alleviates the class imbalance issue via running a set of individual deep models for coarse-to-fine segmentation. Despite its outstanding performance, however, this method leads to undesired system complexity and also ignores the correlation among the models. To handle these flaws in the MC approach, we propose in this paper a light-weight deep model, i.e., the One-pass Multi-task Network (OM-Net) to solve class imbalance better than MC does, while requiring only one-pass computation for brain tumor segmentation. First, OM-Net integrates the separate segmentation tasks into one deep model, which consists of shared parameters to learn joint features, as well as task-specific parameters to learn discriminative features. Second, to more effectively optimize OM-Net, we take advantage of the correlation among tasks to design both an online training data transfer strategy and a curriculum learning-based training strategy. Third, we further propose sharing prediction results between tasks, which enables us to design a cross-task guided attention (CGA) module. By following the guidance of the prediction results provided by the previous task, CGA can adaptively recalibrate channel-wise feature responses based on the category-specific statistics. Finally, a simple yet effective post-processing method is introduced to refine the segmentation results of the proposed attention network. Extensive experiments are conducted to demonstrate the effectiveness of the proposed techniques. Most impressively, we achieve state-of-the-art performance on the BraTS 2015 testing set and BraTS 2017 online validation set. Using these proposed approaches, we also won joint third place in the BraTS 2018 challenge among 64 participating teams. The code will be made publicly available at https://github.com/chenhong-zhou/OM-Net.
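The idea of recalibrating channel-wise feature responses from a previous task's prediction can be sketched in a squeeze-and-excitation style: pool channel statistics inside the predicted region, then gate each channel. This is a loose illustration under our own assumptions, not OM-Net's actual CGA module:

```python
import numpy as np

def cross_task_recalibrate(features, prior_prob):
    """SE-style channel recalibration guided by a previous task's prediction:
    pool per-channel statistics inside the predicted region, then gate channels."""
    # features: (C, H, W); prior_prob: (H, W), values in [0, 1].
    region = prior_prob / (prior_prob.sum() + 1e-7)
    stats = (features * region).sum(axis=(1, 2))  # region-weighted channel means
    gate = 1.0 / (1.0 + np.exp(-stats))           # sigmoid gate per channel
    return features * gate[:, None, None]

feats = np.random.rand(8, 16, 16)
prior = np.zeros((16, 16)); prior[4:12, 4:12] = 1.0  # coarse mask from a previous task
out = cross_task_recalibrate(feats, prior)
```

In the real module the gating would be learned; the sketch only shows the data flow: a coarse prediction from one task steers which channels the next task emphasizes.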
14
Hu X, Luo W, Hu J, Guo S, Huang W, Scott MR, Wiest R, Dahlweid M, Reyes M. Brain SegNet: 3D local refinement network for brain lesion segmentation. BMC Med Imaging 2020; 20:17. PMID: 32046685. PMCID: PMC7014943. DOI: 10.1186/s12880-020-0409-2.
Abstract
Accurate segmentation of brain lesions from MR images (MRIs) is important for improving cancer diagnosis, surgical planning, and outcome prediction. However, manual and accurate segmentation of brain lesions from 3D MRIs is highly expensive, time-consuming, and prone to user biases. We present an efficient yet conceptually simple brain segmentation network (referred to as Brain SegNet), a 3D residual framework for automatic voxel-wise segmentation of brain lesions. Our model directly predicts dense voxel segmentations of brain tumor or ischemic stroke regions in 3D brain MRIs. The proposed 3D segmentation network runs at about 0.5 s per MRI, about 50 times faster than previous approaches (Med Image Anal 43:98–111, 2018; Med Image Anal 36:61–78, 2017). Our model is evaluated on the BRATS 2015 benchmark for brain tumor segmentation, where it obtains state-of-the-art results, surpassing the recently published results reported in those works. We further applied the proposed Brain SegNet to ischemic stroke lesion outcome prediction, with impressive results achieved on the Ischemic Stroke Lesion Segmentation (ISLES) 2017 database.
Affiliation(s)
- Xiaojun Hu
- Malong Technologies, Shenzhen, China; Shenzhen Malong Artificial Intelligence Research Center, Shenzhen, China
- Weijian Luo
- Department of Neurosurgery, Second Clinical Medical College of Jinan University (Shenzhen People's Hospital), Shenzhen, China
- Jiliang Hu
- Department of Neurosurgery, Second Clinical Medical College of Jinan University (Shenzhen People's Hospital), Shenzhen, China
- Sheng Guo
- Malong Technologies, Shenzhen, China; Shenzhen Malong Artificial Intelligence Research Center, Shenzhen, China
- Weilin Huang
- Malong Technologies, Shenzhen, China; Shenzhen Malong Artificial Intelligence Research Center, Shenzhen, China
- Matthew R Scott
- Malong Technologies, Shenzhen, China; Shenzhen Malong Artificial Intelligence Research Center, Shenzhen, China
- Roland Wiest
- Imaging A.I. Lab, Insel Data Science Center, Bern University Hospital, Bern, Switzerland
- Michael Dahlweid
- Imaging A.I. Lab, Insel Data Science Center, Bern University Hospital, Bern, Switzerland
- Mauricio Reyes
- Imaging A.I. Lab, Insel Data Science Center, Bern University Hospital, Bern, Switzerland
15
Two-Stage Cascaded U-Net: 1st Place Solution to BraTS Challenge 2019 Segmentation Task. BRAINLESION: GLIOMA, MULTIPLE SCLEROSIS, STROKE AND TRAUMATIC BRAIN INJURIES 2020. [DOI: 10.1007/978-3-030-46640-4_22] [Citation(s) in RCA: 96] [Impact Index Per Article: 24.0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/23/2022]
16
Pereira S, Pinto A, Amorim J, Ribeiro A, Alves V, Silva CA. Adaptive Feature Recombination and Recalibration for Semantic Segmentation With Fully Convolutional Networks. IEEE TRANSACTIONS ON MEDICAL IMAGING 2019; 38:2914-2925. [PMID: 31135354 DOI: 10.1109/tmi.2019.2918096] [Citation(s) in RCA: 32] [Impact Index Per Article: 6.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/09/2023]
Abstract
Fully convolutional networks have been achieving remarkable results in image semantic segmentation while being efficient. Such efficiency results from the capability of segmenting several voxels in a single forward pass, so there is a direct spatial correspondence between a unit in a feature map and the voxel at the same location. In a convolutional layer, the kernel spans all channels and extracts information from them. We observe that linear recombination of feature maps, by increasing the number of channels and then compressing them, may enhance their discriminative power. Moreover, not all feature maps are equally relevant for the classes being predicted. To learn the inter-channel relationships and recalibrate the channels to suppress the less relevant ones, squeeze-and-excitation blocks were proposed in the context of image classification with convolutional neural networks. However, this is not well suited to segmentation with fully convolutional networks, since they segment several objects simultaneously and a feature map may therefore contain relevant information only at some locations. In this paper, we propose a recombination of features and a spatially adaptive recalibration block adapted for semantic segmentation with fully convolutional networks: the SegSE block. Feature maps are recalibrated by considering cross-channel information together with spatial relevance. The experimental results indicate that recombination and recalibration improve the results of a competitive baseline and generalize across three different problems: brain tumor segmentation, stroke penumbra estimation, and ischemic stroke lesion outcome prediction. The obtained results are competitive with or outperform the state of the art in all three applications.
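For contrast with the spatially adaptive SegSE block, the classic squeeze-and-excitation recalibration that this abstract builds on can be sketched in a few lines of pure Python. The scalar weights `w1` and `w2` are illustrative stand-ins for the block's two fully connected layers; in SegSE the global average pool is replaced by spatially local pooling, so the gate varies per location rather than being one number per channel.

```python
import math

def squeeze_excitation(channels, w1, w2):
    """Minimal squeeze-and-excitation gate with per-channel scalar weights.

    channels: list of feature maps, each a flat list of floats.
    w1, w2: lists of scalar weights standing in for the two FC layers.
    """
    out = []
    for ch, a, b in zip(channels, w1, w2):
        z = sum(ch) / len(ch)                # squeeze: global average pool
        h = max(0.0, a * z)                  # excitation: hidden ReLU unit
        g = 1.0 / (1.0 + math.exp(-b * h))   # sigmoid gate in (0, 1)
        out.append([v * g for v in ch])      # recalibrate the channel
    return out
```

Because the gate here is a single scalar per channel, a channel that is relevant only in one image region is scaled uniformly everywhere, which is exactly the limitation SegSE addresses.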
17
Novikov AA, Major D, Wimmer M, Lenis D, Buhler K. Deep Sequential Segmentation of Organs in Volumetric Medical Scans. IEEE TRANSACTIONS ON MEDICAL IMAGING 2019; 38:1207-1215. [PMID: 30452352 DOI: 10.1109/tmi.2018.2881678] [Citation(s) in RCA: 27] [Impact Index Per Article: 5.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/09/2023]
Abstract
Segmentation in 3-D scans is playing an increasingly important role in current clinical practice, supporting diagnosis, tissue quantification, and treatment planning. Current 3-D approaches based on convolutional neural networks usually suffer from at least three main issues caused predominantly by implementation constraints: first, they require resizing the volume to lower-resolution reference dimensions; second, their capacity is very limited due to memory restrictions; and third, all slices of a volume have to be available at any given training or testing time. We address these problems with a U-Net-like architecture consisting of bidirectional convolutional long short-term memory and convolutional, pooling, upsampling, and concatenation layers enclosed in time-distributed wrappers. Our network can either process full volumes in a sequential manner or segment slabs of slices on demand. We demonstrate the performance of our architecture on vertebrae and liver segmentation tasks in 3-D computed tomography scans.
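The bidirectional, slice-sequential processing can be sketched independently of the convolutional details. Below, `step` is a hypothetical stand-in for one ConvLSTM update (the paper's layer is a learned convolutional recurrence); the point is that each slice's output gets recurrent context from both scan directions, which is what lets the network segment slabs on demand without holding the whole volume in memory.

```python
def bidirectional_slice_pass(slices, step):
    """Run a recurrent `step(state, slice) -> state` over a volume's
    slices in both directions and pair the states per slice."""
    fwd, state = [], 0.0
    for s in slices:                 # forward sweep through the volume
        state = step(state, s)
        fwd.append(state)
    bwd, state = [], 0.0
    for s in reversed(slices):       # backward sweep
        state = step(state, s)
        bwd.append(state)
    bwd.reverse()
    # Each slice gets (forward context, backward context).
    return list(zip(fwd, bwd))
```

With a toy update such as `step = lambda h, x: 0.5 * h + x`, the middle slice of `[1, 2, 3]` sees decayed contributions from both neighbors.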
18
Zhou C, Chen S, Ding C, Tao D. Learning Contextual and Attentive Information for Brain Tumor Segmentation. BRAINLESION: GLIOMA, MULTIPLE SCLEROSIS, STROKE AND TRAUMATIC BRAIN INJURIES 2019. [DOI: 10.1007/978-3-030-11726-9_44] [Citation(s) in RCA: 32] [Impact Index Per Article: 6.4] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/04/2022]
19
CU-Net: Cascaded U-Net with Loss Weighted Sampling for Brain Tumor Segmentation. MULTIMODAL BRAIN IMAGE ANALYSIS AND MATHEMATICAL FOUNDATIONS OF COMPUTATIONAL ANATOMY 2019. [DOI: 10.1007/978-3-030-33226-6_12] [Citation(s) in RCA: 18] [Impact Index Per Article: 3.6] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/12/2022]
20
Learning Data Augmentation for Brain Tumor Segmentation with Coarse-to-Fine Generative Adversarial Networks. BRAINLESION: GLIOMA, MULTIPLE SCLEROSIS, STROKE AND TRAUMATIC BRAIN INJURIES 2019. [DOI: 10.1007/978-3-030-11723-8_7] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/12/2022]
21
Jiang J, Hu YC, Tyagi N, Zhang P, Rimner A, Mageras GS, Deasy JO, Veeraraghavan H. Tumor-aware, Adversarial Domain Adaptation from CT to MRI for Lung Cancer Segmentation. MEDICAL IMAGE COMPUTING AND COMPUTER-ASSISTED INTERVENTION : MICCAI ... INTERNATIONAL CONFERENCE ON MEDICAL IMAGE COMPUTING AND COMPUTER-ASSISTED INTERVENTION 2018; 11071:777-785. [PMID: 30294726 PMCID: PMC6169798 DOI: 10.1007/978-3-030-00934-2_86] [Citation(s) in RCA: 54] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/19/2023]
Abstract
We present an adversarial domain adaptation-based deep learning approach for automatic tumor segmentation from T2-weighted MRI. Our approach is composed of two steps: (i) a tumor-aware unsupervised cross-domain adaptation (CT to MRI), followed by (ii) semi-supervised tumor segmentation using a U-Net trained with synthesized MRIs and a limited number of original MRIs. We introduce a novel target-specific loss, called the tumor-aware loss, for unsupervised cross-domain adaptation that helps to preserve tumors on synthesized MRIs produced from CT images. In comparison, state-of-the-art adversarial networks trained without our tumor-aware loss produced MRIs with ill-preserved or missing tumors. All networks were trained using labeled CT images from 377 patients with non-small cell lung cancer obtained from the Cancer Imaging Archive and unlabeled T2-weighted MRIs from a completely unrelated cohort of 6 patients with pre-treatment and 36 on-treatment scans. Next, we combined the 6 labeled pre-treatment MRI scans with the synthesized MRIs to boost tumor segmentation accuracy through semi-supervised learning. Semi-supervised training of cycle-GAN produced a segmentation accuracy of 0.66, computed using the Dice similarity coefficient (DSC). Our method trained with only synthesized MRIs produced an accuracy of 0.74, while the same method trained in a semi-supervised setting produced the best accuracy of 0.80 on the test set. Our results show that tumor-aware adversarial domain adaptation helps to achieve reasonably accurate cancer segmentation from limited MRI data by leveraging large CT datasets.
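A simplified view of the training objective: adversarial and cycle-consistency terms as in a CycleGAN-style setup, plus a tumor-aware term that penalizes the generator for altering the tumor region. The masked-L1 form below is an illustrative stand-in for the paper's tumor-aware loss, not its exact formulation, and the weights `lam_cyc`/`lam_tumor` are hypothetical:

```python
def tumor_aware_total_loss(real_ct, synth_mri, cycled_ct, tumor_mask,
                           adv_loss, lam_cyc=10.0, lam_tumor=1.0):
    """Combine CycleGAN-style objectives with a tumor-aware term.

    Images are flat lists of floats; tumor_mask is a flat 0/1 list.
    adv_loss: precomputed adversarial loss (a float).
    """
    n = len(real_ct)
    # Cycle consistency: CT -> synthesized MRI -> CT should reconstruct
    # the original input.
    cyc = sum(abs(a - b) for a, b in zip(real_ct, cycled_ct)) / n
    # Tumor-aware term: L1 restricted to the tumor region, discouraging
    # the generator from erasing or distorting tumors.
    m = sum(tumor_mask) or 1
    tum = sum(abs(a - b) * t
              for a, b, t in zip(real_ct, synth_mri, tumor_mask)) / m
    return adv_loss + lam_cyc * cyc + lam_tumor * tum
```

Without the tumor-aware term, the generator can minimize the remaining losses while smoothing tumors away, which is the failure mode the abstract reports for plain adversarial training.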
Affiliation(s)
- Jue Jiang
- Medical Physics, Memorial Sloan Kettering Cancer Center
- Yu-Chi Hu
- Medical Physics, Memorial Sloan Kettering Cancer Center
- Neelam Tyagi
- Medical Physics, Memorial Sloan Kettering Cancer Center
- Andreas Rimner
- Radiation Oncology, Memorial Sloan Kettering Cancer Center
22
Mahapatra D, Ge Z, Sedai S, Chakravorty R. Joint Registration And Segmentation Of Xray Images Using Generative Adversarial Networks. MACHINE LEARNING IN MEDICAL IMAGING 2018. [DOI: 10.1007/978-3-030-00919-9_9] [Citation(s) in RCA: 22] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/04/2022]