1. Wang P, Zhang H, Zhu M, Jiang X, Qin J, Yuan Y. MGIML: Cancer Grading With Incomplete Radiology-Pathology Data via Memory Learning and Gradient Homogenization. IEEE Trans Med Imaging 2024;43:2113-2124. [PMID: 38231819] [DOI: 10.1109/tmi.2024.3355142]
Abstract
Taking advantage of multi-modal radiology-pathology data with complementary clinical information for cancer grading is helpful for doctors to improve diagnosis efficiency and accuracy. However, radiology and pathology data have distinct acquisition difficulties and costs, which leads to incomplete-modality data being common in applications. In this work, we propose a Memory- and Gradient-guided Incomplete Modal-modal Learning (MGIML) framework for cancer grading with incomplete radiology-pathology data. Firstly, to remedy missing-modality information, we propose a Memory-driven Hetero-modality Complement (MH-Complete) scheme, which constructs modal-specific memory banks constrained by a coarse-grained memory boosting (CMB) loss to record generic radiology and pathology feature patterns, and develops a cross-modal memory reading strategy enhanced by a fine-grained memory consistency (FMC) loss to take missing-modality information from well-stored memories. Secondly, as gradient conflicts exist between missing-modality situations, we propose a Rotation-driven Gradient Homogenization (RG-Homogenize) scheme, which estimates instance-specific rotation matrices to smoothly change the feature-level gradient directions, and computes confidence-guided homogenization weights to dynamically balance gradient magnitudes. By simultaneously mitigating gradient direction and magnitude conflicts, this scheme well avoids the negative transfer and optimization imbalance problems. Extensive experiments on CPTAC-UCEC and CPTAC-PDA datasets show that the proposed MGIML framework performs favorably against state-of-the-art multi-modal methods on missing-modality situations.
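As an illustrative sketch of the cross-modal memory reading described above (not the authors' implementation), the snippet below retrieves a surrogate feature for a missing modality by softly addressing a learned memory bank; the tensor shapes, temperature, and function names are assumptions.

```python
import torch
import torch.nn.functional as F

def memory_read(query, memory_bank, temperature=0.1):
    """Read a surrogate feature for a missing modality from a memory bank.

    query:       (B, C) features extracted from the available modality
    memory_bank: (M, C) learned prototypes of the missing modality
    Returns:     (B, C) memory-weighted surrogate features
    """
    q = F.normalize(query, dim=-1)
    m = F.normalize(memory_bank, dim=-1)
    attn = torch.softmax(q @ m.t() / temperature, dim=-1)  # (B, M) addressing weights
    return attn @ memory_bank                               # (B, C) weighted read-out
```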
2. Zhang H, Liu J, Liu W, Chen H, Yu Z, Yuan Y, Wang P, Qin J. MHD-Net: Memory-Aware Hetero-Modal Distillation Network for Thymic Epithelial Tumor Typing With Missing Pathology Modality. IEEE J Biomed Health Inform 2024;28:3003-3014. [PMID: 38470599] [DOI: 10.1109/jbhi.2024.3376462]
Abstract
Fusing multi-modal radiology and pathology data with complementary information can improve the accuracy of tumor typing. However, collecting pathology data is difficult since it is high-cost and sometimes only obtainable after the surgery, which limits the application of multi-modal methods in diagnosis. To address this problem, we propose comprehensively learning multi-modal radiology-pathology data in training, and only using uni-modal radiology data in testing. Concretely, a Memory-aware Hetero-modal Distillation Network (MHD-Net) is proposed, which can distill well-learned multi-modal knowledge with the assistance of memory from the teacher to the student. In the teacher, to tackle the challenge in hetero-modal feature fusion, we propose a novel spatial-differentiated hetero-modal fusion module (SHFM) that models spatial-specific tumor information correlations across modalities. As only radiology data is accessible to the student, we store pathology features in the proposed contrast-boosted typing memory module (CTMM) that achieves type-wise memory updating and stage-wise contrastive memory boosting to ensure the effectiveness and generalization of memory items. In the student, to improve the cross-modal distillation, we propose a multi-stage memory-aware distillation (MMD) scheme that reads memory-aware pathology features from CTMM to remedy missing modal-specific information. Furthermore, we construct a Radiology-Pathology Thymic Epithelial Tumor (RPTET) dataset containing paired CT and WSI images with annotations. Experiments on the RPTET and CPTAC-LUAD datasets demonstrate that MHD-Net significantly improves tumor typing and outperforms existing multi-modal methods on missing modality situations.
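A generic sketch of the cross-modal teacher-student distillation idea is given below, assuming a feature-matching term plus softened-label matching; the memory-aware reading of pathology features and the exact weighting used in MHD-Net are not reproduced.

```python
import torch.nn.functional as F

def distillation_loss(student_feat, teacher_feat, student_logits, teacher_logits,
                      T=4.0, alpha=0.5):
    """Generic cross-modal distillation objective: match features and soft labels."""
    # match the student's features to the (frozen) teacher features
    feat_loss = F.mse_loss(student_feat, teacher_feat.detach())
    # match softened class probabilities (standard knowledge distillation term)
    soft_targets = F.softmax(teacher_logits.detach() / T, dim=-1)
    kd_loss = F.kl_div(F.log_softmax(student_logits / T, dim=-1),
                       soft_targets, reduction="batchmean") * (T * T)
    return alpha * feat_loss + (1 - alpha) * kd_loss
```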
3. Natarajan SK, S J, Mathivanan SK, Rajadurai H, M B BAM, Shah MA. Exploring fetal brain tumor glioblastoma symptom verification with self organizing maps and vulnerability data analysis. Sci Rep 2024;14:8738. [PMID: 38627421] [DOI: 10.1038/s41598-024-59111-6]
Abstract
Glioblastoma is a brain tumor that arises when abnormal cells develop in the brain, and it is typically detected with "Magnetic Resonance Imaging" (MRI), which uses a powerful magnetic field, radio waves, and a computer to produce detailed images of the body's internal structures; MRI is a standard diagnostic tool for conditions ranging from brain and spinal cord injuries to tumors and joint problems. The disease is treatable, but if glioblastoma is left untreated the outcome for the child can be fatal, so early diagnosis from MRI scans is essential, and neural-network-based analysis of the images can support that diagnosis. This research applies maximum- and minimum-rationalization of the images together with a boosted division time attribute extraction method for diagnosing glioblastoma. Max-min rationalization is used to make the tumor recognizable in the brain images for treatment efficiency, an image segment is created for image recognition, and the boosted division time attribute extraction method is applied with MRI to extract and recognize the fetal images and locate the glioblastoma with feasible accuracy. The study reports that 45% of adults and 40% of children are affected by the tumor and 5% of cases end in death; to reduce this ratio, the glioblastoma is identified and segmented from the fetal images and the tumor grade is then analyzed using the imaging MRI, with a diagnosis result rated partially high. The accuracy of the proposed TAE-PIS system is 98.12%, achieved with low response time, which is higher than that of the Genetic Algorithm (GA), a Convolutional Neural Network (CNN), a fuzzy-based minimum and maximum neural network (Fuzzy min-max NN), and a kernel-based support vector machine. Specifically, the proposed method achieves reported improvements of 80.82%, 82.13%, 85.61%, and 87.03% over GA, CNN, Fuzzy min-max NN, and the kernel-based support vector machine, respectively.
Affiliation(s)
- Suresh Kumar Natarajan
- School of Computer Science and Engineering, JAIN (Deemed-to-be University), Ramanagara, India
- Jayanthi S
- Department of Information Technology, Guru Nanak Institute of Technology, Ibrahimpatnam, Hyderabad, Telangana, India
- Sandeep Kumar Mathivanan
- School of Computer Science and Engineering, Galgotias University, Greater Noida, 203201, Uttar Pradesh, India
- Hariharan Rajadurai
- School of Computing Science and Engineering, VIT Bhopal University, Bhopal-Indore Highway Kothrikalan, Sehore, MP, India
- Benjula Anbu Malar M B
- School of Computer Science Engineering and Information Systems, Vellore Institute of Technology, Vellore, Tamil Nadu, India
- Mohd Asif Shah
- Kebri Dehar University, Kebri Dehar, 250, Somali, Ethiopia.
- Centre of Research Impact and Outcome, Chitkara University Institute of Engineering and Technology, Chitkara University, Rajpura, 140401, Punjab, India.
- Division of Research and Development, Lovely Professional University, Phagwara, 144001, Punjab, India.
4. Sun Y, Wang C. Brain tumor detection based on a novel and high-quality prediction of the tumor pixel distributions. Comput Biol Med 2024;172:108196. [PMID: 38493601] [DOI: 10.1016/j.compbiomed.2024.108196]
Abstract
The work presented in this paper is in the area of brain tumor detection. We propose a fast detection system with 3D MRI scans of Flair modality. It performs 2 functions, predicting the gray level distribution and location distribution of the pixels in the tumor regions and generating tumor masks with pixel-wise precision. To facilitate 3D data analysis and processing, we introduce a 2D histogram presentation encompassing the gray-level distribution and pixel-location distribution of a 3D object. In the proposed system, specific 2D histograms highlighting tumor-related features are established by exploiting the left-right asymmetry of a brain structure. A modulation function, generated from the input data of each patient case, is applied to the 2D histograms to transform them into coarsely or finely predicted distributions of tumor pixels. The prediction result helps to identify/remove tumor-free slices. The prediction and removal operations are performed to the axial, coronal and sagittal slice series of a brain image, transforming it into a 3D minimum bounding box of its tumor region. The bounding box is utilized to finalize the prediction and generate a 3D tumor mask. The proposed system has been tested extensively with the data of more than 1200 patient cases in BraTS2018∼2021 datasets. The test results demonstrate that the predicted 2D histograms resemble closely the true ones. The system delivers also very good tumor detection results, comparable to those of state-of-the-art CNN systems with mono-modality inputs. They are reproducible and obtained at an extremely low computation cost and without need for training.
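A minimal sketch of such a joint gray-level/location histogram for a 3D FLAIR volume is shown below, assuming zero-valued background and using the axial slice index as the location coordinate; the modulation function and left-right asymmetry analysis from the paper are not reproduced.

```python
import numpy as np

def joint_histogram(volume, n_gray=64, n_pos=64):
    """Build a 2D histogram of (gray level, axial position) for a 3D scan.

    volume: (D, H, W) FLAIR volume; background voxels are assumed to be zero.
    Returns an (n_gray, n_pos) count matrix.
    """
    z, y, x = np.nonzero(volume)          # coordinates of foreground voxels
    gray = volume[z, y, x]                # their gray levels
    hist, _, _ = np.histogram2d(
        gray, z,
        bins=[n_gray, n_pos],
        range=[[gray.min(), gray.max()], [0, volume.shape[0]]])
    return hist
```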
Affiliation(s)
- Yanming Sun
- Department of Electrical and Computer Engineering, Concordia University, 1455 De Maisonneuve Blvd. W, Montreal, Quebec, Canada, H3G 1M8
- Chunyan Wang
- Department of Electrical and Computer Engineering, Concordia University, 1455 De Maisonneuve Blvd. W, Montreal, Quebec, Canada, H3G 1M8.
5. Zhang D, Wang C, Chen T, Chen W, Shen Y. Scalable Swin Transformer network for brain tumor segmentation from incomplete MRI modalities. Artif Intell Med 2024;149:102788. [PMID: 38462288] [DOI: 10.1016/j.artmed.2024.102788]
Abstract
BACKGROUND Deep learning methods have shown great potential in processing multi-modal Magnetic Resonance Imaging (MRI) data, enabling improved accuracy in brain tumor segmentation. However, the performance of these methods can suffer when dealing with incomplete modalities, which is a common issue in clinical practice. Existing solutions, such as missing modality synthesis, knowledge distillation, and architecture-based methods, suffer from drawbacks such as long training times, high model complexity, and poor scalability. METHOD This paper proposes IMS2Trans, a novel lightweight scalable Swin Transformer network by utilizing a single encoder to extract latent feature maps from all available modalities. This unified feature extraction process enables efficient information sharing and fusion among the modalities, resulting in efficiency without compromising segmentation performance even in the presence of missing modalities. RESULTS Two datasets, BraTS 2018 and BraTS 2020, containing incomplete modalities for brain tumor segmentation are evaluated against popular benchmarks. On the BraTS 2018 dataset, our model achieved higher average Dice similarity coefficient (DSC) scores for the whole tumor, tumor core, and enhancing tumor regions (86.57, 75.67, and 58.28, respectively), in comparison with a state-of-the-art model, i.e. mmFormer (86.45, 75.51, and 57.79, respectively). Similarly, on the BraTS 2020 dataset, our model scored higher DSC scores in these three brain tumor regions (87.33, 79.09, and 62.11, respectively) compared to mmFormer (86.17, 78.34, and 60.36, respectively). We also conducted a Wilcoxon test on the experimental results, and the generated p-value confirmed that our model's performance was statistically significant. Moreover, our model exhibits significantly reduced complexity with only 4.47 M parameters, 121.89G FLOPs, and a model size of 77.13 MB, whereas mmFormer comprises 34.96 M parameters, 265.79 G FLOPs, and a model size of 559.74 MB. These indicate our model, being light-weighted with significantly reduced parameters, is still able to achieve better performance than a state-of-the-art model. CONCLUSION By leveraging a single encoder for processing the available modalities, IMS2Trans offers notable scalability advantages over methods that rely on multiple encoders. This streamlined approach eliminates the need for maintaining separate encoders for each modality, resulting in a lightweight and scalable network architecture. The source code of IMS2Trans and the associated weights are both publicly available at https://github.com/hudscomdz/IMS2Trans.
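A toy sketch of the single-encoder idea (one encoder reused for every available modality, with per-modality features fused before decoding) is given below; IMS2Trans itself uses a Swin Transformer backbone, so the convolutional layers and mean fusion here are stand-in assumptions.

```python
import torch
import torch.nn as nn

class SharedEncoderSegmenter(nn.Module):
    """Toy single-encoder design: the same encoder is applied to every available
    modality and the per-modality features are averaged before decoding."""

    def __init__(self, channels=16, n_classes=4):
        super().__init__()
        self.encoder = nn.Sequential(nn.Conv3d(1, channels, 3, padding=1), nn.ReLU())
        self.decoder = nn.Conv3d(channels, n_classes, 1)

    def forward(self, modalities):
        # modalities: list of (B, 1, D, H, W) tensors, one per *available* sequence
        feats = torch.stack([self.encoder(m) for m in modalities], dim=0)
        return self.decoder(feats.mean(dim=0))
```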
Affiliation(s)
- Dongsong Zhang
- School of Big Data and Artificial Intelligence, Xinyang College, Xinyang, 464000, Henan, China; School of Computing and Engineering, University of Huddersfield, Huddersfield, HD13DH, UK
- Changjian Wang
- National Key Laboratory of Parallel and Distributed Computing, Changsha, 410073, Hunan, China
- Tianhua Chen
- School of Computing and Engineering, University of Huddersfield, Huddersfield, HD13DH, UK
- Weidao Chen
- Beijing Infervision Technology Co., Ltd., Beijing, 100020, China
- Yiqing Shen
- Department of Computer Science, Johns Hopkins University, Baltimore, 21218, MD, USA.
6. Liu H, Ni Z, Nie D, Shen D, Wang J, Tang Z. Multimodal Brain Tumor Segmentation Boosted by Monomodal Normal Brain Images. IEEE Trans Image Process 2024;33:1199-1210. [PMID: 38315584] [DOI: 10.1109/tip.2024.3359815]
Abstract
Many deep learning based methods have been proposed for brain tumor segmentation. Most studies focus on the internal structure of deep networks to improve the segmentation accuracy, while valuable external information, such as normal brain appearance, is often ignored. Inspired by the fact that radiologists often screen lesion regions with normal appearance as reference in mind, in this paper, we propose a novel deep framework for brain tumor segmentation, where normal brain images are adopted as reference to compare with tumor brain images in a learned feature space. In this way, features at tumor regions, i.e., tumor-related features, can be highlighted and enhanced for accurate tumor segmentation. It is known that routine tumor brain images are multimodal, while normal brain images are often monomodal. This makes the feature comparison a major issue, i.e., multimodal vs. monomodal. To this end, we present a new feature alignment module (FAM) to make the feature distribution of monomodal normal brain images consistent/inconsistent with multimodal tumor brain images at normal/tumor regions, making the feature comparison effective. Both public (BraTS2022) and in-house tumor brain image datasets are used to evaluate our framework. Experimental results demonstrate that for both datasets, our framework can effectively improve the segmentation accuracy and outperforms the state-of-the-art segmentation methods. Codes are available at https://github.com/hb-liu/Normal-Brain-Boost-Tumor-Segmentation.
7. Liu H, Huang J, Li Q, Guan X, Tseng M. A deep convolutional neural network for the automatic segmentation of glioblastoma brain tumor: Joint spatial pyramid module and attention mechanism network. Artif Intell Med 2024;148:102776. [PMID: 38325925] [DOI: 10.1016/j.artmed.2024.102776]
Abstract
This study proposes a deep convolutional neural network for the automatic segmentation of glioblastoma brain tumors, aiming at replacing the manual segmentation method that is both time-consuming and labor-intensive. Automatic methods face many challenges in finely segmenting sub-regions from multi-sequence magnetic resonance images because of the complexity and variability of glioblastomas, such as the loss of boundary information, misclassified regions, and variable subregion size. To overcome these challenges, this study introduces a spatial pyramid module and an attention mechanism into the automatic segmentation algorithm, which focuses on multi-scale spatial details and context information. The proposed method has been tested on the public benchmark BraTS 2018, BraTS 2019, BraTS 2020 and BraTS 2021 datasets. The Dice scores on the enhancing tumor, whole tumor, and tumor core were 79.90%, 89.63%, and 85.89% on the BraTS 2018 dataset; 77.14%, 89.58%, and 83.33% on the BraTS 2019 dataset; 77.80%, 90.04%, and 83.18% on the BraTS 2020 dataset; and 83.48%, 90.70%, and 88.94% on the BraTS 2021 dataset, offering performance on par with that of state-of-the-art methods with only 1.90 M parameters. In addition, our approach significantly reduces the requirements for experimental equipment, and the average time taken to segment one case is only 1.48 s; these two benefits render the proposed network highly competitive for clinical practice.
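An illustrative sketch of the multi-scale spatial pyramid idea is given below: parallel dilated convolutions capture context at several scales and a 1x1 convolution fuses them; the dilation rates and 2D layout are assumptions rather than the paper's exact module.

```python
import torch
import torch.nn as nn

class SpatialPyramidBlock(nn.Module):
    """Minimal dilated spatial pyramid: parallel 3x3 convolutions with different
    dilation rates capture multi-scale context, then a 1x1 conv fuses them."""

    def __init__(self, in_ch, out_ch, rates=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList(
            [nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r) for r in rates])
        self.fuse = nn.Conv2d(out_ch * len(rates), out_ch, 1)

    def forward(self, x):
        return self.fuse(torch.cat([branch(x) for branch in self.branches], dim=1))
```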
Affiliation(s)
- Hengxin Liu
- School of Microelectronics, Tianjin University, Tianjin, China
- Jingteng Huang
- School of Microelectronics, Tianjin University, Tianjin, China
- Qiang Li
- School of Microelectronics, Tianjin University, Tianjin, China
- Xin Guan
- School of Microelectronics, Tianjin University, Tianjin, China.
- Minglang Tseng
- Institute of Innovation and Circular Economy, Asia University, Taichung, Taiwan; Department of Medical Research, China Medical University Hospital, China Medical University, Taichung, Taiwan; UKM-Graduate School of Business, Universiti Kebangsaan Malaysia, 43000 Bangi, Selangor, Malaysia; Department of Industrial Engineering, Khon Kaen University, 40002, Thailand.
8. Chen Y, Pan Y, Xia Y, Yuan Y. Disentangle First, Then Distill: A Unified Framework for Missing Modality Imputation and Alzheimer's Disease Diagnosis. IEEE Trans Med Imaging 2023;42:3566-3578. [PMID: 37450359] [DOI: 10.1109/tmi.2023.3295489]
Abstract
Multi-modality medical data provide complementary information, and hence have been widely explored for computer-aided AD diagnosis. However, the research is hindered by the unavoidable missing-data problem, i.e., one data modality was not acquired on some subjects due to various reasons. Although the missing data can be imputed using generative models, the imputation process may introduce unrealistic information to the classification process, leading to poor performance. In this paper, we propose the Disentangle First, Then Distill (DFTD) framework for AD diagnosis using incomplete multi-modality medical images. First, we design a region-aware disentanglement module to disentangle each image into inter-modality relevant representation and intra-modality specific representation with emphasis on disease-related regions. To progressively integrate multi-modality knowledge, we then construct an imputation-induced distillation module, in which a lateral inter-modality transition unit is created to impute representation of the missing modality. The proposed DFTD framework has been evaluated against six existing methods on an ADNI dataset with 1248 subjects. The results show that our method has superior performance in both AD-CN classification and MCI-to-AD prediction tasks, substantially over-performing all competing methods.
9. Ahamed MF, Hossain MM, Nahiduzzaman M, Islam MR, Islam MR, Ahsan M, Haider J. A review on brain tumor segmentation based on deep learning methods with federated learning techniques. Comput Med Imaging Graph 2023;110:102313. [PMID: 38011781] [DOI: 10.1016/j.compmedimag.2023.102313]
Abstract
Brain tumors have become a severe medical complication in recent years due to their high fatality rate. Radiologists segment the tumor manually, which is time-consuming, error-prone, and expensive. In recent years, automated segmentation based on deep learning has demonstrated promising results in solving computer vision problems such as image classification and segmentation. Brain tumor segmentation has recently become a prevalent task in medical imaging to determine the tumor location, size, and shape using automated methods. Many researchers have worked on various machine and deep learning approaches to determine the most optimal solution using the convolutional methodology. In this review paper, we discuss the most effective segmentation techniques based on the datasets that are widely used and publicly available. We also survey federated learning methodologies that can enhance global segmentation performance while ensuring privacy. A comprehensive literature review of more than 100 papers is presented to summarize the most recent techniques in segmentation and multi-modality information. Finally, we concentrate on unsolved problems in brain tumor segmentation and a client-based federated model training strategy. Based on this review, future researchers will understand the optimal solution path to solve these issues.
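The review discusses client-based federated training; a minimal sketch of the standard FedAvg aggregation step (a common federated strategy, not a method proposed in the review) is shown below.

```python
import copy

def federated_average(client_state_dicts, client_sizes):
    """FedAvg: weight each client's parameters by its local dataset size."""
    total = float(sum(client_sizes))
    avg = copy.deepcopy(client_state_dicts[0])
    for key in avg:
        if avg[key].dtype.is_floating_point:
            avg[key] = sum(sd[key] * (n / total)
                           for sd, n in zip(client_state_dicts, client_sizes))
        # non-float buffers (e.g. batch counters) are kept from the first client
    return avg
```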
Affiliation(s)
- Md Faysal Ahamed
- Department of Computer Science & Engineering, Rajshahi University of Engineering & Technology, Rajshahi 6204, Bangladesh
- Md Munawar Hossain
- Department of Electrical & Computer Engineering, Rajshahi University of Engineering & Technology, Rajshahi 6204, Bangladesh
- Md Nahiduzzaman
- Department of Electrical & Computer Engineering, Rajshahi University of Engineering & Technology, Rajshahi 6204, Bangladesh
- Md Rabiul Islam
- Department of Computer Science & Engineering, Rajshahi University of Engineering & Technology, Rajshahi 6204, Bangladesh
- Md Robiul Islam
- Department of Electrical & Computer Engineering, Rajshahi University of Engineering & Technology, Rajshahi 6204, Bangladesh
- Mominul Ahsan
- Department of Computer Science, University of York, Deramore Lane, Heslington, York YO10 5GH, UK
- Julfikar Haider
- Department of Engineering, Manchester Metropolitan University, Chester St, Manchester M1 5GD, UK.
10. Wu S, Cao Y, Li X, Liu Q, Ye Y, Liu X, Zeng L, Tian M. Attention-guided multi-scale context aggregation network for multi-modal brain glioma segmentation. Med Phys 2023;50:7629-7640. [PMID: 37151131] [DOI: 10.1002/mp.16452]
Abstract
BACKGROUND Accurate segmentation of brain glioma is a critical prerequisite for clinical diagnosis, surgical planning and treatment evaluation. In current clinical workflow, physicians typically perform delineation of brain tumor subregions slice-by-slice, which is more susceptible to variabilities in raters and also time-consuming. Besides, even though convolutional neural networks (CNNs) are driving progress, the performance of standard models still have some room for further improvement. PURPOSE To deal with these issues, this paper proposes an attention-guided multi-scale context aggregation network (AMCA-Net) for the accurate segmentation of brain glioma in the magnetic resonance imaging (MRI) images with multi-modalities. METHODS AMCA-Net extracts the multi-scale features from the MRI images and fuses the extracted discriminative features via a self-attention mechanism for brain glioma segmentation. The extraction is performed via a series of down-sampling, convolution layers, and the global context information guidance (GCIG) modules are developed to fuse the features extracted for contextual features. At the end of the down-sampling, a multi-scale fusion (MSF) module is designed to exploit and combine all the extracted multi-scale features. Each of the GCIG and MSF modules contain a channel attention (CA) module that can adaptively calibrate feature responses and emphasize the most relevant features. Finally, multiple predictions with different resolutions are fused through different weightings given by a multi-resolution adaptation (MRA) module instead of the use of averaging or max-pooling to improve the final segmentation results. RESULTS Datasets used in this paper are publicly accessible, that is, the Multimodal Brain Tumor Segmentation Challenges 2018 (BraTS2018) and 2019 (BraTS2019). BraTS2018 contains 285 patient cases and BraTS2019 contains 335 cases. Simulations show that the AMCA-Net has better or comparable performance against that of the other state-of-the-art models. In terms of the Dice score and Hausdorff 95 for the BraTS2018 dataset, 90.4% and 10.2 mm for the whole tumor region (WT), 83.9% and 7.4 mm for the tumor core region (TC), 80.2% and 4.3 mm for the enhancing tumor region (ET), whereas the Dice score and Hausdorff 95 for the BraTS2019 dataset, 91.0% and 10.7 mm for the WT, 84.2% and 8.4 mm for the TC, 80.1% and 4.8 mm for the ET. CONCLUSIONS The proposed AMCA-Net performs comparably well in comparison to several state-of-the-art neural net models in identifying the areas involving the peritumoral edema, enhancing tumor, and necrotic and non-enhancing tumor core of brain glioma, which has great potential for clinical practice. In future research, we will further explore the feasibility of applying AMCA-Net to other similar segmentation tasks.
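A minimal squeeze-and-excitation style sketch of the channel attention idea, which adaptively recalibrates feature responses, is given below; the reduction ratio and 2D layout are assumptions rather than the exact AMCA-Net configuration.

```python
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel attention: global pooling produces a
    per-channel descriptor that re-weights (calibrates) the feature maps."""

    def __init__(self, channels, reduction=8):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1), nn.ReLU(),
            nn.Conv2d(channels // reduction, channels, 1), nn.Sigmoid())

    def forward(self, x):
        return x * self.gate(x)  # broadcast per-channel weights over H and W
```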
Affiliation(s)
- Shaozhi Wu
- School of Information and Communication Engineering, University of Electronic Science and Technology of China, Chengdu, China
- Yunjian Cao
- Yangtze Delta Region Institute (Quzhou), University of Electronic Science and Technology of China, Quzhou, China
- Xinke Li
- West China School of Medicine, Sichuan University, Chengdu, China
- Qiyu Liu
- Radiology Department, Mianyang Central Hospital, Mianyang, China
- Yuyun Ye
- Department of Electrical and Computer Engineering, University of Tulsa, Tulsa, Oklahoma, USA
- Xingang Liu
- School of Information and Communication Engineering, University of Electronic Science and Technology of China, Chengdu, China
- Liaoyuan Zeng
- School of Information and Communication Engineering, University of Electronic Science and Technology of China, Chengdu, China
- Miao Tian
- School of Information and Communication Engineering, University of Electronic Science and Technology of China, Chengdu, China
11. Gu Y, Otake Y, Uemura K, Soufi M, Takao M, Talbot H, Okada S, Sugano N, Sato Y. Bone mineral density estimation from a plain X-ray image by learning decomposition into projections of bone-segmented computed tomography. Med Image Anal 2023;90:102970. [PMID: 37774535] [DOI: 10.1016/j.media.2023.102970]
Abstract
Osteoporosis is a prevalent bone disease that causes fractures in fragile bones, leading to a decline in daily living activities. Dual-energy X-ray absorptiometry (DXA) and quantitative computed tomography (QCT) are highly accurate for diagnosing osteoporosis; however, these modalities require special equipment and scan protocols. To frequently monitor bone health, low-cost, low-dose, and ubiquitously available diagnostic methods are highly anticipated. In this study, we aim to perform bone mineral density (BMD) estimation from a plain X-ray image for opportunistic screening, which is potentially useful for early diagnosis. Existing methods have used multi-stage approaches consisting of extraction of the region of interest and simple regression to estimate BMD, which require a large amount of training data. Therefore, we propose an efficient method that learns decomposition into projections of bone-segmented QCT for BMD estimation under limited datasets. The proposed method achieved high accuracy in BMD estimation, where Pearson correlation coefficients of 0.880 and 0.920 were observed for DXA-measured BMD and QCT-measured BMD estimation tasks, respectively, and the root mean square of the coefficient of variation values were 3.27 to 3.79% for four measurements with different poses. Furthermore, we conducted extensive validation experiments, including multi-pose, uncalibrated-CT, and compression experiments toward actual application in routine clinical practice.
Affiliation(s)
- Yi Gu
- Graduate School of Science and Technology, Nara Institute of Science and Technology, Ikoma, Nara 630-0192, Japan; CentraleSupélec, Université Paris-Saclay, Inria, Gif-sur-Yvette 91190, France.
- Yoshito Otake
- Graduate School of Science and Technology, Nara Institute of Science and Technology, Ikoma, Nara 630-0192, Japan.
- Keisuke Uemura
- Department of Orthopeadic Medical Engineering, Osaka University Graduate School of Medicine, Suita, Osaka 565-0871, Japan.
- Mazen Soufi
- Graduate School of Science and Technology, Nara Institute of Science and Technology, Ikoma, Nara 630-0192, Japan
- Masaki Takao
- Department of Bone and Joint Surgery, Ehime University Graduate School of Medicine, Toon, Ehime 791-0295, Japan
- Hugues Talbot
- CentraleSupélec, Université Paris-Saclay, Inria, Gif-sur-Yvette 91190, France
- Seiji Okada
- Department of Orthopaedics, Osaka University Graduate School of Medicine, Suita, Osaka 565-0871, Japan
- Nobuhiko Sugano
- Department of Orthopeadic Medical Engineering, Osaka University Graduate School of Medicine, Suita, Osaka 565-0871, Japan
- Yoshinobu Sato
- Graduate School of Science and Technology, Nara Institute of Science and Technology, Ikoma, Nara 630-0192, Japan.
12. Feng X, Ghimire K, Kim DD, Chandra RS, Zhang H, Peng J, Han B, Huang G, Chen Q, Patel S, Bettagowda C, Sair HI, Jones C, Jiao Z, Yang L, Bai H. Brain Tumor Segmentation for Multi-Modal MRI with Missing Information. J Digit Imaging 2023;36:2075-2087. [PMID: 37340197] [PMCID: PMC10501967] [DOI: 10.1007/s10278-023-00860-7]
Abstract
Deep convolutional neural networks (DCNNs) have shown promise in brain tumor segmentation from multi-modal MRI sequences, accommodating heterogeneity in tumor shape and appearance. The fusion of multiple MRI sequences allows networks to explore complementary tumor information for segmentation. However, developing a network that maintains clinical relevance in situations where certain MRI sequence(s) might be unavailable or unusable poses a significant challenge. While one solution is to train multiple models with different MRI sequence combinations, it is impractical to train every model from all possible sequence combinations. In this paper, we propose a DCNN-based brain tumor segmentation framework incorporating a novel sequence dropout technique in which networks are trained to be robust to missing MRI sequences while employing all other available sequences. Experiments were performed on the RSNA-ASNR-MICCAI BraTS 2021 Challenge dataset. When all MRI sequences were available, there were no significant differences in performance between the models with and without dropout for enhancing tumor (ET), tumor core (TC), and whole tumor (WT) (p-values 1.000, 1.000, 0.799, respectively), demonstrating that the addition of dropout improves robustness without hindering overall performance. When key sequences were unavailable, the network with sequence dropout performed significantly better. For example, when tested on only T1, T2, and FLAIR sequences together, DSC for ET, TC, and WT increased from 0.143 to 0.486, 0.431 to 0.680, and 0.854 to 0.901, respectively. Sequence dropout represents a relatively simple yet effective approach for brain tumor segmentation with missing MRI sequences.
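Sequence dropout, as described above, randomly removes whole MRI sequences during training so the network learns to cope with missing inputs. A minimal sketch is shown below; the drop probability and the rule of keeping at least one sequence per sample are assumptions.

```python
import torch

def sequence_dropout(x, p=0.25, training=True):
    """Randomly zero out whole MRI sequences (channels) during training.

    x: (B, S, D, H, W) tensor with one channel per MRI sequence.
    """
    if not training:
        return x
    keep = (torch.rand(x.shape[0], x.shape[1], device=x.device) > p).float()
    # guarantee that at least one sequence per sample survives
    empty = keep.sum(dim=1) == 0
    keep[empty, 0] = 1.0
    return x * keep[:, :, None, None, None]
```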
Affiliation(s)
- Xue Feng
- Biomedical Engineering, University of Virginia, 22903, Charlottesville, VA, USA
- Carina Medical LLC, Lexington, KY, 40513, USA
- Daniel D Kim
- Warren Alpert Medical School of Brown University, Providence, RI, USA
- Department of Diagnostic Imaging, Rhode Island Hospital, Providence, RI, USA
- Rajat S Chandra
- Perelman School of Medicine at the University of Pennsylvania, Philadelphia, PA, USA
- Helen Zhang
- Warren Alpert Medical School of Brown University, Providence, RI, USA
- Department of Diagnostic Imaging, Rhode Island Hospital, Providence, RI, USA
- Jian Peng
- Department of Neurology, Second Xiangya Hospital, Changsha, China
- Binghong Han
- Department of Neurology, Second Xiangya Hospital, Changsha, China
- Quan Chen
- Carina Medical LLC, Lexington, KY, 40513, USA
- Radiation Medicine, University of Kentucky, Lexington, KY, 40536, USA
- Sohil Patel
- Radiology and Medical Imaging, University of Virginia, 22903, Charlottesville, VA, USA
- Chetan Bettagowda
- Department of Radiology and Radiological Science, Johns Hopkins University, 601 N Caroline St, Baltimore, MD, 21287, USA
- Haris I Sair
- Department of Radiology and Radiological Science, Johns Hopkins University, 601 N Caroline St, Baltimore, MD, 21287, USA
- The Malone Center for Engineering in Healthcare, The Whiting School of Engineering, Johns Hopkins University, Baltimore, MD, USA
- Craig Jones
- Department of Radiology and Radiological Science, Johns Hopkins University, 601 N Caroline St, Baltimore, MD, 21287, USA
- The Malone Center for Engineering in Healthcare, The Whiting School of Engineering, Johns Hopkins University, Baltimore, MD, USA
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA
- Zhicheng Jiao
- Warren Alpert Medical School of Brown University, Providence, RI, USA
- Department of Diagnostic Imaging, Rhode Island Hospital, Providence, RI, USA
- Li Yang
- Department of Neurology, Second Xiangya Hospital, Changsha, China.
- Harrison Bai
- Department of Radiology and Radiological Science, Johns Hopkins University, 601 N Caroline St, Baltimore, MD, 21287, USA.
13. Choi Y, Al-Masni MA, Jung KJ, Yoo RE, Lee SY, Kim DH. A single stage knowledge distillation network for brain tumor segmentation on limited MR image modalities. Comput Methods Programs Biomed 2023;240:107644. [PMID: 37307766] [DOI: 10.1016/j.cmpb.2023.107644]
Abstract
BACKGROUND AND OBJECTIVE Precisely segmenting brain tumors using multimodal Magnetic Resonance Imaging (MRI) is an essential task for early diagnosis, disease monitoring, and surgical planning. Unfortunately, the complete four image modalities utilized in the well-known BraTS benchmark dataset: T1, T2, Fluid-Attenuated Inversion Recovery (FLAIR), and T1 Contrast-Enhanced (T1CE) are not regularly acquired in clinical practice due to the high cost and long acquisition time. Rather, it is common to utilize limited image modalities for brain tumor segmentation. METHODS In this paper, we propose a single stage learning of knowledge distillation algorithm that derives information from the missing modalities for better segmentation of brain tumors. Unlike the previous works that adopted a two-stage framework to distill the knowledge from a pre-trained network into a student network, where the latter network is trained on limited image modality, we train both models simultaneously using a single-stage knowledge distillation algorithm. We transfer the information by reducing the redundancy from a teacher network trained on full image modalities to the student network using Barlow Twins loss on a latent-space level. To distill the knowledge on the pixel level, we further employ a deep supervision idea that trains the backbone networks of both teacher and student paths using Cross-Entropy loss. RESULTS We demonstrate that the proposed single-stage knowledge distillation approach enables improving the performance of the student network in each tumor category with overall dice scores of 91.11% for Tumor Core, 89.70% for Enhancing Tumor, and 92.20% for Whole Tumor in the case of only using the FLAIR and T1CE images, outperforming the state-of-the-art segmentation methods. CONCLUSIONS The outcomes of this work prove the feasibility of exploiting the knowledge distillation in segmenting brain tumors using limited image modalities and hence make it closer to clinical practices.
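A minimal sketch of the Barlow Twins redundancy-reduction objective used to match teacher and student latent features is given below; the trade-off weight is an assumption.

```python
import torch

def barlow_twins_loss(z_teacher, z_student, lambd=5e-3):
    """Redundancy-reduction loss between teacher and student latent embeddings.

    z_teacher, z_student: (N, D) embeddings from the two paths.
    """
    # standardize each embedding dimension over the batch
    z1 = (z_teacher - z_teacher.mean(0)) / (z_teacher.std(0) + 1e-6)
    z2 = (z_student - z_student.mean(0)) / (z_student.std(0) + 1e-6)
    c = (z1.T @ z2) / z1.shape[0]        # (D, D) cross-correlation matrix
    diag = torch.diagonal(c)
    on_diag = (diag - 1).pow(2).sum()    # push correlations of matching dims to 1
    off_diag = c.pow(2).sum() - diag.pow(2).sum()  # decorrelate the rest
    return on_diag + lambd * off_diag
```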
Affiliation(s)
- Yoonseok Choi
- Department of Electrical and Electronic Engineering, College of Engineering, Yonsei University, Seoul 03722, Republic of Korea
- Mohammed A Al-Masni
- Department of Artificial Intelligence, College of Software & Convergence Technology, Daeyang AI Center, Sejong University, Seoul 05006, Republic of Korea
- Kyu-Jin Jung
- Department of Electrical and Electronic Engineering, College of Engineering, Yonsei University, Seoul 03722, Republic of Korea
- Roh-Eul Yoo
- Department of Radiology, Seoul National University Hospital, 101 Daehak-ro Jongno-gu, Seoul 03080, Republic of Korea; Department of Radiology, Seoul National University College of Medicine, 103 Daehak-ro Jongno-gu, Seoul 03080, Republic of Korea
- Seong-Yeong Lee
- Department of Radiology, Seoul National University Hospital, 101 Daehak-ro Jongno-gu, Seoul 03080, Republic of Korea
- Dong-Hyun Kim
- Department of Electrical and Electronic Engineering, College of Engineering, Yonsei University, Seoul 03722, Republic of Korea.
14. Diao Y, Li F, Li Z. Joint learning-based feature reconstruction and enhanced network for incomplete multi-modal brain tumor segmentation. Comput Biol Med 2023;163:107234. [PMID: 37450967] [DOI: 10.1016/j.compbiomed.2023.107234]
Abstract
Multimodal Magnetic Resonance Imaging (MRI) can provide valuable complementary information and substantially enhance the performance of brain tumor segmentation. However, it is common for certain modalities to be absent or missing during clinical diagnosis, which can significantly impair segmentation techniques that rely on complete modalities. Current advanced methods attempt to address this challenge by developing shared feature representations via modal fusion to handle different missing-modality situations. Considering the importance of missing-modality information in multimodal segmentation, this paper utilizes a feature reconstruction method to recover the missing information and proposes a joint learning-based feature reconstruction and enhancement method for incomplete-modality brain tumor segmentation. The method leverages an information learning mechanism to transfer information from the complete modality to a single modality, enabling it to obtain complete brain tumor information even without the support of other modalities. Additionally, the method incorporates a module for reconstructing missing-modality features, which recovers fused features of the absent modality by utilizing the abundant potential information obtained from the available modalities. Furthermore, the feature enhancement mechanism improves the shared feature representation by utilizing the information obtained from the reconstructed missing modalities. These processes enable the method to obtain more comprehensive information regarding brain tumors in various missing-modality circumstances, thereby enhancing the model's robustness. The performance of the proposed model was evaluated on BraTS datasets and compared with other deep learning algorithms using Dice similarity scores. On the BraTS2018 dataset, the proposed algorithm achieved Dice similarity scores of 86.28%, 77.02%, and 59.64% for whole tumors, tumor cores, and enhancing tumors, respectively. These results demonstrate the superiority of our framework over state-of-the-art methods in missing-modality situations.
Affiliation(s)
- Yueqin Diao
- Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming 650500, China; Yunnan Key Laboratory of Artificial Intelligence, Kunming 650500, China.
- Fan Li
- Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming 650500, China; Yunnan Key Laboratory of Artificial Intelligence, Kunming 650500, China.
- Zhiyuan Li
- Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming 650500, China; Yunnan Key Laboratory of Artificial Intelligence, Kunming 650500, China.
15. Zhou T, Zhu S. Uncertainty quantification and attention-aware fusion guided multi-modal MR brain tumor segmentation. Comput Biol Med 2023;163:107142. [PMID: 37331100] [DOI: 10.1016/j.compbiomed.2023.107142]
Abstract
Brain tumors are among the most aggressive cancers in the world, and accurate brain tumor segmentation plays a critical role in clinical diagnosis and treatment planning. Although deep learning models have presented remarkable success in medical segmentation, they can only obtain the segmentation map without capturing the segmentation uncertainty. To achieve accurate and safe clinical results, it is necessary to produce extra uncertainty maps to assist the subsequent segmentation revision. To this end, we propose to exploit uncertainty quantification in the deep learning model and apply it to multi-modal brain tumor segmentation. In addition, we develop an effective attention-aware multi-modal fusion method to learn the complementary feature information from the multiple MR modalities. First, a multi-encoder-based 3D U-Net is proposed to obtain the initial segmentation results. Then, an estimated Bayesian model is presented to measure the uncertainty of the initial segmentation results. Finally, the obtained uncertainty maps are integrated into a deep learning-based segmentation network, serving as additional constraint information to further refine the segmentation results. The proposed network is evaluated on the publicly available BraTS 2018 and BraTS 2019 datasets. The experimental results demonstrate that the proposed method outperforms the previous state-of-the-art methods on the Dice score, Hausdorff distance and Sensitivity metrics. Furthermore, the proposed components can be easily applied to other network architectures and other computer vision fields.
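One common way to approximate the Bayesian uncertainty estimate described above is Monte Carlo dropout, sketched below (not necessarily the estimator used in the paper): several stochastic forward passes yield a mean prediction and a voxel-wise variance that can serve as the uncertainty map.

```python
import torch
import torch.nn as nn

@torch.no_grad()
def mc_dropout_uncertainty(model, x, n_samples=10):
    """Monte Carlo dropout: repeated stochastic forward passes give a mean
    segmentation probability and a per-voxel variance (uncertainty map)."""
    model.eval()
    for m in model.modules():
        if isinstance(m, (nn.Dropout, nn.Dropout2d, nn.Dropout3d)):
            m.train()  # keep only the dropout layers stochastic at test time
    probs = torch.stack([torch.softmax(model(x), dim=1) for _ in range(n_samples)])
    return probs.mean(dim=0), probs.var(dim=0)
```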
Affiliation(s)
- Tongxue Zhou
- School of Information Science and Technology, Hangzhou Normal University, Hangzhou 311121, China
- Shan Zhu
- School of Life and Environmental Science, Hangzhou Normal University, Hangzhou, 311121, China.
16. Murmu A, Kumar P. A novel Gateaux derivatives with efficient DCNN-Resunet method for segmenting multi-class brain tumor. Med Biol Eng Comput 2023. [PMID: 37338739] [DOI: 10.1007/s11517-023-02824-z]
Abstract
In hospitals and pathology, observing the features and locations of brain tumors in Magnetic Resonance Images (MRI) is a crucial task for assisting medical professionals in both treatment and diagnosis. The multi-class information about the brain tumor is often obtained from the patient's MRI dataset. However, this information may vary in different shapes and sizes for various brain tumors, making it difficult to detect their locations in the brain. To resolve these issues, a novel customized Deep Convolution Neural Network (DCNN) based Residual-Unet (ResUnet) model with Transfer Learning (TL) is proposed for predicting the locations of the brain tumor in an MRI dataset. The DCNN model has been used to extract the features from input images and select the Region Of Interest (ROI) by using the TL technique for training it faster. Furthermore, the min-max normalizing approach is used to enhance the color intensity value for particular ROI boundary edges in the brain tumor images. Specifically, the boundary edges of the brain tumors have been detected by utilizing Gateaux Derivatives (GD) method to identify the multi-class brain tumors precisely. The proposed scheme has been validated on two datasets namely the brain tumor, and Figshare MRI datasets for detecting multi-class Brain Tumor Segmentation (BTS).The experimental results have been analyzed by evaluation metrics namely, accuracy (99.78, and 99.03), Jaccard Coefficient (93.04, and 94.95), Dice Factor Coefficient (DFC) (92.37, and 91.94), Mean Absolute Error (MAE) (0.0019, and 0.0013), and Mean Squared Error (MSE) (0.0085, and 0.0012) for proper validation. The proposed system outperforms the state-of-the-art segmentation models on the MRI brain tumor dataset.
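The min-max normalization step mentioned above rescales ROI intensities so that boundary edges stand out. A minimal sketch is given below; the epsilon guard is an added assumption.

```python
import numpy as np

def min_max_normalize(roi, eps=1e-8):
    """Rescale ROI intensities to [0, 1] so that boundary edges are easier to detect."""
    roi = roi.astype(np.float32)
    return (roi - roi.min()) / (roi.max() - roi.min() + eps)
```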
Affiliation(s)
- Anita Murmu
- Computer Science and Engineering, National Institute of Technology Patna, Ashok Rajpath, Patna, 800005, Bihar, India.
- Piyush Kumar
- Computer Science and Engineering, National Institute of Technology Patna, Ashok Rajpath, Patna, 800005, Bihar, India
17. Jia Z, Zhu H, Zhu J, Ma P. Two-Branch network for brain tumor segmentation using attention mechanism and super-resolution reconstruction. Comput Biol Med 2023;157:106751. [PMID: 36934534] [DOI: 10.1016/j.compbiomed.2023.106751]
Abstract
Accurate segmentation of brain tumors plays an important role in MRI diagnosis and treatment monitoring of brain tumors. However, the extent of the lesion in each patient's brain tumor region is usually inconsistent, with large structural differences, and brain tumor MR images are characterized by low contrast and blur, so current deep learning algorithms often cannot achieve accurate segmentation. To address this problem, we propose a novel end-to-end brain tumor segmentation algorithm by integrating an improved 3D U-Net network and super-resolution image reconstruction into one framework. In addition, a coordinate attention module is embedded before the upsampling operation of the backbone network, which enhances the capture of local texture feature information and global location feature information. To demonstrate the segmentation results of the proposed algorithm on different brain tumor MR images, we trained and evaluated the proposed algorithm on BraTS datasets and compared it with other deep learning algorithms by Dice similarity scores. On the BraTS2021 dataset, the proposed algorithm achieves Dice similarity scores of 89.61%, 88.30%, and 91.05%, and Hausdorff distances (95%) of 1.414 mm, 7.810 mm, and 4.583 mm for the enhancing tumors, tumor cores and whole tumors, respectively. The experimental results show that our method outperforms the baseline 3D U-Net method and yields good performance on different datasets, indicating that it is robust for segmenting brain tumor MR images whose structures vary considerably.
Affiliation(s)
- Zhaohong Jia
- School of Internet, Anhui University, Hefei 230039, China
- Hongxin Zhu
- School of Internet, Anhui University, Hefei 230039, China
- Junan Zhu
- School of Internet, Anhui University, Hefei 230039, China.
- Ping Ma
- School of Internet, Anhui University, Hefei 230039, China
18. Liu Z, Wei J, Li R, Zhou J. Learning multi-modal brain tumor segmentation from privileged semi-paired MRI images with curriculum disentanglement learning. Comput Biol Med 2023;159:106927. [PMID: 37105113] [DOI: 10.1016/j.compbiomed.2023.106927]
Abstract
Since the brain is the human body's primary command and control center, brain cancer is one of the most dangerous cancers. Automatic segmentation of brain tumors from multi-modal images is important in diagnosis and treatment. Due to the difficulties in obtaining multi-modal paired images in clinical practice, recent studies segment brain tumors solely relying on unpaired images and discarding the available paired images. Although these models solve the dependence on paired images, they cannot fully exploit the complementary information from different modalities, resulting in low unimodal segmentation accuracy. Hence, this work studies the unimodal segmentation with privileged semi-paired images, i.e., limited paired images are introduced to the training phase. Specifically, we present a novel two-step (intra-modality and inter-modality) curriculum disentanglement learning framework. The modality-specific style codes describe the attenuation of tissue features and image contrast, and modality-invariant content codes contain anatomical and functional information extracted from the input images. Besides, we address the problem of unthorough decoupling by introducing constraints on the style and content spaces. Experiments on the BraTS2020 dataset highlight that our model outperforms the competing models on unimodal segmentation, achieving average dice scores of 82.91%, 72.62%, and 54.80% for WT (the whole tumor), TC (the tumor core), and ET (the enhancing tumor), respectively. Finally, we further evaluate our model's variable multi-modal brain tumor segmentation performance by introducing a fusion block (TFusion). The experimental results reveal that our model achieves the best WT segmentation performance for all 15 possible modality combinations with 87.31% average accuracy. In summary, we propose a curriculum disentanglement learning framework for unimodal segmentation with privileged semi-paired images. Moreover, the benefits of the improved unimodal segmentation extend to variable multi-modal segmentation, demonstrating that improving the unimodal segmentation performance is significant for brain tumor segmentation with missing modalities. Our code is available at https://github.com/scut-cszcl/SpBTS.
Affiliation(s)
- Zecheng Liu
- School of Computer Science and Engineering, South China University of Technology, Guangzhou, China.
- Jia Wei
- School of Computer Science and Engineering, South China University of Technology, Guangzhou, China.
- Rui Li
- Golisano College of Computing and Information Sciences, Rochester Institute of Technology, Rochester, NY, USA.
- Jianlong Zhou
- Data Science Institute, University of Technology Sydney, Ultimo, NSW 2007, Australia.
19. Qian S, Wang C. COM: Contrastive Masked-attention model for incomplete multimodal learning. Neural Netw 2023;162:443-455. [PMID: 36965274] [DOI: 10.1016/j.neunet.2023.03.003]
Abstract
Most multimodal learning methods assume that all modalities are always available in data. However, in real-world applications, the assumption is often violated due to privacy protection, sensor failure etc. Previous works for incomplete multimodal learning often suffer from one of the following drawbacks: introducing noise, lacking flexibility to missing patterns and failing to capture interactions between modalities. To overcome these challenges, we propose a COntrastive Masked-attention model (COM). The framework performs cross-modal contrastive learning with GAN-based augmentation to reduce modality gap, and employs a masked-attention model to capture interactions between modalities. The augmentation adapts cross-modal contrastive learning to suit incomplete case by a two-player game, improving the effectiveness of multimodal representations. Interactions between modalities are modeled by stacking self-attention blocks, and attention masks limit them on the observed modalities to avoid extra noise. All kinds of modality combinations share a unified architecture, so the model is flexible to different missing patterns. Extensive experiments on six datasets demonstrate the effectiveness and robustness of the proposed method for incomplete multimodal learning.
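A minimal sketch of masked self-attention over per-modality tokens is shown below, where a key-padding mask built from the modality-presence indicator keeps interactions on the observed modalities only; the embedding size, head count, and zero-filling of missing tokens are assumptions.

```python
import torch
import torch.nn as nn

class MaskedModalityAttention(nn.Module):
    """Self-attention over modality tokens in which missing modalities are
    masked out, so interactions are modelled only between observed modalities."""

    def __init__(self, dim, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, tokens, present):
        # tokens:  (B, M, dim) one embedding per modality (zeros if missing)
        # present: (B, M) boolean mask, True where the modality was observed
        out, _ = self.attn(tokens, tokens, tokens, key_padding_mask=~present)
        return out
```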
Affiliation(s)
- Shuwei Qian
- State Key Laboratory for Novel Software Technology, Nanjing University, Nanjing, 210023, China; Department of Computer Science and Technology, Nanjing University, Nanjing, 210023, China.
- Chongjun Wang
- State Key Laboratory for Novel Software Technology, Nanjing University, Nanjing, 210023, China; Department of Computer Science and Technology, Nanjing University, Nanjing, 210023, China.
20. Modality-level cross-connection and attentional feature fusion based deep neural network for multi-modal brain tumor segmentation. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2022.104524]
|
21
|
Mahesh Kumar G, Parthasarathy E. Development of an enhanced U-Net model for brain tumor segmentation with optimized architecture. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2022.104427] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/04/2022]
|
22
|
Xu W, Bian Y, Lu Y, Meng Q, Zhu W, Shi F, Chen X, Shao C, Xiang D. Semi-supervised interactive fusion network for MR image segmentation. Med Phys 2023; 50:1586-1600. [PMID: 36345139 DOI: 10.1002/mp.16072] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/13/2022] [Revised: 10/06/2022] [Accepted: 10/15/2022] [Indexed: 11/11/2022] Open
Abstract
BACKGROUND Medical image segmentation is an important task in the diagnosis and treatment of cancers. The low contrast and highly flexible anatomical structures make it challenging to accurately segment the organs or lesions. PURPOSE To improve the segmentation accuracy of organs or lesions in magnetic resonance (MR) images, which can be useful in the clinical diagnosis and treatment of cancers. METHODS First, a selective feature interaction (SFI) module is designed to selectively extract similar features of the sequence images based on similarity interaction. Second, a multi-scale guided feature reconstruction (MGFR) module is designed to reconstruct low-level semantic features and focus on small targets and the edges of the pancreas. Third, to reduce the manual annotation of large amounts of data, a semi-supervised training method is also proposed. Uncertainty estimation is used to further improve the segmentation accuracy. RESULTS Three hundred ninety-five 3D MR images from 395 patients with pancreatic cancer, 259 3D MR images from 259 patients with brain tumors, and a four-fold cross-validation strategy are used to evaluate the proposed method. Compared to state-of-the-art deep learning segmentation networks, the proposed method achieves better segmentation of the pancreas or tumors in MR images. CONCLUSIONS SFI-Net can fuse dual-sequence MR images for abnormal pancreas or tumor segmentation. The proposed semi-supervised strategy can further improve the performance of SFI-Net.
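The abstract does not detail the uncertainty estimation used in the semi-supervised training, so the following is only a rough sketch of one common choice, Monte Carlo dropout, which produces a voxel-wise entropy map that could down-weight unreliable pseudo-labels on unlabeled images; here `model` stands for any segmentation network containing dropout layers:

    import torch

    def mc_dropout_prediction(model, image, passes=8):
        # Run several stochastic forward passes with dropout active and return the mean
        # class probabilities plus a voxel-wise entropy map (higher entropy = less certain).
        model.train()  # keep dropout layers active
        with torch.no_grad():
            probs = torch.stack([torch.softmax(model(image), dim=1) for _ in range(passes)])
        mean_prob = probs.mean(dim=0)
        entropy = -(mean_prob * torch.log(mean_prob + 1e-7)).sum(dim=1)
        return mean_prob, entropy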
Collapse
Affiliation(s)
- Wenxuan Xu: School of Electronic and Information Engineering, Soochow University, Jiangsu, China
- Yun Bian: Department of Radiology, Changhai Hospital, The Navy Military Medical University, Shanghai, China
- Yuxuan Lu: School of Electronic and Information Engineering, Soochow University, Jiangsu, China
- Qingquan Meng: School of Electronic and Information Engineering, Soochow University, Jiangsu, China
- Weifang Zhu: School of Electronic and Information Engineering, Soochow University, Jiangsu, China
- Fei Shi: School of Electronic and Information Engineering, Soochow University, Jiangsu, China
- Xinjian Chen: School of Electronic and Information Engineering, Soochow University, Jiangsu, China
- Chengwei Shao: Department of Radiology, Changhai Hospital, The Navy Military Medical University, Shanghai, China
- Dehui Xiang: School of Electronic and Information Engineering, Soochow University, Jiangsu, China
Collapse
|
23
|
Zhou T, Ruan S, Hu H. A literature survey of MR-based brain tumor segmentation with missing modalities. Comput Med Imaging Graph 2023; 104:102167. [PMID: 36584536 DOI: 10.1016/j.compmedimag.2022.102167] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/25/2022] [Revised: 11/01/2022] [Accepted: 12/22/2022] [Indexed: 12/28/2022]
Abstract
Multimodal MR brain tumor segmentation is one of the hottest topics in the medical image processing community. However, acquiring the complete set of MR modalities is not always possible in clinical practice, due to acquisition protocols, image corruption, scanner availability, scanning cost, or allergies to certain contrast materials. The missing information can constrain brain tumor diagnosis, monitoring, treatment planning, and prognosis. Thus, it is highly desirable to develop brain tumor segmentation methods that address the missing-modality problem. Based on recent advancements, in this review we provide a detailed analysis of the missing-modality issue in MR-based brain tumor segmentation. First, we briefly introduce the biomedical background concerning brain tumors, MR imaging techniques, and the current challenges in brain tumor segmentation. Then, we provide a taxonomy of the state-of-the-art methods with five categories, namely image synthesis-based methods, latent feature space-based models, multi-source correlation-based methods, knowledge distillation-based methods, and domain adaptation-based methods. In addition, the principles, architectures, benefits, and limitations of each method are elaborated. Following that, the corresponding datasets and widely used evaluation metrics are described. Finally, we analyze the current challenges and provide a prospect for future development trends. This review aims to provide readers with thorough knowledge of the recent contributions to brain tumor segmentation with missing modalities and to suggest potential future directions.
Collapse
Affiliation(s)
- Tongxue Zhou: School of Information Science and Technology, Hangzhou Normal University, Hangzhou 311121, China
- Su Ruan: Université de Rouen Normandie, LITIS - QuantIF, Rouen 76183, France
- Haigen Hu: College of Computer Science and Technology, Zhejiang University of Technology, Hangzhou 310023, China; Key Laboratory of Visual Media Intelligent Processing Technology of Zhejiang Province, Hangzhou 310023, China.
Collapse
|
24
|
Tong J, Wang C. A dual tri-path CNN system for brain tumor segmentation. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2022.104411] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/27/2022]
|
25
|
Chang Y, Zheng Z, Sun Y, Zhao M, Lu Y, Zhang Y. DPAFNet: A Residual Dual-Path Attention-Fusion Convolutional Neural Network for Multimodal Brain Tumor Segmentation. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2022.104037] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/15/2022]
|
26
|
Tian W, Li D, Lv M, Huang P. Axial Attention Convolutional Neural Network for Brain Tumor Segmentation with Multi-Modality MRI Scans. Brain Sci 2022; 13:brainsci13010012. [PMID: 36671994 PMCID: PMC9856007 DOI: 10.3390/brainsci13010012] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/15/2022] [Revised: 12/13/2022] [Accepted: 12/18/2022] [Indexed: 12/24/2022] Open
Abstract
Accurately identifying tumors from MRI scans is of the utmost importance for clinical diagnostics and for planning brain tumor treatment. However, manual segmentation is a challenging and time-consuming process in practice and exhibits a high degree of variability between doctors. Therefore, an axial attention brain tumor segmentation network (AABTS-Net) is established in this paper to automatically segment tumor subregions from multi-modality MRIs. The axial attention mechanism is employed to capture richer semantic information, making it easier for the model to provide local-global contextual information by incorporating local and global feature representations while reducing computational complexity. A deep supervision mechanism is employed to avoid vanishing gradients and to guide the AABTS-Net to generate better feature representations. A hybrid loss is employed to handle the class imbalance of the dataset. Furthermore, we conduct comprehensive experiments on the BraTS 2019 and 2020 datasets. The proposed AABTS-Net shows greater robustness and accuracy, which indicates that the model can be employed in clinical practice and provides a new avenue for medical image segmentation systems.
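Axial attention factorizes full 2D self-attention into two 1D passes, one along each image axis, which is where the complexity reduction mentioned above comes from. A minimal sketch of the idea follows; the module and layer names are ours, not AABTS-Net's, and the paper's actual blocks are more elaborate:

    import torch
    import torch.nn as nn

    class AxialAttention2D(nn.Module):
        # Attend along each spatial axis in turn instead of over all H*W positions at once.
        def __init__(self, dim, heads=4):
            super().__init__()
            self.row_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
            self.col_attn = nn.MultiheadAttention(dim, heads, batch_first=True)

        def forward(self, x):                                    # x: (B, C, H, W)
            b, c, h, w = x.shape
            rows = x.permute(0, 2, 3, 1).reshape(b * h, w, c)    # attention along the width axis
            rows, _ = self.row_attn(rows, rows, rows)
            x = rows.reshape(b, h, w, c)
            cols = x.permute(0, 2, 1, 3).reshape(b * w, h, c)    # attention along the height axis
            cols, _ = self.col_attn(cols, cols, cols)
            return cols.reshape(b, w, h, c).permute(0, 3, 2, 1)  # back to (B, C, H, W)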
Collapse
Affiliation(s)
- Weiwei Tian: Shandong Key Laboratory of Medical Physics and Image Processing, Shandong Institute of Industrial Technology for Health Sciences and Precision Medicine, School of Physics and Electronics, Shandong Normal University, Jinan 250358, China
- Dengwang Li (Correspondence): Shandong Key Laboratory of Medical Physics and Image Processing, Shandong Institute of Industrial Technology for Health Sciences and Precision Medicine, School of Physics and Electronics, Shandong Normal University, Jinan 250358, China
- Mengyu Lv: School of Environment and Energy, South China University of Technology, Guangzhou 510006, China
- Pu Huang: Shandong Key Laboratory of Medical Physics and Image Processing, Shandong Institute of Industrial Technology for Health Sciences and Precision Medicine, School of Physics and Electronics, Shandong Normal University, Jinan 250358, China
Collapse
|
27
|
SGC-ARANet: scale-wise global contextual axile reverse attention network for automatic brain tumor segmentation. APPL INTELL 2022. [DOI: 10.1007/s10489-022-04209-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/19/2022]
|
28
|
Huang Z, Zou S, Wang G, Chen Z, Shen H, Wang H, Zhang N, Zhang L, Yang F, Wang H, Liang D, Niu T, Zhu X, Hu Z. ISA-Net: Improved spatial attention network for PET-CT tumor segmentation. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2022; 226:107129. [PMID: 36156438 DOI: 10.1016/j.cmpb.2022.107129] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/14/2022] [Revised: 07/06/2022] [Accepted: 09/13/2022] [Indexed: 06/16/2023]
Abstract
BACKGROUND AND OBJECTIVE Achieving accurate and automated tumor segmentation plays an important role in both clinical practice and radiomics research. Segmentation in medicine is currently often performed manually by experts, which is a laborious, expensive and error-prone task. Manual annotation relies heavily on the experience and knowledge of these experts, and there is considerable intra- and interobserver variation. Therefore, it is of great significance to develop a method that can automatically segment tumor target regions. METHODS In this paper, we propose a deep learning segmentation method based on multimodal positron emission tomography-computed tomography (PET-CT), which combines the high sensitivity of PET and the precise anatomical information of CT. We design an improved spatial attention network (ISA-Net) to increase the accuracy of PET or CT in detecting tumors, which uses multi-scale convolution operations to extract feature information and can highlight tumor region location information while suppressing non-tumor region location information. In addition, our network uses dual-channel inputs in the encoding stage and fuses them in the decoding stage, which can take advantage of the differences and complementarities between PET and CT. RESULTS We validated the proposed ISA-Net method on two clinical datasets, a soft tissue sarcoma (STS) and a head and neck tumor (HECKTOR) dataset, and compared it with other attention methods for tumor segmentation. The DSC scores of 0.8378 on the STS dataset and 0.8076 on the HECKTOR dataset show that ISA-Net achieves better segmentation performance and has better generalization. CONCLUSIONS The method proposed in this paper is a multi-modal medical image tumor segmentation approach that can effectively utilize the differences and complementarity between modalities. The method can also be applied to other multi-modal or single-modal data with proper adjustment.
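The abstract describes the spatial attention module only at a high level (multi-scale convolutions producing a map that highlights tumor locations and suppresses background), so the block below is a rough stand-in under those assumptions rather than ISA-Net's actual design; all layer choices and names are ours:

    import torch
    import torch.nn as nn

    class SpatialAttentionBlock(nn.Module):
        # Multi-scale convolutions produce a spatial mask that re-weights the feature map,
        # emphasizing likely tumor locations and suppressing the background.
        def __init__(self, channels):
            super().__init__()
            self.branch3 = nn.Conv2d(channels, channels // 2, kernel_size=3, padding=1)
            self.branch5 = nn.Conv2d(channels, channels // 2, kernel_size=5, padding=2)
            self.to_mask = nn.Sequential(nn.Conv2d(channels, 1, kernel_size=1), nn.Sigmoid())

        def forward(self, feat):
            multi_scale = torch.cat([self.branch3(feat), self.branch5(feat)], dim=1)
            mask = self.to_mask(multi_scale)   # (B, 1, H, W), values in [0, 1]
            return feat * mask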
Collapse
Affiliation(s)
- Zhengyong Huang: Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China; University of Chinese Academy of Sciences, Beijing, 101408, China
- Sijuan Zou: Department of Nuclear Medicine and PET, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, 430000, China
- Guoshuai Wang: Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China; University of Chinese Academy of Sciences, Beijing, 101408, China
- Zixiang Chen: Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China; Chinese Academy of Sciences Key Laboratory of Health Informatics, Shenzhen, 518055, China
- Hao Shen: Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China; University of Chinese Academy of Sciences, Beijing, 101408, China
- Haiyan Wang: Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China; University of Chinese Academy of Sciences, Beijing, 101408, China
- Na Zhang: Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China; Chinese Academy of Sciences Key Laboratory of Health Informatics, Shenzhen, 518055, China
- Lu Zhang: Brain Cognition and Brain Disease Institute (BCBDI), Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China; Shenzhen-Hong Kong Institute of Brain Science-Shenzhen Fundamental Research Institutions, Shenzhen, 518055, China
- Fan Yang: Brain Cognition and Brain Disease Institute (BCBDI), Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China; Shenzhen-Hong Kong Institute of Brain Science-Shenzhen Fundamental Research Institutions, Shenzhen, 518055, China
- Haining Wang: United Imaging Research Institute of Innovative Medical Equipment, Shenzhen, 518045, China
- Dong Liang: Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China; Chinese Academy of Sciences Key Laboratory of Health Informatics, Shenzhen, 518055, China
- Tianye Niu: Institute of Biomedical Engineering, Shenzhen Bay Laboratory, Shenzhen, 518118, China
- Xiaohua Zhu: Department of Nuclear Medicine and PET, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, 430000, China
- Zhanli Hu: Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China; Chinese Academy of Sciences Key Laboratory of Health Informatics, Shenzhen, 518055, China.
Collapse
|
29
|
Ramprasad MVS, Rahman MZU, Bayleyegn MD. A Deep Probabilistic Sensing and Learning Model for Brain Tumor Classification With Fusion-Net and HFCMIK Segmentation. IEEE OPEN JOURNAL OF ENGINEERING IN MEDICINE AND BIOLOGY 2022; 3:178-188. [PMID: 36712319 PMCID: PMC9870266 DOI: 10.1109/ojemb.2022.3217186] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/11/2022] [Revised: 10/14/2022] [Accepted: 10/17/2022] [Indexed: 11/06/2022] Open
Abstract
Goal: Implementation of an artificial intelligence-based medical diagnosis tool for brain tumor classification, called BTFSC-Net. Methods: Medical images are preprocessed using a hybrid probabilistic Wiener filter (HPWF). A deep learning convolutional neural network (DLCNN) is utilized to fuse MRI and CT images with robust edge analysis (REA) properties, which are used to identify the slopes and edges of the source images. Then, hybrid fuzzy c-means integrated k-means (HFCMIK) clustering is used to segment the disease-affected region from the fused image. Further, hybrid features such as texture, colour, and low-level features are extracted from the fused image using gray-level co-occurrence matrix (GLCM) and redundant discrete wavelet transform (RDWT) descriptors. Finally, a deep learning-based probabilistic neural network (DLPNN) is used to classify malignant and benign tumors. BTFSC-Net attained 99.21% segmentation accuracy and 99.46% classification accuracy. Conclusions: The simulations showed that BTFSC-Net outperformed existing methods.
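The exact formulation of the hybrid fuzzy c-means integrated k-means (HFCMIK) clustering is not given in the abstract; as a rough illustration of its fuzzy c-means ingredient only, a plain membership/centroid update on pixel intensities might look like the following (all names and parameter values are illustrative):

    import numpy as np

    def fuzzy_c_means(pixels, n_clusters=3, m=2.0, iters=50, seed=0):
        # Standard fuzzy c-means on a 1-D intensity vector: alternate between updating
        # weighted cluster centers and soft membership degrees.
        rng = np.random.default_rng(seed)
        x = np.asarray(pixels, dtype=float).reshape(-1, 1)
        u = rng.dirichlet(np.ones(n_clusters), size=len(x))       # memberships sum to 1 per pixel
        for _ in range(iters):
            um = u ** m
            centers = (um.T @ x) / um.sum(axis=0)[:, None]
            dist = np.abs(x - centers.T) + 1e-9                   # (N, C) distances to each center
            inv = dist ** (-2.0 / (m - 1.0))
            u = inv / inv.sum(axis=1, keepdims=True)
        return u.argmax(axis=1), centers.ravel()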
Collapse
Affiliation(s)
- M V S Ramprasad: Koneru Lakshmaiah Education Foundation (K L University), Guntur 522302, India; GITAM (Deemed to be University), Visakhapatnam, AP 522502, India
- Md Zia Ur Rahman: Department of Electronics and Communication Engineering, Koneru Lakshmaiah Education Foundation (K L University), Vaddeswaram, Guntur 522502, India
Collapse
|
30
|
Yang Q, Guo X, Chen Z, Woo PYM, Yuan Y. D2-Net: Dual Disentanglement Network for Brain Tumor Segmentation With Missing Modalities. IEEE TRANSACTIONS ON MEDICAL IMAGING 2022; 41:2953-2964. [PMID: 35576425 DOI: 10.1109/tmi.2022.3175478] [Citation(s) in RCA: 10] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/15/2023]
Abstract
Multi-modal Magnetic Resonance Imaging (MRI) can provide complementary information for automatic brain tumor segmentation, which is crucial for diagnosis and prognosis. However, missing-modality data are common in clinical practice and can cause the collapse of most previous methods that rely on complete modality data. Current state-of-the-art approaches cope with missing modalities by fusing multi-modal images and features to learn shared representations of tumor regions, which often ignores explicitly capturing the correlations among modalities and tumor regions. Inspired by the fact that modality information plays distinct roles in segmenting different tumor regions, we aim to explicitly exploit the correlations among various modality-specific information and tumor-specific knowledge for segmentation. To this end, we propose a Dual Disentanglement Network (D2-Net) for brain tumor segmentation with missing modalities, which consists of a modality disentanglement stage (MD-Stage) and a tumor-region disentanglement stage (TD-Stage). In the MD-Stage, a spatial-frequency joint modality contrastive learning scheme is designed to directly decouple modality-specific information from MRI data. To decompose tumor-specific representations and extract discriminative holistic features, we propose an affinity-guided dense tumor-region knowledge distillation mechanism in the TD-Stage that aligns the features of a disentangled binary teacher network with a holistic student network. By explicitly discovering relations among modalities and tumor regions, our model can learn sufficient information for segmentation even if some modalities are missing. Extensive experiments on the public BraTS-2018 database demonstrate the superiority of our framework over state-of-the-art methods in missing-modality situations. Codes are available at https://github.com/CityU-AIM-Group/D2Net.
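D2-Net's spatial-frequency joint modality contrastive scheme is more involved than a generic contrastive objective, but the contrastive ingredient itself can be illustrated with a standard InfoNCE loss over modality embeddings. The sketch below is only that generic ingredient, with assumed tensor shapes, and is not the paper's actual loss:

    import torch
    import torch.nn.functional as F

    def info_nce(anchor, positive, negatives, temperature=0.1):
        # anchor, positive: (B, D) paired embeddings; negatives: (K, D) shared negatives.
        # Pulls each anchor toward its positive and pushes it away from the negatives.
        anchor = F.normalize(anchor, dim=-1)
        positive = F.normalize(positive, dim=-1)
        negatives = F.normalize(negatives, dim=-1)
        pos = (anchor * positive).sum(dim=-1, keepdim=True) / temperature   # (B, 1)
        neg = anchor @ negatives.t() / temperature                          # (B, K)
        logits = torch.cat([pos, neg], dim=1)
        labels = torch.zeros(anchor.size(0), dtype=torch.long)              # positive is index 0
        return F.cross_entropy(logits, labels)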
Collapse
|
31
|
Abstract
Brain tumor segmentation is one of the most challenging problems in medical image analysis. The goal of brain tumor segmentation is to generate accurate delineation of brain tumor regions. In recent years, deep learning methods have shown promising performance in solving various computer vision problems, such as image classification, object detection and semantic segmentation. A number of deep learning based methods have been applied to brain tumor segmentation and achieved promising results. Considering the remarkable breakthroughs made by state-of-the-art technologies, we provide this survey with a comprehensive study of recently developed deep learning based brain tumor segmentation techniques. More than 150 scientific papers are selected and discussed in this survey, extensively covering technical aspects such as network architecture design, segmentation under imbalanced conditions, and multi-modality processes. We also provide insightful discussions for future development directions.
Collapse
|
32
|
Segmentation for Multimodal Brain Tumor Images Using Dual-Tree Complex Wavelet Transform and Deep Reinforcement Learning. COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE 2022; 2022:5369516. [PMID: 35655520 PMCID: PMC9152408 DOI: 10.1155/2022/5369516] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 04/04/2022] [Revised: 04/27/2022] [Accepted: 05/04/2022] [Indexed: 11/18/2022]
Abstract
Image segmentation is an effective tool for computer-aided medical treatment; retaining the detailed features and edges of the segmented image and improving segmentation accuracy are important goals. Therefore, a segmentation algorithm using deep reinforcement learning (DRL) and the dual-tree complex wavelet transform (DTCWT) for multimodal brain tumor images is proposed. First, the bivariate concept in DTCWT is used to determine whether image noise points belong to the real or imaginary region, and the noise probability is checked after calculation; second, the wavelet coefficients corresponding to the region where the noise is located are selected to transform the noise into normal pixel points with the bivariate model; then, the conditional probability of marker points occurring in the edge and center regions of the image is calculated with the target points, and an initial segmentation of the image is obtained from the known wavelet coefficients; finally, the segmentation framework is constructed using DRL, and the network is trained with a loss function to optimize the segmentation results and achieve accurate image segmentation. The method was evaluated on the BraTS2018 dataset, the CQ500 dataset, and a hospital brain tumor dataset. The results show that the proposed algorithm can effectively remove noise from multimodal brain tumor images, the segmented images retain detailed features and edges well, and the segmented images have high similarity to the original images. The highest information loss index of the segmentation results is only 0.18, the image boundary error is only about 0.3, and the F-value is high, which indicates that the proposed algorithm is accurate, operates efficiently, and has practical applicability.
Collapse
|
33
|
Huang P, Li D, Jiao Z, Wei D, Cao B, Mo Z, Wang Q, Zhang H, Shen D. Common Feature Learning for Brain Tumor MRI Synthesis by Context-aware Generative Adversarial Network. Med Image Anal 2022; 79:102472. [DOI: 10.1016/j.media.2022.102472] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/11/2021] [Revised: 02/18/2022] [Accepted: 05/03/2022] [Indexed: 11/28/2022]
|
34
|
Zhou T, Vera P, Canu S, Ruan S. Missing Data Imputation via Conditional Generator and Correlation Learning for Multimodal Brain Tumor Segmentation. Pattern Recognit Lett 2022. [DOI: 10.1016/j.patrec.2022.04.019] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/18/2022]
|
35
|
Brochet T, Lapuyade-Lahorgue J, Huat A, Thureau S, Pasquier D, Gardin I, Modzelewski R, Gibon D, Thariat J, Grégoire V, Vera P, Ruan S. A Quantitative Comparison between Shannon and Tsallis–Havrda–Charvat Entropies Applied to Cancer Outcome Prediction. ENTROPY 2022; 24:e24040436. [PMID: 35455101 PMCID: PMC9031340 DOI: 10.3390/e24040436] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 02/07/2022] [Revised: 03/18/2022] [Accepted: 03/18/2022] [Indexed: 11/16/2022]
Abstract
In this paper, we propose to quantitatively compare loss functions based on the parameterized Tsallis–Havrda–Charvat entropy and the classical Shannon entropy for the training of a deep network in the case of small datasets, which are usually encountered in medical applications. Shannon cross-entropy is widely used as a loss function for most neural networks applied to the segmentation, classification and detection of images. Shannon entropy is a particular case of Tsallis–Havrda–Charvat entropy. In this work, we compare these two entropies through a medical application for predicting recurrence in patients with head–neck and lung cancers after treatment. Based on both CT images and patient information, a multitask deep neural network is proposed to perform a recurrence prediction task, using cross-entropy as a loss function, and an image reconstruction task. Tsallis–Havrda–Charvat cross-entropy is a parameterized cross-entropy with the parameter α; Shannon entropy is the particular case of Tsallis–Havrda–Charvat entropy for α=1. The influence of this parameter on the final prediction results is studied. In this paper, the experiments are conducted on two datasets including in total 580 patients, of whom 434 suffered from head–neck cancers and 146 from lung cancers. The results show that Tsallis–Havrda–Charvat entropy can achieve better performance in terms of prediction accuracy for some values of α.
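One common parameterization of the Tsallis–Havrda–Charvat cross-entropy is H_alpha(p, q) = (1/(alpha - 1)) * sum_i p_i * (1 - q_i^(alpha - 1)), which tends to the Shannon cross-entropy -sum_i p_i ln q_i as alpha approaches 1. The paper's exact formulation may differ, so the sketch below is only illustrative of this parameterization:

    import numpy as np

    def thc_cross_entropy(p, q, alpha, eps=1e-12):
        # Tsallis-Havrda-Charvat cross-entropy in one common parameterization;
        # approaches the Shannon cross-entropy as alpha -> 1.
        p = np.asarray(p, dtype=float)
        q = np.clip(np.asarray(q, dtype=float), eps, 1.0)
        if abs(alpha - 1.0) < 1e-8:
            return float(-(p * np.log(q)).sum())            # Shannon limit
        return float((p * (1.0 - q ** (alpha - 1.0))).sum() / (alpha - 1.0))

    # Sanity check: values for alpha near 1 approach the Shannon cross-entropy.
    p, q = [1.0, 0.0], [0.7, 0.3]
    print(thc_cross_entropy(p, q, 1.001), thc_cross_entropy(p, q, 1.0))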
Collapse
Affiliation(s)
- Thibaud Brochet: LITIS, Quantif, University of Rouen, 76000 Rouen, France
- Jérôme Lapuyade-Lahorgue: LITIS, Quantif, University of Rouen, 76000 Rouen, France
- Alexandre Huat: LITIS, Quantif, University of Rouen, 76000 Rouen, France; Centre Henri Becquerel, 76038 Rouen, France; Société Aquilab, 59120 Lille, France
- Sébastien Thureau: LITIS, Quantif, University of Rouen, 76000 Rouen, France; Centre Henri Becquerel, 76038 Rouen, France
- David Pasquier: Département de Radiothérapie, Centre Oscar Lambret, 59000 Lille, France
- Isabelle Gardin: LITIS, Quantif, University of Rouen, 76000 Rouen, France; Centre Henri Becquerel, 76038 Rouen, France
- Romain Modzelewski: LITIS, Quantif, University of Rouen, 76000 Rouen, France; Centre Henri Becquerel, 76038 Rouen, France
- Juliette Thariat: Département de Radiothérapie, CLCC Francois Baclesse, 14000 Caen, France
- Vincent Grégoire: Département de Radiothérapie, Centre Léon Berard, 69008 Lyon, France
- Pierre Vera: LITIS, Quantif, University of Rouen, 76000 Rouen, France; Centre Henri Becquerel, 76038 Rouen, France
- Su Ruan (Correspondence): LITIS, Quantif, University of Rouen, 76000 Rouen, France
Collapse
|
36
|
Zhang TC, Zhang J, Chen SC, Saada B. A Novel Prediction Model for Brain Glioma Image Segmentation Based on the Theory of Bose-Einstein Condensate. Front Med (Lausanne) 2022; 9:794125. [PMID: 35372409 PMCID: PMC8971582 DOI: 10.3389/fmed.2022.794125] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/13/2021] [Accepted: 01/14/2022] [Indexed: 11/26/2022] Open
Abstract
Background: The input image for blurry glioma image segmentation is usually very unclear, and it is difficult to obtain an accurate contour line from the segmentation. The main challenge facing researchers is to correctly determine the region of the glioma image to which the points on the contour line belong. This article highlights the mechanism of glioma formation and provides an image segmentation prediction model to assist in the accurate division of glioma contour points. The proposed prediction model, associated with the process of glioma formation, is innovative and challenging. Bose-Einstein Condensate (BEC) is a microscopic quantum phenomenon in which atoms condense to the ground energy state as the temperature approaches absolute zero. In this article, we propose a BEC kernel function and a novel prediction model based on the BEC kernel to detect the relationship between the BEC process and the formation of a brain glioma. Furthermore, the theoretical derivation and proof of the prediction model are given, from micro to macro, through quantum mechanics, the wave and oscillation of glioma, and statistical distribution laws. The prediction model is a distinct segmentation model guided by BEC theory for blurry glioma image segmentation. Results: Our approach is based on five tests. The first three tests aim to confirm the measuring ranges of T and μ in the BEC kernel; the results extend from -10 to 10, approximating the standard range to T ≤ 0 and μ from 0 to 6.7. Tests 4 and 5 are comparison tests. The comparison in Test 4 was based on various established clustering methods. The results show that our prediction model is the best among the ten existing methods in terms of the image evaluation parameters P, R, and F, except for one reference whose mean F-value lies between 0.88 and 0.93, while our approach returns values between 0.85 and 0.99. Test 5 further compares our results, especially with CNN (convolutional neural network) methods, on the Brain Tumor Segmentation (BraTS) and clinical patient datasets. Our results were also better than all reference tests. In addition, the proposed prediction model with the BEC kernel is feasible and has comparative validity in glioma image segmentation. Conclusions: Theoretical derivation and experimental verification show that the prediction model based on the BEC kernel can solve the problem of accurate segmentation of blurry glioma images. It demonstrates that the BEC kernel is a more feasible, valid, and accurate approach than many recent segmentation methods, and it is an advanced and innovative prediction model that deduces from micro BEC theory to macro glioma image segmentation.
Collapse
Affiliation(s)
- Tian Chi Zhang: School of Information Science and Engineering, Chongqing Jiaotong University, Chongqing, China
- Jing Zhang (Correspondence): School of Information Science and Engineering, University of Jinan, Jinan, China; Shandong Provincial Key Laboratory of Network-Based Intelligent Computing, Jinan, China
- Shou Cun Chen: School of Information Science and Engineering, University of Jinan, Jinan, China; Shandong Provincial Key Laboratory of Network-Based Intelligent Computing, Jinan, China
- Bacem Saada: Cancer Institute, Eighth Affiliated Hospital of Sun Yat-sen University, Shenzhen, China; Department of Animal Biosciences, University of Guelph, Guelph, ON, Canada
Collapse
|
37
|
Xu W, Yang H, Zhang M, Cao Z, Pan X, Liu W. Brain tumor segmentation with corner attention and high-dimensional perceptual loss. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2021.103438] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/02/2022]
|
38
|
Kong D, Liu X, Wang Y, Li D, Xue J. 3D hierarchical dual-attention fully convolutional networks with hybrid losses for diverse glioma segmentation. Knowl Based Syst 2022. [DOI: 10.1016/j.knosys.2021.107692] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/19/2023]
|
39
|
Efficient tumor volume measurement and segmentation approach for CT image based on twin support vector machines. Neural Comput Appl 2022. [DOI: 10.1007/s00521-021-06769-y] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
|
40
|
Zhou T, Canu S, Vera P, Ruan S. Feature-enhanced generation and multi-modality fusion based deep neural network for brain tumor segmentation with missing MR modalities. Neurocomputing 2021. [DOI: 10.1016/j.neucom.2021.09.032] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
|