1. Onakpojeruo EP, Mustapha MT, Ozsahin DU, Ozsahin I. A Comparative Analysis of the Novel Conditional Deep Convolutional Neural Network Model, Using Conditional Deep Convolutional Generative Adversarial Network-Generated Synthetic and Augmented Brain Tumor Datasets for Image Classification. Brain Sci 2024; 14:559. PMID: 38928561; PMCID: PMC11201720; DOI: 10.3390/brainsci14060559.
Abstract
Disease prediction is greatly challenged by the scarcity of datasets and by privacy concerns associated with real medical data. An approach that stands out for circumventing this hurdle is the use of synthetic data generated with Generative Adversarial Networks (GANs). GANs can increase data volume while generating synthetic datasets that have no direct link to personal information. This study pioneers the use of GANs to create synthetic datasets, alongside datasets augmented with traditional augmentation techniques, for our binary classification task. The primary aim of this research was to evaluate the performance of our novel Conditional Deep Convolutional Neural Network (C-DCNN) model in classifying brain tumors by leveraging these augmented and synthetic datasets. We utilized advanced GAN models, including a Conditional Deep Convolutional Generative Adversarial Network (conditional DCGAN), to produce synthetic data that retained essential characteristics of the original datasets while ensuring privacy protection. Our C-DCNN model was trained on both augmented and synthetic datasets, and its performance was benchmarked against state-of-the-art models such as ResNet50, VGG16, VGG19, and InceptionV3. The evaluation metrics demonstrated that our C-DCNN model achieved accuracy, precision, recall, and F1 scores of 99% on both synthetic and augmented images, outperforming the comparative models. The findings of this study highlight the potential of GAN-generated synthetic data for enhancing the training of machine learning models for medical image classification, particularly in scenarios where available data are limited. This approach not only improves model accuracy but also addresses privacy concerns, making it a viable solution for real-world clinical applications in disease prediction and diagnosis.
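The abstract does not include code, but the core conditioning mechanism a conditional DCGAN typically relies on can be sketched in a few lines: the class label is one-hot encoded and concatenated to the noise vector before it enters the generator. This is a minimal, hypothetical illustration; the names and dimensions below are assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def conditional_generator_input(noise_dim, n_classes, label, rng):
    """Build the conditioned latent vector used by a conditional GAN:
    the class label is one-hot encoded and concatenated to the noise,
    so the generator can be asked for samples of a specific class."""
    z = rng.standard_normal(noise_dim)   # random noise vector
    y = np.zeros(n_classes)              # one-hot label encoding
    y[label] = 1.0
    return np.concatenate([z, y])

# Two classes (e.g. tumor / no tumor), as in a binary brain-tumor task.
latent = conditional_generator_input(noise_dim=100, n_classes=2, label=1, rng=rng)
# latent.shape is (102,): 100 noise dimensions plus 2 label dimensions.
```

In a full DCGAN this conditioned vector would be reshaped and passed through transposed-convolution layers; the discriminator receives the same label so that generator and discriminator are both class-aware.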
Affiliation(s)
- Efe Precious Onakpojeruo: Department of Medical Diagnostic Imaging, College of Health Sciences, University of Sharjah, Sharjah 27272, United Arab Emirates; Research Institute of Medical and Health Sciences, University of Sharjah, Sharjah 27272, United Arab Emirates
- Mubarak Taiwo Mustapha: Department of Medical Diagnostic Imaging, College of Health Sciences, University of Sharjah, Sharjah 27272, United Arab Emirates; Research Institute of Medical and Health Sciences, University of Sharjah, Sharjah 27272, United Arab Emirates
- Dilber Uzun Ozsahin: Department of Medical Diagnostic Imaging, College of Health Sciences, University of Sharjah, Sharjah 27272, United Arab Emirates; Operational Research Centre in Healthcare, Near East University, TRNC Mersin 10, Nicosia 99138, Turkey; Department of Biomedical Engineering, Near East University, TRNC Mersin 10, Nicosia 99138, Turkey
- Ilker Ozsahin: Department of Medical Diagnostic Imaging, College of Health Sciences, University of Sharjah, Sharjah 27272, United Arab Emirates; Brain Health Imaging Institute, Department of Radiology, Weill Cornell Medicine, New York, NY 10065, USA
2. Zhao X, Zang D, Wang S, Shen Z, Xuan K, Wei Z, Wang Z, Zheng R, Wu X, Li Z, Wang Q, Qi Z, Zhang L. sTBI-GAN: An adversarial learning approach for data synthesis on traumatic brain segmentation. Comput Med Imaging Graph 2024; 112:102325. PMID: 38228021; DOI: 10.1016/j.compmedimag.2024.102325.
Abstract
Automatic brain segmentation of magnetic resonance images (MRIs) from severe traumatic brain injury (sTBI) patients is critical for brain abnormality assessment and brain network analysis. Constructing an sTBI brain segmentation model requires manually annotated MR scans of sTBI patients, which is challenging because it is quite impractical to obtain sufficient annotations for sTBI images with large deformations and lesion erosion. Data augmentation techniques can be applied to alleviate the issue of limited training samples. However, conventional data augmentation strategies such as spatial and intensity transformation are unable to synthesize the deformations and lesions of traumatic brains, which limits the performance of the subsequent segmentation task. To address these issues, we propose a novel medical image inpainting model named sTBI-GAN to synthesize labeled sTBI MR scans by adversarial inpainting. The main strength of our sTBI-GAN method is that it generates sTBI images and corresponding labels simultaneously, which has not been achieved by previous inpainting methods for medical images. We first generate the inpainted image under the guidance of edge information in a coarse-to-fine manner, and then use the synthesized MR image as the prior for label inpainting. Furthermore, we introduce a registration-based template augmentation pipeline to increase the diversity of the synthesized image pairs and enhance the capacity of data augmentation. Experimental results show that the proposed sTBI-GAN method can synthesize high-quality labeled sTBI images, which greatly improves 2D and 3D traumatic brain segmentation performance compared with the alternatives. Code is available at .
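sTBI-GAN's inpainting is learned adversarially, but the coarse-to-fine idea itself can be illustrated with a deliberately simple, non-learned stand-in: a coarse stage fills the masked region with a global estimate, and a fine stage refines each masked pixel from its local context. This is purely illustrative and is not the authors' method.

```python
import numpy as np

def coarse_to_fine_inpaint(image, mask):
    """Toy two-stage inpainting: the coarse stage fills the masked region
    with the mean of the visible pixels; the 'fine' stage then replaces each
    masked pixel with its 4-neighborhood average (one Jacobi-style sweep)."""
    out = image.astype(float).copy()
    coarse = out[~mask].mean()
    out[mask] = coarse                       # stage 1: coarse global fill
    padded = np.pad(out, 1, mode="edge")
    neigh = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
             padded[1:-1, :-2] + padded[1:-1, 2:]) / 4.0
    out[mask] = neigh[mask]                  # stage 2: local refinement
    return out

img = np.arange(25, dtype=float).reshape(5, 5)
mask = np.zeros((5, 5), dtype=bool)
mask[2, 2] = True                            # one "lesioned" pixel to recover
filled = coarse_to_fine_inpaint(img, mask)
```

In the paper the coarse and fine stages are generator networks guided by edge maps, and the synthesized image then serves as the prior for inpainting the segmentation label, which this toy example does not attempt.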
Affiliation(s)
- Xiangyu Zhao: School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Di Zang: Department of Neurosurgery, Huashan Hospital, Shanghai Medical College, Fudan University, Shanghai, China; National Center for Neurological Disorders, Shanghai, China; Shanghai Key Laboratory of Brain Function and Restoration and Neural Regeneration, Shanghai, China; State Key Laboratory of Medical Neurobiology and MOE Frontiers Center for Brain Science, School of Basic Medical Sciences and Institutes of Brain Science, Fudan University, China
- Sheng Wang: School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Zhenrong Shen: School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Kai Xuan: School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Zeyu Wei: Department of Neurosurgery, Huashan Hospital, Shanghai Medical College, Fudan University, Shanghai, China; National Center for Neurological Disorders, Shanghai, China; Shanghai Key Laboratory of Brain Function and Restoration and Neural Regeneration, Shanghai, China; State Key Laboratory of Medical Neurobiology and MOE Frontiers Center for Brain Science, School of Basic Medical Sciences and Institutes of Brain Science, Fudan University, China
- Zhe Wang: Department of Neurosurgery, Huashan Hospital, Shanghai Medical College, Fudan University, Shanghai, China; National Center for Neurological Disorders, Shanghai, China; Shanghai Key Laboratory of Brain Function and Restoration and Neural Regeneration, Shanghai, China; State Key Laboratory of Medical Neurobiology and MOE Frontiers Center for Brain Science, School of Basic Medical Sciences and Institutes of Brain Science, Fudan University, China
- Ruizhe Zheng: Department of Neurosurgery, Huashan Hospital, Shanghai Medical College, Fudan University, Shanghai, China; National Center for Neurological Disorders, Shanghai, China; Shanghai Key Laboratory of Brain Function and Restoration and Neural Regeneration, Shanghai, China; State Key Laboratory of Medical Neurobiology and MOE Frontiers Center for Brain Science, School of Basic Medical Sciences and Institutes of Brain Science, Fudan University, China
- Xuehai Wu: Department of Neurosurgery, Huashan Hospital, Shanghai Medical College, Fudan University, Shanghai, China; National Center for Neurological Disorders, Shanghai, China; Shanghai Key Laboratory of Brain Function and Restoration and Neural Regeneration, Shanghai, China; State Key Laboratory of Medical Neurobiology and MOE Frontiers Center for Brain Science, School of Basic Medical Sciences and Institutes of Brain Science, Fudan University, China
- Zheren Li: School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Qian Wang: School of Biomedical Engineering, ShanghaiTech University, Shanghai, China
- Zengxin Qi: Department of Neurosurgery, Huashan Hospital, Shanghai Medical College, Fudan University, Shanghai, China; National Center for Neurological Disorders, Shanghai, China; Shanghai Key Laboratory of Brain Function and Restoration and Neural Regeneration, Shanghai, China; State Key Laboratory of Medical Neurobiology and MOE Frontiers Center for Brain Science, School of Basic Medical Sciences and Institutes of Brain Science, Fudan University, China
- Lichi Zhang: School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
3. Li W, Liu J, Wang S, Feng C. MTFN: multi-temporal feature fusing network with co-attention for DCE-MRI synthesis. BMC Med Imaging 2024; 24:47. PMID: 38373915; PMCID: PMC10875895; DOI: 10.1186/s12880-024-01201-y.
Abstract
BACKGROUND Dynamic Contrast Enhanced Magnetic Resonance Imaging (DCE-MRI) plays an important role in the diagnosis and treatment of breast cancer. However, obtaining all eight temporal images of DCE-MRI requires a long scanning time, which causes patient discomfort. To reduce this time, the multi-temporal feature fusing neural network with co-attention (MTFN) is proposed to generate the eighth temporal image of DCE-MRI, enabling its acquisition without scanning. METHODS In this paper, we propose the multi-temporal feature fusing neural network with co-attention (MTFN) for DCE-MRI synthesis, in which the co-attention module fully fuses the features of the first and third temporal images to obtain hybrid features. Co-attention explores long-range dependencies rather than only relationships between neighboring pixels, so the hybrid features are more helpful for generating the eighth temporal image. RESULTS We conduct experiments on a private breast DCE-MRI dataset from hospitals and on the multi-modal Brain Tumor Segmentation Challenge 2018 dataset (BraTS2018). Compared with existing methods, the experimental results show an improvement, and our method generates more realistic images. We also use the synthetic images to classify the molecular subtype of breast cancer: the accuracy on the original eighth-temporal images and on the generated images is 89.53% and 92.46%, respectively, an improvement of about 3%, and these classification results verify the practicability of the synthetic images.
CONCLUSIONS Subjective evaluation and objective image-quality metrics show the effectiveness of our method, which obtains comprehensive and useful information. The improvement in classification accuracy demonstrates that the images generated by our method are practical.
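The co-attention mechanism described above can be sketched with plain matrix operations: every position of one temporal feature map attends over all positions of the other, which is what gives it the long-range (non-local) dependencies the abstract mentions. This is a minimal illustrative sketch, not the MTFN implementation; shapes and names are assumptions.

```python
import numpy as np

def co_attention(f1, f3):
    """Toy co-attention: each position of the first-temporal features f1
    attends over all positions of the third-temporal features f3, so the
    fused output mixes long-range information from both time points.
    f1, f3: arrays of shape (n_positions, dim)."""
    scores = f1 @ f3.T / np.sqrt(f1.shape[1])         # pairwise affinities
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)     # softmax over f3 positions
    return f1 + weights @ f3                          # residual fusion

rng = np.random.default_rng(1)
f1 = rng.standard_normal((16, 8))                     # e.g. flattened 4x4 map
f3 = rng.standard_normal((16, 8))
fused = co_attention(f1, f3)                          # hybrid features, (16, 8)
```

Because every output row is a weighted sum over all rows of `f3`, distant positions can influence each other in a single step, unlike a small convolution kernel.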
Affiliation(s)
- Wei Li: Key Laboratory of Intelligent Computing in Medical Image MIIC, Northeastern University, Shenyang, China
- Jiaye Liu: School of Computer Science and Engineering, Northeastern University, Shenyang, China
- Shanshan Wang: School of Computer Science and Engineering, Northeastern University, Shenyang, China
- Chaolu Feng: Key Laboratory of Intelligent Computing in Medical Image MIIC, Northeastern University, Shenyang, China
4. Wu Z, Zhang X, Li F, Wang S, Li J. TransRender: a transformer-based boundary rendering segmentation network for stroke lesions. Front Neurosci 2023; 17:1259677. PMID: 37901438; PMCID: PMC10601640; DOI: 10.3389/fnins.2023.1259677.
Abstract
Vision transformer architectures attract widespread interest due to their robust ability to represent global features. As encoders, transformer-based methods achieve superior performance compared to convolutional neural networks and other popular networks in many medical image segmentation tasks. Due to the complex structure of the brain and the similar gray levels of healthy tissue and lesions, lesion segmentation suffers from over-smooth boundaries or inaccurate results. Existing methods, including transformers, use stacked convolutional layers as the decoder and treat each pixel uniformly as a grid cell, which is convenient for feature computation; however, they often neglect the high-frequency features of the boundary and focus excessively on region features. We propose an effective method for lesion boundary rendering called TransRender, which adaptively selects a series of important points and computes boundary features via point-based rendering. A transformer-based encoder is used to capture global information during the encoding stage. Several render modules efficiently map the encoded features of different levels back to the original spatial resolution by combining global and local features. Furthermore, a point-based function supervises the points generated by the render module, so that TransRender can continuously refine the uncertain region. We conducted substantial experiments on different stroke lesion segmentation datasets to demonstrate the efficiency of TransRender. Several evaluation metrics show that our method automatically segments stroke lesions with relatively high accuracy and low computational complexity.
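The point-selection step at the heart of point-based rendering can be illustrated simply: pick the pixels whose foreground probability is closest to 0.5, since those are the uncertain pixels that cluster along the predicted lesion boundary. This is a hedged sketch of the general idea, not the TransRender module itself.

```python
import numpy as np

def select_uncertain_points(prob_map, k):
    """Pick the k most uncertain pixels of a foreground-probability map.
    Uncertainty is highest where the probability is closest to 0.5,
    i.e. near the predicted boundary between lesion and background."""
    uncertainty = -np.abs(prob_map - 0.5)              # 0 at p=0.5, negative elsewhere
    flat = np.argsort(uncertainty.ravel())[::-1][:k]   # indices of k largest values
    rows, cols = np.unravel_index(flat, prob_map.shape)
    return list(zip(rows.tolist(), cols.tolist()))

# A sharp vertical edge: columns 0-1 background, column 2 ambiguous,
# columns 3-4 foreground. The ambiguous column is the "boundary".
prob = np.array([[0.1, 0.2, 0.5, 0.8, 0.9]] * 3)
points = select_uncertain_points(prob, k=3)            # all land in column 2
```

In a full rendering decoder, boundary features would then be recomputed only at these selected points, which is where the claimed savings in computation come from.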
Affiliation(s)
- Zelin Wu: College of Electronic Information and Optical Engineering, Taiyuan University of Technology, Taiyuan, China
- Xueying Zhang: College of Electronic Information and Optical Engineering, Taiyuan University of Technology, Taiyuan, China
- Fenglian Li: College of Electronic Information and Optical Engineering, Taiyuan University of Technology, Taiyuan, China
- Suzhe Wang: College of Electronic Information and Optical Engineering, Taiyuan University of Technology, Taiyuan, China
- Jiaying Li: The First Clinical Medical College, Shanxi Medical University, Taiyuan, China
5. Diao Y, Li F, Li Z. Joint learning-based feature reconstruction and enhanced network for incomplete multi-modal brain tumor segmentation. Comput Biol Med 2023; 163:107234. PMID: 37450967; DOI: 10.1016/j.compbiomed.2023.107234.
Abstract
Multimodal Magnetic Resonance Imaging (MRI) can provide valuable complementary information and substantially enhance the performance of brain tumor segmentation. However, certain modalities are commonly absent or missing during clinical diagnosis, which can significantly impair segmentation techniques that rely on complete modalities. Current advanced methods attempt to address this challenge by developing shared feature representations via modal fusion to handle different missing-modality situations. Considering the importance of missing-modality information in multimodal segmentation, this paper utilizes a feature reconstruction method to recover the missing information and proposes a joint learning-based feature reconstruction and enhancement method for incomplete-modality brain tumor segmentation. The method leverages an information learning mechanism to transfer information from the complete modalities to a single modality, enabling it to obtain complete brain tumor information even without the support of other modalities. Additionally, the method incorporates a module for reconstructing missing-modality features, which recovers the fused features of the absent modality by exploiting the abundant latent information in the available modalities. Furthermore, a feature enhancement mechanism improves the shared feature representation by utilizing the information recovered from the reconstructed missing modalities. These processes enable the method to obtain more comprehensive information about brain tumors under various missing-modality circumstances, thereby enhancing the model's robustness. The performance of the proposed model was evaluated on BraTS datasets and compared with other deep learning algorithms using Dice similarity scores. On the BraTS2018 dataset, the proposed algorithm achieved Dice similarity scores of 86.28%, 77.02%, and 59.64% for whole tumor, tumor core, and enhancing tumor, respectively.
These results demonstrate the superiority of our framework over state-of-the-art methods in missing-modality situations.
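The core premise of missing-modality feature reconstruction, that features of an absent modality can be recovered from the modalities that are present, can be demonstrated with a deliberately simple linear stand-in: fit a least-squares mapping from available-modality features to the missing modality's features on complete training pairs, then apply it when a modality is absent. The paper's reconstruction module is a learned deep network; everything below is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(2)

# Paired training features: three available modalities -> one missing modality.
available = rng.standard_normal((200, 3))
true_map = np.array([[0.5], [-1.0], [2.0]])
missing = available @ true_map                 # ground-truth linear relation

# "Train" the reconstruction as a least-squares mapping, a stand-in for
# the learned reconstruction module described in the abstract.
W, *_ = np.linalg.lstsq(available, missing, rcond=None)

# At test time a scan lacks the fourth modality; reconstruct its features
# from the three modalities that were acquired.
test_feats = rng.standard_normal((5, 3))
reconstructed = test_feats @ W                 # shape (5, 1)
```

Because the synthetic relation here is exactly linear, the fitted `W` recovers `true_map` to machine precision; real inter-modality relations are nonlinear, which is why the paper uses a learned network instead.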
Affiliation(s)
- Yueqin Diao: Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming 650500, China; Yunnan Key Laboratory of Artificial Intelligence, Kunming 650500, China
- Fan Li: Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming 650500, China; Yunnan Key Laboratory of Artificial Intelligence, Kunming 650500, China
- Zhiyuan Li: Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming 650500, China; Yunnan Key Laboratory of Artificial Intelligence, Kunming 650500, China
6. Luo J, Pan M, Mo K, Mao Y, Zou D. Emerging role of artificial intelligence in diagnosis, classification and clinical management of glioma. Semin Cancer Biol 2023; 91:110-123. PMID: 36907387; DOI: 10.1016/j.semcancer.2023.03.006.
Abstract
Glioma represents the dominant primary intracranial malignancy of the central nervous system. Artificial intelligence, which mainly comprises machine learning and deep learning computational approaches, presents a unique opportunity to enhance the clinical management of glioma by improving tumor segmentation, diagnosis, differentiation, grading, treatment, prediction of clinical outcomes (prognosis and recurrence), molecular characterization, clinical classification, characterization of the tumor microenvironment, and drug discovery. A growing body of recent studies applies artificial intelligence-based models to disparate data sources of glioma, covering imaging modalities, digital pathology, and high-throughput multi-omics data (especially emerging single-cell RNA sequencing and spatial transcriptomics). While these early findings are promising, future studies are required to standardize artificial intelligence-based models and to improve the generalizability and interpretability of their results. Despite these outstanding issues, targeted clinical application of artificial intelligence approaches in glioma will facilitate the development of precision medicine in this field. If these challenges can be overcome, artificial intelligence has the potential to profoundly change the way patients with, or at risk of, glioma are provided with more rational care.
Affiliation(s)
- Jiefeng Luo: Department of Neurology, The Second Affiliated Hospital of Guangxi Medical University, Nanning 530007, Guangxi, China
- Mika Pan: Department of Neurology, The Second Affiliated Hospital of Guangxi Medical University, Nanning 530007, Guangxi, China
- Ke Mo: Clinical Research Center, The Second Affiliated Hospital of Guangxi Medical University, Nanning 530007, Guangxi, China
- Yingwei Mao: Department of Biology, Pennsylvania State University, University Park, PA 16802, USA
- Donghua Zou: Department of Neurology, The Second Affiliated Hospital of Guangxi Medical University, Nanning 530007, Guangxi, China; Clinical Research Center, The Second Affiliated Hospital of Guangxi Medical University, Nanning 530007, Guangxi, China
7. Ranjbarzadeh R, Dorosti S, Jafarzadeh Ghoushchi S, Caputo A, Tirkolaee EB, Ali SS, Arshadi Z, Bendechache M. Breast tumor localization and segmentation using machine learning techniques: Overview of datasets, findings, and methods. Comput Biol Med 2023; 152:106443. PMID: 36563539; DOI: 10.1016/j.compbiomed.2022.106443.
Abstract
The Global Cancer Statistics 2020 reported breast cancer (BC) as the most commonly diagnosed cancer type. Early detection of this cancer would therefore reduce the risk of death from it. Breast imaging is one of the most frequently used techniques for locating cancerous cells or suspicious lesions. Computer-aided diagnosis (CAD) refers to a class of computer systems that assist experts in detecting abnormalities in medical images. In recent decades, CAD systems have applied deep learning (DL) and machine learning approaches to perform complex medical tasks in computer vision and to improve the decision-making ability of doctors and radiologists. The most popular and widely used image processing technique in CAD systems is segmentation, which consists of extracting the region of interest (ROI) through various techniques. This research provides a detailed description of the main categories of segmentation procedures, classified into three classes: supervised, unsupervised, and DL-based. The main aim of this work is to provide an overview of each of these techniques and discuss their pros and cons, helping researchers better understand them and choose the appropriate method for a given use case.
Affiliation(s)
- Ramin Ranjbarzadeh: School of Computing, Faculty of Engineering and Computing, Dublin City University, Ireland
- Shadi Dorosti: Department of Industrial Engineering, Urmia University of Technology, Urmia, Iran
- Annalina Caputo: School of Computing, Faculty of Engineering and Computing, Dublin City University, Ireland
- Sadia Samar Ali: Department of Industrial Engineering, Faculty of Engineering, King Abdulaziz University, Jeddah, Saudi Arabia
- Zahra Arshadi: Faculty of Electronics, Telecommunications and Physics Engineering, Polytechnic University, Turin, Italy
- Malika Bendechache: Lero & ADAPT Research Centres, School of Computer Science, University of Galway, Ireland