1
Tsukano M, Yamamoto Y, Shirai M, Takamura M, Matsuo K, Miyahara Y, Kaji Y. [Effect of Training Data Differences on Accuracy in MR Image Generation Using Pix2pix]. Nihon Hoshasen Gijutsu Gakkai Zasshi 2024; 80:1277-1287. [PMID: 39477465] [DOI: 10.6009/jjrt.2024-1487]
Abstract
PURPOSE Using a deep learning-based magnetic resonance (MR) image generation technique, we investigated whether changing the training data pattern affects image generation accuracy. METHODS A pix2pix model was trained to generate T1-weighted images from T2-weighted or FLAIR images, using head MR images obtained at our hospital. We prepared 300 cases for each model and four training data patterns per model (a: 150 cases from one MR system, b: 300 cases from one MR system, c: 150 cases plus augmentation data from one MR system, and d: 300 cases from two MR systems). The augmentation data were images of the 150 cases rotated in the XY plane. For each pattern, the similarity between the generated images and the evaluation images was assessed using the peak signal-to-noise ratio (PSNR) and structural similarity (SSIM). RESULTS For both MR systems, PSNR and SSIM were higher for training dataset b than for training dataset a, and were lower for training dataset d. CONCLUSION MR image generation accuracy varied among training data patterns.
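As a minimal illustration of the two metrics named in this abstract, the sketch below scores one generated/reference slice pair with scikit-image; the function name, normalisation step, and array variables are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_pair(generated: np.ndarray, reference: np.ndarray):
    """Score one generated T1-weighted slice against the acquired reference slice."""
    # Normalise both slices to [0, 1] so data_range is well defined.
    generated = (generated - generated.min()) / (np.ptp(generated) + 1e-8)
    reference = (reference - reference.min()) / (np.ptp(reference) + 1e-8)
    psnr = peak_signal_noise_ratio(reference, generated, data_range=1.0)
    ssim = structural_similarity(reference, generated, data_range=1.0)
    return psnr, ssim

# Hypothetical usage: average over an evaluation set of slice pairs.
# scores = [evaluate_pair(g, r) for g, r in eval_pairs]
# mean_psnr, mean_ssim = np.mean(scores, axis=0)
```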
Affiliation(s)
- Yasushi Yamamoto
- Department of Radiology, Faculty of Medicine, Shimane University
- Masato Shirai
- Department of Intelligent Information Design, Faculty of Interdisciplinary Science and Engineering, Shimane University
- Yasushi Kaji
- Department of Radiology, Faculty of Medicine, Shimane University
2
Asadi F, Angsuwatanakul T, O’Reilly JA. Evaluating synthetic neuroimaging data augmentation for automatic brain tumour segmentation with a deep fully-convolutional network. IBRO Neurosci Rep 2024; 16:57-66. [PMID: 39007088] [PMCID: PMC11240293] [DOI: 10.1016/j.ibneur.2023.12.002]
Abstract
Gliomas observed in medical images require expert neuro-radiologist evaluation for treatment planning and monitoring, motivating the development of intelligent systems capable of automating aspects of tumour evaluation. Deep learning models for automatic image segmentation rely on the amount and quality of training data. In this study, we developed a neuroimaging synthesis technique to augment data for training fully-convolutional networks (U-nets) to perform automatic glioma segmentation. We used StyleGAN2-ada to simultaneously generate fluid-attenuated inversion recovery (FLAIR) magnetic resonance images and corresponding glioma segmentation masks. Synthetic data were successively added to the real training data (n = 2751) in fourteen increments of 1000 and used to train U-nets that were evaluated on held-out validation (n = 590) and test (n = 588) sets. U-nets were trained with and without geometric augmentation (translation, zoom and shear), and Dice coefficients were computed to evaluate segmentation performance. We also monitored the number of training iterations before stopping, total training time, and time per iteration to evaluate the computational costs associated with training each U-net. Synthetic data augmentation yielded marginal improvements in Dice coefficients (validation set +0.0409, test set +0.0355), whereas geometric augmentation improved generalization (standard deviation between training, validation and test set performances of 0.01 with, and 0.04 without, geometric augmentation). Based on the modest performance gains for automatic glioma segmentation, we find it hard to justify the computational expense of developing a synthetic image generation pipeline. Future work may seek to optimize the efficiency of synthetic data generation for augmentation of neuroimaging data.
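For reference, a small sketch of the Dice coefficient used above to score the U-net segmentations; the thresholding and variable names are assumptions, not the authors' code.

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray, threshold: float = 0.5) -> float:
    """Dice = 2|P ∩ T| / (|P| + |T|) for a predicted and a ground-truth tumour mask."""
    p = (pred >= threshold).astype(np.uint8)
    t = (truth >= threshold).astype(np.uint8)
    denom = p.sum() + t.sum()
    # Convention: an empty prediction matching an empty ground truth scores 1.0.
    return 1.0 if denom == 0 else 2.0 * float((p * t).sum()) / denom

# Hypothetical usage on a held-out case:
# score = dice_coefficient(unet_output, expert_mask)
```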
Affiliation(s)
- Fawad Asadi
- College of Biomedical Engineering, Rangsit University, Pathum Thani 12000, Thailand
- Jamie A. O’Reilly
- School of Engineering, King Mongkut’s Institute of Technology Ladkrabang, Bangkok 10520, Thailand
3
Khalil YA, Ayaz A, Lorenz C, Weese J, Pluim J, Breeuwer M. Multi-modal brain tumor segmentation via conditional synthesis with Fourier domain adaptation. Comput Med Imaging Graph 2024; 112:102332. [PMID: 38245925] [DOI: 10.1016/j.compmedimag.2024.102332]
Abstract
Accurate brain tumor segmentation is critical for diagnosis and treatment planning, for which multi-modal magnetic resonance imaging (MRI) is typically used. However, obtaining all required sequences and expertly labeled data for training is challenging and can result in decreased quality of segmentation models developed through automated algorithms. In this work, we examine the possibility of employing a conditional generative adversarial network (GAN) approach for synthesizing multi-modal images to train deep learning-based neural networks aimed at high-grade glioma (HGG) segmentation. The proposed GAN is conditioned on auxiliary brain tissue and tumor segmentation masks, allowing us to attain better accuracy and control of tissue appearance during synthesis. To reduce the domain shift between synthetic and real MR images, we additionally adapt the low-frequency Fourier space components of synthetic data, which reflect the style of the image, to those of real data. We demonstrate the impact of Fourier domain adaptation (FDA) on the training of 3D segmentation networks and attain significant improvements in both segmentation performance and prediction confidence. Similar outcomes are seen when such data are used as a training augmentation alongside the available real images. In fact, experiments on the BraTS2020 dataset reveal that models trained solely with synthetic data exhibit an improvement of up to 4% in Dice score when using FDA, while training with both real and FDA-processed synthetic data through augmentation results in an improvement of up to 5% in Dice compared to using real data alone. This study highlights the importance of considering image frequency in generative approaches for medical image synthesis and offers a promising approach to address data scarcity in medical imaging segmentation.
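A generic sketch of the low-frequency Fourier swap behind FDA, as one plausible reading of the adaptation step described above: the low-frequency amplitude spectrum of a synthetic 2D slice is replaced by that of a real slice while the phase is preserved. The beta window size and function names are assumptions, not the authors' exact pipeline.

```python
import numpy as np

def fda_low_freq_swap(synthetic: np.ndarray, real: np.ndarray, beta: float = 0.05) -> np.ndarray:
    """Transfer the 'style' (low-frequency amplitude) of a real 2D slice onto a synthetic one."""
    fft_syn = np.fft.fftshift(np.fft.fft2(synthetic))
    fft_real = np.fft.fftshift(np.fft.fft2(real))

    amp_syn, phase_syn = np.abs(fft_syn), np.angle(fft_syn)
    amp_real = np.abs(fft_real)

    # Replace a centred low-frequency square of the amplitude spectrum.
    h, w = synthetic.shape
    b = int(min(h, w) * beta)
    cy, cx = h // 2, w // 2
    amp_syn[cy - b:cy + b, cx - b:cx + b] = amp_real[cy - b:cy + b, cx - b:cx + b]

    # Recombine the adapted amplitude with the original phase and invert the FFT.
    adapted = np.fft.ifft2(np.fft.ifftshift(amp_syn * np.exp(1j * phase_syn)))
    return np.real(adapted)
```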
Affiliation(s)
- Yasmina Al Khalil
- Biomedical Engineering Department, Eindhoven University of Technology, Eindhoven, The Netherlands.
- Aymen Ayaz
- Biomedical Engineering Department, Eindhoven University of Technology, Eindhoven, The Netherlands.
- Jürgen Weese
- Philips Research Laboratories, Hamburg, Germany.
- Josien Pluim
- Biomedical Engineering Department, Eindhoven University of Technology, Eindhoven, The Netherlands.
- Marcel Breeuwer
- Biomedical Engineering Department, Eindhoven University of Technology, Eindhoven, The Netherlands; Philips Healthcare, Best, The Netherlands.
4
Saluja S, Trivedi MC, Sarangdevot SS. Advancing glioma diagnosis: Integrating custom U-Net and VGG-16 for improved grading in MR imaging. Math Biosci Eng 2024; 21:4328-4350. [PMID: 38549330] [DOI: 10.3934/mbe.2024191]
Abstract
In the realm of medical imaging, the precise segmentation and classification of gliomas represent fundamental challenges with profound clinical implications. Leveraging the BraTS 2018 dataset as a standard benchmark, this study delves into the potential of advanced deep learning models for addressing these challenges. We propose a novel approach that integrates a customized U-Net for segmentation and VGG-16 for classification. The U-Net, with its tailored encoder-decoder pathways, accurately identifies glioma regions, thus improving tumor localization. The fine-tuned VGG-16, featuring a customized output layer, precisely differentiates between low-grade and high-grade gliomas. To ensure consistency in data pre-processing, a standardized methodology involving gamma correction, data augmentation, and normalization is introduced. This integration surpasses existing methods, offering significantly improved glioma diagnosis, validated by high segmentation Dice scores (WT: 0.96, TC: 0.92, ET: 0.89) and a remarkable overall classification accuracy of 97.89%. The experimental findings underscore the potential of integrating deep learning-based methodologies for tumor segmentation and classification in enhancing glioma diagnosis and formulating subsequent treatment strategies.
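A hedged Keras sketch of a VGG-16 backbone with a customised binary output layer for low-grade/high-grade grading, in the spirit of the classifier described above; the input size, frozen backbone, and dense head are illustrative assumptions rather than the paper's configuration.

```python
import tensorflow as tf

def build_grading_model(input_shape=(224, 224, 3)) -> tf.keras.Model:
    # ImageNet-pretrained VGG-16 used as a fixed feature extractor.
    base = tf.keras.applications.VGG16(include_top=False, weights="imagenet",
                                       input_shape=input_shape, pooling="avg")
    base.trainable = False  # fine-tune only the new head in a first pass
    x = tf.keras.layers.Dense(256, activation="relu")(base.output)
    x = tf.keras.layers.Dropout(0.5)(x)
    # Single sigmoid unit: 0 = low-grade glioma, 1 = high-grade glioma.
    output = tf.keras.layers.Dense(1, activation="sigmoid", name="grade")(x)
    model = tf.keras.Model(base.input, output)
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model
```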
Affiliation(s)
- Sonam Saluja
- Department of Computer Science and Engineering, National Institute of Technology Agartala, Tripura, 799046, India
- Munesh Chandra Trivedi
- Department of Computer Science and Engineering, National Institute of Technology Agartala, Tripura, 799046, India
5
Schmidt EK, Krishnan C, Onuoha E, Gregory AV, Kline TL, Mrug M, Cardenas C, Kim H. Deep learning-based automated kidney and cyst segmentation of autosomal dominant polycystic kidney disease using single vs. multi-institutional data. Clin Imaging 2024; 106:110068. [PMID: 38101228] [DOI: 10.1016/j.clinimag.2023.110068]
Abstract
PURPOSE This study aimed to investigate whether a deep learning model trained with a single institution's data has comparable accuracy to one trained with multi-institutional data for segmenting kidney and cyst regions in magnetic resonance (MR) images of patients affected by autosomal dominant polycystic kidney disease (ADPKD). METHODS We used TensorFlow with a custom Keras UNet on 2D slices of 756 MR images of kidneys with ADPKD obtained from four institutions in the Consortium for Radiologic Imaging Studies of Polycystic Kidney Disease (CRISP) study. The ground truth was determined via a manual plus global thresholding method. Five models were trained with 80% of all institutional data (n = 604) or with each institution's data (n = 232, 172, 148, or 52), respectively, validated with 10%, and tested on an unseen 10% of the data. Model performance was evaluated using the Dice Similarity Coefficient (DSC). RESULTS The DSCs of the model trained with all institutional data ranged from 0.92 to 0.95 for kidney segmentation, only 1-2% higher than those of the models trained with single-institution data (0.90-0.93). In cyst segmentation, however, the DSCs of the model trained with all institutional data ranged from 0.83 to 0.89, which were 2-20% higher than those of the models trained with single-institution data (0.66-0.86). CONCLUSION The UNet, when trained with a single institutional dataset, exhibited accuracy similar to that of the model trained on a multi-institutional dataset. Segmentation accuracy increases with models trained on larger sample sizes, especially for the more complex cyst segmentation task.
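A compact sketch of a 2D Keras U-Net of the kind described in METHODS (an encoder-decoder with skip connections); the depth, filter counts, and input size are assumptions chosen for brevity and do not reproduce the study's custom architecture.

```python
import tensorflow as tf
from tensorflow.keras import layers

def conv_block(x, filters):
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return layers.Conv2D(filters, 3, padding="same", activation="relu")(x)

def build_unet(input_shape=(256, 256, 1)) -> tf.keras.Model:
    inputs = tf.keras.Input(shape=input_shape)
    # Encoder
    c1 = conv_block(inputs, 32); p1 = layers.MaxPooling2D()(c1)
    c2 = conv_block(p1, 64);     p2 = layers.MaxPooling2D()(c2)
    # Bottleneck
    b = conv_block(p2, 128)
    # Decoder with skip connections
    u2 = layers.Conv2DTranspose(64, 2, strides=2, padding="same")(b)
    c3 = conv_block(layers.concatenate([u2, c2]), 64)
    u1 = layers.Conv2DTranspose(32, 2, strides=2, padding="same")(c3)
    c4 = conv_block(layers.concatenate([u1, c1]), 32)
    # One sigmoid channel for a binary kidney (or cyst) mask.
    outputs = layers.Conv2D(1, 1, activation="sigmoid")(c4)
    return tf.keras.Model(inputs, outputs)
```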
Affiliation(s)
- Emma K Schmidt
- Department of Biomedical Engineering, The University of Alabama at Birmingham, Birmingham, AL 35294, USA
- Chetana Krishnan
- Department of Biomedical Engineering, The University of Alabama at Birmingham, Birmingham, AL 35294, USA
- Ezinwanne Onuoha
- Department of Biomedical Engineering, The University of Alabama at Birmingham, Birmingham, AL 35294, USA
- Timothy L Kline
- Department of Radiology, Mayo Clinic, Rochester, MN 55902, USA
- Michal Mrug
- Department of Veterans Affairs Medical Center, Birmingham, AL 35233, USA; Department of Nephrology, The University of Alabama at Birmingham, Birmingham, AL 35294, USA
- Carlos Cardenas
- Department of Biomedical Engineering, The University of Alabama at Birmingham, Birmingham, AL 35294, USA; Department of Radiation Oncology, The University of Alabama at Birmingham, Birmingham, AL 35294, USA.
- Harrison Kim
- Department of Biomedical Engineering, The University of Alabama at Birmingham, Birmingham, AL 35294, USA; Department of Radiology, The University of Alabama at Birmingham, Birmingham, AL 35294, USA.
6
Pitarch C, Ungan G, Julià-Sapé M, Vellido A. Advances in the Use of Deep Learning for the Analysis of Magnetic Resonance Image in Neuro-Oncology. Cancers (Basel) 2024; 16:300. [PMID: 38254790] [PMCID: PMC10814384] [DOI: 10.3390/cancers16020300]
Abstract
Machine Learning is entering a phase of maturity, but its medical applications still lag behind in terms of practical use. The field of oncological radiology (and neuro-oncology in particular) is at the forefront of these developments, now boosted by the success of Deep-Learning methods for the analysis of medical images. This paper reviews in detail some of the most recent advances in the use of Deep Learning in this field, from the broader topic of the development of Machine-Learning-based analytical pipelines to specific instantiations of the use of Deep Learning in neuro-oncology; the latter including its use in the groundbreaking field of ultra-low field magnetic resonance imaging.
Affiliation(s)
- Carla Pitarch
- Department of Computer Science, Universitat Politècnica de Catalunya (UPC BarcelonaTech) and Intelligent Data Science and Artificial Intelligence (IDEAI-UPC) Research Center, 08034 Barcelona, Spain
- Eurecat, Digital Health Unit, Technology Centre of Catalonia, 08005 Barcelona, Spain
- Gulnur Ungan
- Departament de Bioquímica i Biologia Molecular and Institut de Biotecnologia i Biomedicina (IBB), Universitat Autònoma de Barcelona (UAB), 08193 Barcelona, Spain
- Centro de Investigación Biomédica en Red (CIBER), 28029 Madrid, Spain
- Margarida Julià-Sapé
- Departament de Bioquímica i Biologia Molecular and Institut de Biotecnologia i Biomedicina (IBB), Universitat Autònoma de Barcelona (UAB), 08193 Barcelona, Spain
- Centro de Investigación Biomédica en Red (CIBER), 28029 Madrid, Spain
- Alfredo Vellido
- Department of Computer Science, Universitat Politècnica de Catalunya (UPC BarcelonaTech) and Intelligent Data Science and Artificial Intelligence (IDEAI-UPC) Research Center, 08034 Barcelona, Spain
- Centro de Investigación Biomédica en Red (CIBER), 28029 Madrid, Spain
7
Mowlani K, Jafari Shahbazzadeh M, Hashemipour M. Segmentation and classification of brain tumors using fuzzy 3D highlighting and machine learning. J Cancer Res Clin Oncol 2023; 149:9025-9041. [PMID: 37166578] [DOI: 10.1007/s00432-023-04754-7]
Abstract
PURPOSE Brain tumors are among the most lethal forms of cancer, so early diagnosis is crucial. With machine learning algorithms, radiologists can now make accurate diagnoses of tumors without resorting to invasive procedures. There are, however, a number of obstacles to overcome. First, designing the most effective deep learning framework for classifying brain tumors is a significant challenge. Furthermore, manually segmenting the brain tumor is a time-consuming and challenging process that requires the expertise of medical professionals. METHODS Here, we discuss the use of a fuzzy 3D highlighting method for the segmentation of brain tumors and the selection of suspect tumor areas based on the geometric characteristics of MRI scans. After features were extracted from the brain tumor region, the images were classified using two machine learning methods: a support vector machine optimized with the grasshopper optimization algorithm (GOA-SVM), and a deep neural network based on features selected with the genetic algorithm (GA-DNN). Both classify brain tumors as benign or malignant. Implemented on the MATLAB platform, the proposed method is evaluated using performance metrics such as sensitivity, accuracy, specificity, and the Youden index. RESULTS The proposed strategy proved significantly superior to the alternatives, with an average classification accuracy of 97.53% for GA-DNN and 97.65% for GOA-SVM. CONCLUSION These findings may provide a fast and useful step toward detecting lesions alongside cancerous tumors in neurological diagnosis.
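A hedged Python sketch of the classification stage described above, with a plain grid search standing in for the grasshopper optimization algorithm when tuning the SVM hyperparameters; the feature matrix, labels, and split are hypothetical, and the original work was implemented in MATLAB.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

def train_tumor_classifier(features: np.ndarray, labels: np.ndarray):
    """features: (n_cases, n_features) extracted from the segmented tumour region;
    labels: 0 = benign, 1 = malignant."""
    X_train, X_test, y_train, y_test = train_test_split(
        features, labels, test_size=0.2, stratify=labels, random_state=0)
    pipe = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    # Grid search used here purely for illustration in place of GOA-based tuning.
    grid = GridSearchCV(pipe, {"svc__C": [0.1, 1, 10, 100],
                               "svc__gamma": ["scale", 0.01, 0.001]}, cv=5)
    grid.fit(X_train, y_train)
    return grid.best_estimator_, grid.score(X_test, y_test)
```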
Affiliation(s)
- Khalil Mowlani
- Department of Computer Engineering, Kerman Branch, Islamic Azad University, Kerman, Iran
- Maliheh Hashemipour
- Department of Computer Engineering, Kerman Branch, Islamic Azad University, Kerman, Iran