1. Fernandes JND, Cardoso VEM, Comesaña-Campos A, Pinheira A. Comprehensive Review: Machine and Deep Learning in Brain Stroke Diagnosis. Sensors (Basel) 2024;24:4355. [PMID: 39001134] [PMCID: PMC11244385] [DOI: 10.3390/s24134355]
Abstract
Brain stroke, or cerebrovascular accident, is a devastating medical condition that disrupts the blood supply to the brain, depriving it of oxygen and nutrients. According to the World Health Organization, 15 million people worldwide experience a stroke each year, resulting in approximately 5 million deaths and another 5 million permanent disabilities. The complex interplay of risk factors highlights the urgent need for sophisticated analytical methods to more accurately predict stroke risk and manage outcomes. Machine learning and deep learning technologies offer promising solutions by analyzing extensive datasets, including patient demographics, health records, and lifestyle choices, to uncover patterns and predictors not easily discernible by humans. These technologies enable advanced data processing, analysis, and fusion techniques for comprehensive health assessment. We conducted a comprehensive review of 25 review papers, published between 2020 and 2024 and selected according to PRISMA guidelines, on machine learning and deep learning applications in brain stroke diagnosis, focusing on classification, segmentation, and object detection. These reviews also explore the performance evaluation and validation of advanced sensor systems in these areas, enhancing predictive health monitoring and personalized care recommendations. We also provide a collection of the most relevant datasets used in brain stroke analysis. Finally, this review critically examines each domain, identifies current challenges, and proposes future research directions, emphasizing the potential of AI methods to transform health monitoring and patient care.
Affiliation(s)
- João N. D. Fernandes
- INESC TEC, 4200-465 Porto, Portugal
- Faculty of Engineering, University of Porto, 4200-465 Porto, Portugal
- Department of Computer Engineering, Superior Institute of Engineering of Porto, 4249-015 Porto, Portugal
- Vitor E. M. Cardoso
- Collaborative Laboratory for the Future Built Environment (BUILT CoLAB), Rua do Campo Alegre, 760, 4150-003 Porto, Portugal
- Department of Computer Engineering, Superior Institute of Engineering of Porto, 4249-015 Porto, Portugal
- Alberto Comesaña-Campos
- Department of Design in Engineering, University of Vigo, 36312 Vigo, Spain
- Design, Expert Systems and Artificial Intelligent Solutions Group (DESAINS), Galicia Sur Health Research Institute (IIS Galicia Sur), SERGAS-UVIGO, 36312 Vigo, Spain
- Alberto Pinheira
- Department of Computer Engineering, Superior Institute of Engineering of Porto, 4249-015 Porto, Portugal
- Department of Design in Engineering, University of Vigo, 36312 Vigo, Spain
- Design, Expert Systems and Artificial Intelligent Solutions Group (DESAINS), Galicia Sur Health Research Institute (IIS Galicia Sur), SERGAS-UVIGO, 36312 Vigo, Spain
- Center for Health Technologies and Information Systems Research—CINTESIS@RISE, Faculty of Medicine, University of Porto, 4200-450 Porto, Portugal
2. Muhammed A, Bakheet RA, Kenawy K, Ahmed AMA, Abdelhamid M, Soliman WG. Potential Role of Generative Adversarial Networks in Enhancing Brain Tumors. JCO Clin Cancer Inform 2024;8:e2300266. [PMID: 39028919] [DOI: 10.1200/cci.23.00266]
Abstract
PURPOSE Contrast enhancement is necessary for visualizing, diagnosing, and treating brain tumors. In this study, we aimed to examine the potential role of generative adversarial networks in producing artificial-intelligence-based contrast enhancement of tumors using a lightweight model. PATIENTS AND METHODS A retrospective study was conducted on magnetic resonance imaging scans of patients diagnosed with brain tumors between 2020 and 2023. A generative adversarial network was built to generate images that mimic the real contrast enhancement of these tumors. The performance of the network was evaluated quantitatively with VGG-16, ResNet, binary cross-entropy loss, mean absolute error, mean squared error, and structural similarity index measures. For the qualitative evaluation, nine cases were randomly selected from the test set and used to build a short satisfaction survey for experienced medical professionals. RESULTS One hundred twenty-nine patients with 156 scans were identified from the hospital database. The data were randomly split into training and validation sets (90%) and a test set (10%). The VGG loss for the training, validation, and test sets was 2,049.8, 2,632.6, and 4,276.9, respectively; the structural similarity index measure was 0.366, 0.356, and 0.3192, respectively. At the time of submission, 23 medical professionals had responded to the survey, with a median overall satisfaction score of 7 of 10. CONCLUSION Our network opens the door to using lightweight models for artificial contrast enhancement. Further research is necessary before this approach reaches clinical practicality.
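The pixel-wise metrics named above (MAE, MSE, SSIM) can be sketched in plain NumPy. This is an illustrative sketch, not the authors' code, and the simplified SSIM below uses whole-image statistics rather than the standard sliding Gaussian window, assuming images scaled to [0, 1]:

```python
import numpy as np

def mae(x, y):
    """Mean absolute error between two images."""
    return float(np.mean(np.abs(x - y)))

def mse(x, y):
    """Mean squared error between two images."""
    return float(np.mean((x - y) ** 2))

def global_ssim(x, y, data_range=1.0):
    """Simplified SSIM computed from whole-image statistics.
    The standard metric averages a sliding Gaussian window; this
    global variant is only a sketch of the same formula."""
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return float(((2 * mx * my + c1) * (2 * cov + c2)) /
                 ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2)))
```

With identical inputs the errors are 0 and the similarity is 1, so the test-set SSIM of 0.3192 reported above indicates a substantial residual mismatch on this scale.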
Affiliation(s)
- Amr Muhammed
- Clinical Oncology Department, Sohag University Hospital, Sohag, Egypt
- Rafaat A Bakheet
- Clinical Oncology Department, Sohag University Hospital, Sohag, Egypt
- Karam Kenawy
- Neurosurgery Department, Sohag University Hospital, Sohag, Egypt
- Ahmed M A Ahmed
- Clinical Oncology Department, Sohag University Hospital, Sohag, Egypt
3. Ayaz A, Al Khalil Y, Amirrajab S, Lorenz C, Weese J, Pluim J, Breeuwer M. Brain MR image simulation for deep learning based medical image analysis networks. Comput Methods Programs Biomed 2024;248:108115. [PMID: 38503072] [DOI: 10.1016/j.cmpb.2024.108115]
Abstract
BACKGROUND AND OBJECTIVE As large sets of annotated MRI data are needed for training and validating deep learning based medical image analysis algorithms, the lack of sufficient annotated data is a critical problem. A possible solution is the generation of artificial data by means of physics-based simulations. Existing brain simulation data are limited in terms of anatomical models, tissue classes, fixed tissue characteristics, MR sequences, and overall realism. METHODS We propose a realistic simulation framework that incorporates patient-specific phantoms and Bloch-equation-based analytical solutions for fast and accurate MRI simulation. A large number of labels are derived from open-source high-resolution T1w MRI data using a fully automated brain classification tool. The brain labels are taken as ground truth (GT) on which MR images are simulated using our framework. Moreover, we demonstrate that the T1w MR images generated by our framework, along with the GT annotations, can be used directly to train a 3D brain segmentation network. To evaluate our model further on a larger set of real multi-source MRI data without GT, we compared it to existing brain segmentation tools, FSL-FAST and SynthSeg. RESULTS Our framework generates 3D brain MRI for variable anatomy, sequence, contrast, SNR, and resolution. The WM/GM/CSF brain segmentation network trained only on simulated T1w data shows promising results on real MRI data from the MRBrainS18 challenge dataset, with Dice scores of 0.818/0.832/0.828. On OASIS data, our model performs close to FSL-FAST, both qualitatively and quantitatively, with Dice scores of 0.901/0.939/0.937. CONCLUSIONS Our proposed simulation framework is an initial step toward truly physics-based MRI image generation, providing the flexibility to generate large sets of variable MRI data for the desired anatomy, sequence, contrast, SNR, and resolution. Furthermore, the generated images can effectively train 3D brain segmentation networks, mitigating the reliance on real 3D annotated data.
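As a concrete (generic) example of the kind of Bloch-equation analytical solution such a simulation framework builds on, the steady-state spoiled gradient-echo (FLASH) signal has the closed form S = M0 sin(a) (1 - E1)/(1 - E1 cos(a)) exp(-TE/T2*), with E1 = exp(-TR/T1). The sketch below is not the paper's framework, and the tissue values are representative 3 T figures, not the paper's settings:

```python
import math

def spgr_signal(m0, t1, t2s, tr, te, flip_deg):
    """Steady-state spoiled gradient-echo (FLASH) signal from the
    analytical solution of the Bloch equations.
    All times in the same unit (e.g. ms); flip angle in degrees."""
    a = math.radians(flip_deg)
    e1 = math.exp(-tr / t1)
    return (m0 * math.sin(a) * (1.0 - e1) / (1.0 - e1 * math.cos(a))
            * math.exp(-te / t2s))

# Representative (assumed) 3 T values: white matter has the shorter T1,
# so at a short TR it appears brighter than gray matter.
wm = spgr_signal(m0=1.0, t1=800.0, t2s=50.0, tr=20.0, te=4.0, flip_deg=25.0)
gm = spgr_signal(m0=1.0, t1=1300.0, t2s=60.0, tr=20.0, te=4.0, flip_deg=25.0)
```

Because white matter has the shorter T1, it yields the stronger signal at this short TR, which is the mechanism behind simulated T1-weighted contrast.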
Affiliation(s)
- Aymen Ayaz
- Biomedical Engineering Department, Eindhoven University of Technology, Eindhoven, the Netherlands.
- Yasmina Al Khalil
- Biomedical Engineering Department, Eindhoven University of Technology, Eindhoven, the Netherlands
- Sina Amirrajab
- Biomedical Engineering Department, Eindhoven University of Technology, Eindhoven, the Netherlands
- Jürgen Weese
- Philips Research Laboratories, Hamburg, Germany
- Josien Pluim
- Biomedical Engineering Department, Eindhoven University of Technology, Eindhoven, the Netherlands
- Marcel Breeuwer
- Biomedical Engineering Department, Eindhoven University of Technology, Eindhoven, the Netherlands; MR R&D - Clinical Science, Philips Healthcare, Best, the Netherlands
4. Ahmed HS. Uncover This Tech Term: Generative Adversarial Networks. Korean J Radiol 2024;25:493-498. [PMID: 38627875] [PMCID: PMC11058428] [DOI: 10.3348/kjr.2023.1306]
Affiliation(s)
- H Shafeeq Ahmed
- Bangalore Medical College and Research Institute, Bangalore, India.
5. Khalighi S, Reddy K, Midya A, Pandav KB, Madabhushi A, Abedalthagafi M. Artificial intelligence in neuro-oncology: advances and challenges in brain tumor diagnosis, prognosis, and precision treatment. NPJ Precis Oncol 2024;8:80. [PMID: 38553633] [PMCID: PMC10980741] [DOI: 10.1038/s41698-024-00575-0]
Abstract
This review delves into the most recent advancements in applying artificial intelligence (AI) within neuro-oncology, specifically emphasizing work on gliomas, a class of brain tumors that represents a significant global health issue. AI has brought transformative innovations to brain tumor management, utilizing imaging, histopathological, and genomic tools for efficient detection, categorization, outcome prediction, and treatment planning. Across all facets of malignant brain tumor management (diagnosis, prognosis, and therapy), AI models outperform human evaluations in terms of accuracy and specificity. Their ability to discern molecular features from imaging may reduce reliance on invasive diagnostics and accelerate the time to molecular diagnosis. The review covers AI techniques from classical machine learning to deep learning, highlighting current applications and challenges. Promising directions for future research include multimodal data integration, generative AI, large medical language models, precise tumor delineation and characterization, and addressing racial and gender disparities. Adaptive, personalized treatment strategies are also emphasized for optimizing clinical outcomes. Ethical, legal, and social implications are discussed, advocating for transparency and fairness in the integration of AI into neuro-oncology and providing a holistic understanding of its transformative impact on patient care.
Affiliation(s)
- Sirvan Khalighi
- Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology and Emory University, Atlanta, GA, USA
- Kartik Reddy
- Department of Radiology, Emory University, Atlanta, GA, USA
- Abhishek Midya
- Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology and Emory University, Atlanta, GA, USA
- Krunal Balvantbhai Pandav
- Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology and Emory University, Atlanta, GA, USA
- Anant Madabhushi
- Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology and Emory University, Atlanta, GA, USA
- Atlanta Veterans Administration Medical Center, Atlanta, GA, USA
- Malak Abedalthagafi
- Department of Pathology and Laboratory Medicine, Emory University, Atlanta, GA, USA
- The Cell and Molecular Biology Program, Winship Cancer Institute, Atlanta, GA, USA
6. Kaba E, Vogl TJ. Can We Use Large Language Models for the Use of Contrast Media in Radiology? Acad Radiol 2024;31:752. [PMID: 38092589] [DOI: 10.1016/j.acra.2023.11.034]
Affiliation(s)
- Esat Kaba
- Department of Radiology, Recep Tayyip Erdogan University, Rize, Turkey (E.K.).
- Thomas J Vogl
- Department of Diagnostic and Interventional Radiology, University Hospital Frankfurt, Frankfurt, Germany (T.J.V.)
7. Ali H, Shah Z, Alam T, Wijayatunga P, Elyan E. Editorial: Recent advances in multimodal artificial intelligence for disease diagnosis, prognosis, and prevention. Front Radiol 2024;3:1349830. [PMID: 38268783] [PMCID: PMC10806116] [DOI: 10.3389/fradi.2023.1349830]
Affiliation(s)
- Hazrat Ali
- College of Science and Engineering, Hamad Bin Khalifa University, Doha, Qatar
- Zubair Shah
- College of Science and Engineering, Hamad Bin Khalifa University, Doha, Qatar
- Tanvir Alam
- College of Science and Engineering, Hamad Bin Khalifa University, Doha, Qatar
- Eyad Elyan
- School of Computing Science and Digital Media, Robert Gordon University, Aberdeen, United Kingdom
8. Ali H, Qureshi R, Shah Z. Artificial Intelligence-Based Methods for Integrating Local and Global Features for Brain Cancer Imaging: Scoping Review. JMIR Med Inform 2023;11:e47445. [PMID: 37976086] [PMCID: PMC10692876] [DOI: 10.2196/47445]
Abstract
BACKGROUND Transformer-based models are gaining popularity in medical imaging and cancer imaging applications. Many recent studies have demonstrated the use of transformer-based models for brain cancer imaging applications such as diagnosis and tumor segmentation. OBJECTIVE This study aims to review how different vision transformers (ViTs) contributed to advancing brain cancer diagnosis and tumor segmentation using brain image data. It examines the different architectures developed for brain tumor segmentation and explores how ViT-based models augmented the performance of convolutional neural networks for brain cancer imaging. METHODS This review performed the study search and selection following the PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews) guidelines. The search comprised 4 popular scientific databases: PubMed, Scopus, IEEE Xplore, and Google Scholar. The search terms were formulated to cover the interventions (ie, ViTs) and the target application (ie, brain cancer imaging). Title and abstract screening for study selection was performed by 2 reviewers independently and validated by a third reviewer; data extraction was performed by 2 reviewers and validated by a third reviewer. Finally, the data were synthesized using a narrative approach. RESULTS Of the 736 retrieved studies, 22 (3%) were included in this review. These studies were published in 2021 and 2022. The most commonly addressed task was tumor segmentation using ViTs; no study reported early detection of brain cancer. Among the different ViT architectures, Shifted Window (Swin) transformer-based architectures have recently become the most popular choice of the research community. Among the included architectures, UNet transformer and TransUNet had the highest number of parameters and thus needed a cluster of as many as 8 graphics processing units for model training. The brain tumor segmentation challenge dataset was the most popular dataset used in the included studies. ViTs were used in different combinations with convolutional neural networks to capture both the global and local context of the input brain imaging data. CONCLUSIONS It can be argued that the computational complexity of transformer architectures is a bottleneck to advancing the field and enabling clinical translation. This review provides the current state of knowledge on the topic, and its findings will be helpful for researchers in the field of medical artificial intelligence and its applications in brain cancer.
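The global-plus-local theme above can be made concrete with the first step of any ViT, patch embedding, which turns an image into a sequence of tokens that self-attention then relates globally. This is a minimal NumPy sketch; the patch size and embedding dimension are arbitrary illustrative choices, and the random matrix stands in for a learned projection:

```python
import numpy as np

def patchify(img, patch=4):
    """Split an (H, W) image into flattened non-overlapping patches.
    Returns (num_patches, patch*patch) tokens, the sequence a ViT
    attends over."""
    h, w = img.shape
    assert h % patch == 0 and w % patch == 0
    return (img.reshape(h // patch, patch, w // patch, patch)
               .transpose(0, 2, 1, 3)
               .reshape(-1, patch * patch))

def embed(tokens, dim=8, seed=0):
    """Linear projection of patch tokens into a dim-dimensional
    embedding; the learned weight matrix is mocked here as a fixed
    random matrix for illustration."""
    rng = np.random.default_rng(seed)
    w = rng.standard_normal((tokens.shape[1], dim)) / np.sqrt(tokens.shape[1])
    return tokens @ w
```

An 8x8 image with 4x4 patches yields 4 tokens of 16 pixels each, projected to 4 embedding vectors. A CNN branch, by contrast, aggregates local neighborhoods, which is why the included studies combine the two to capture both global and local context.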
Affiliation(s)
- Hazrat Ali
- College of Science and Engineering, Hamad Bin Khalifa University, Doha, Qatar
- Rizwan Qureshi
- Department of Imaging Physics, MD Anderson Cancer Center, University of Texas, Houston, TX, United States
- Zubair Shah
- College of Science and Engineering, Hamad Bin Khalifa University, Doha, Qatar
9. Ng CKC. Generative Adversarial Network (Generative Artificial Intelligence) in Pediatric Radiology: A Systematic Review. Children (Basel) 2023;10:1372. [PMID: 37628371] [PMCID: PMC10453402] [DOI: 10.3390/children10081372]
Abstract
Generative artificial intelligence, especially the generative adversarial network (GAN), is an important research area in radiology, as evidenced by the number of literature reviews on the role of the GAN in radiology published in the last few years. However, no review article about the GAN in pediatric radiology has been published yet. The purpose of this paper is to systematically review applications of the GAN in pediatric radiology, their performance, and the methods used for performance evaluation. Electronic databases were searched on 6 April 2023. Thirty-seven papers met the selection criteria and were included. This review reveals that the GAN can be applied to magnetic resonance imaging, X-ray, computed tomography, ultrasound, and positron emission tomography for image translation, segmentation, reconstruction, quality assessment, synthesis, data augmentation, and disease diagnosis. About 80% of the included studies compared their GAN model performance with that of other approaches and indicated that their GAN models outperformed the others by 0.1-158.6%. However, these findings should be used with caution because of a number of methodological weaknesses. Future GAN studies will need more robust methods to address these issues; otherwise, clinical adoption of GAN-based applications in pediatric radiology will be hindered and the potential advantages of the GAN will not be widely realized.
Affiliation(s)
- Curtise K. C. Ng
- Curtin Medical School, Curtin University, GPO Box U1987, Perth, WA 6845, Australia; Tel.: +61-8-9266-7314; Fax: +61-8-9266-2377
- Curtin Health Innovation Research Institute (CHIRI), Faculty of Health Sciences, Curtin University, GPO Box U1987, Perth, WA 6845, Australia
10. Kebaili A, Lapuyade-Lahorgue J, Ruan S. Deep Learning Approaches for Data Augmentation in Medical Imaging: A Review. J Imaging 2023;9:81. [PMID: 37103232] [PMCID: PMC10144738] [DOI: 10.3390/jimaging9040081]
Abstract
Deep learning has become a popular tool for medical image analysis, but the limited availability of training data remains a major challenge, particularly in the medical field, where data acquisition can be costly and subject to privacy regulations. Data augmentation techniques offer a solution by artificially increasing the number of training samples, but traditional techniques often produce limited and unconvincing results. To address this issue, a growing number of studies have proposed using deep generative models to generate more realistic and diverse data that conform to the true distribution of the data. In this review, we focus on three types of deep generative models for medical image augmentation: variational autoencoders, generative adversarial networks, and diffusion models. We provide an overview of the current state of the art for each of these models and discuss their potential for different downstream tasks in medical imaging, including classification, segmentation, and cross-modal translation. We also evaluate the strengths and limitations of each model and suggest directions for future research. Our goal is to provide a comprehensive review of the use of deep generative models for medical image augmentation and to highlight their potential for improving the performance of deep learning algorithms in medical image analysis.
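To make one of the three model families above concrete, the sampling step at the heart of a variational autoencoder is the reparameterization trick, which keeps sampling differentiable with respect to the encoder outputs. This is a generic NumPy sketch of that mechanism, not code from the review:

```python
import numpy as np

def reparameterize(mu, log_var, rng):
    """VAE reparameterization trick: sample z = mu + sigma * eps with
    eps ~ N(0, I), so gradients can flow through mu and log_var."""
    sigma = np.exp(0.5 * log_var)
    eps = rng.standard_normal(mu.shape)
    return mu + sigma * eps

def kl_to_standard_normal(mu, log_var):
    """Closed-form KL(q(z|x) || N(0, I)) per sample, the regularizer
    in the VAE training objective."""
    return 0.5 * np.sum(np.exp(log_var) + mu ** 2 - 1.0 - log_var, axis=-1)
```

During training the KL term pulls the latent distribution toward N(0, I); for augmentation, new synthetic samples are then obtained by decoding fresh draws of z.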
Affiliation(s)
- Su Ruan
- Université Rouen Normandie, INSA Rouen Normandie, Université Le Havre Normandie, Normandie Univ, LITIS UR 4108, F-76000 Rouen, France
11. A Systematic Literature Review on Applications of GAN-Synthesized Images for Brain MRI. Future Internet 2022. [DOI: 10.3390/fi14120351]
Abstract
With advances in brain imaging, magnetic resonance imaging (MRI) is evolving into a popular radiological tool for clinical diagnosis. Deep learning (DL) methods can detect abnormalities in brain images without an extensive manual feature extraction process. Generative adversarial network (GAN)-synthesized images have many applications in this field beyond augmentation, such as image translation, registration, super-resolution, denoising, motion correction, segmentation, reconstruction, and contrast enhancement. The existing literature was reviewed systematically to understand the role of GAN-synthesized images in brain disease diagnosis. The Web of Science and Scopus databases were searched extensively for relevant studies from the last 6 years for this systematic literature review (SLR). Predefined inclusion and exclusion criteria were used to filter the search results, and data extraction was based on the related research questions (RQs). This SLR identifies the loss functions used in the above applications and the software used to process brain MRIs. A comparative study of existing evaluation metrics for GAN-synthesized images helps in choosing the proper metric for an application. GAN-synthesized images are likely to play a crucial role in the clinical sector in the coming years, and this paper provides a baseline for other researchers in the field.
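For orientation on the loss functions this SLR catalogs, the original GAN objective can be written as the two losses below. This is a generic NumPy sketch of the standard formulation, not tied to any reviewed study:

```python
import numpy as np

def d_loss(d_real, d_fake, eps=1e-7):
    """Discriminator loss of the original GAN objective:
    -E[log D(x)] - E[log(1 - D(G(z)))], with D outputs in (0, 1)."""
    d_real = np.clip(d_real, eps, 1.0 - eps)
    d_fake = np.clip(d_fake, eps, 1.0 - eps)
    return float(-np.mean(np.log(d_real)) - np.mean(np.log(1.0 - d_fake)))

def g_loss_nonsaturating(d_fake, eps=1e-7):
    """Non-saturating generator loss -E[log D(G(z))], the variant
    commonly used in practice for stronger early gradients."""
    d_fake = np.clip(d_fake, eps, 1.0 - eps)
    return float(-np.mean(np.log(d_fake)))
```

At the theoretical equilibrium where the discriminator outputs 0.5 everywhere, the discriminator loss equals 2 log 2; practical brain-MRI GANs typically combine such an adversarial term with pixel-wise or perceptual losses, which is one reason the SLR compares loss functions across applications.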