1. El Hachimy I, Kabelma D, Echcharef C, Hassani M, Benamar N, Hajji N. A comprehensive survey on the use of deep learning techniques in glioblastoma. Artif Intell Med 2024; 154:102902. [PMID: 38852314 DOI: 10.1016/j.artmed.2024.102902]
Abstract
Glioblastoma, classified as a grade 4 astrocytoma, is the most aggressive brain tumor and often leads to dire outcomes. Treating glioblastoma is made harder by the convergence of genetic mutations and disruptions in gene expression, driven by alterations in epigenetic mechanisms. Artificial intelligence, including machine learning algorithms, has emerged as an indispensable asset in medical analysis. Current research on glioblastoma predominantly revolves around non-omics data modalities, notably magnetic resonance imaging, computed tomography, and positron emission tomography. Nonetheless, the assimilation of omics data, encompassing gene expression through transcriptomics and epigenomics, offers pivotal insights into patients' conditions. These insights, in turn, hold significant value for refining diagnoses, guiding decision-making, and devising effective treatment strategies. The core objective of this survey is a comprehensive exploration of noteworthy applications of machine learning methodologies in the domain of glioblastoma, alongside closely associated research pursuits. The study highlights the deployment of artificial intelligence techniques for both non-omics and omics data across a range of tasks. Furthermore, the survey underscores the challenges posed by the inherent heterogeneity of glioblastoma and delves into strategies aimed at addressing its multifaceted nature.
Affiliation(s)
- Mohamed Hassani
- Cancer Division, Faculty of Medicine, Department of Biomolecular Medicine, Imperial College, London, United Kingdom
- Nabil Benamar
- Moulay Ismail University of Meknes, Meknes, Morocco; Al Akhawayn University in Ifrane, Ifrane, Morocco
- Nabil Hajji
- Cancer Division, Faculty of Medicine, Department of Biomolecular Medicine, Imperial College, London, United Kingdom; Department of Medical Biochemistry, Molecular Biology and Immunology, School of Medicine, Virgen Macarena University Hospital, University of Seville, Seville, Spain
2. Kolhar M, Al Rajeh AM, Kazi RNA. Augmenting Radiological Diagnostics with AI for Tuberculosis and COVID-19 Disease Detection: Deep Learning Detection of Chest Radiographs. Diagnostics (Basel) 2024; 14:1334. [PMID: 39001228 PMCID: PMC11240993 DOI: 10.3390/diagnostics14131334]
Abstract
In this research, we introduce a network that can identify pneumonia, COVID-19, and tuberculosis from chest X-ray images. The study emphasizes tuberculosis, COVID-19, and healthy lung conditions, discussing how advanced neural networks, such as VGG16 and ResNet50, can improve the detection of lung disease from images. To meet the model's input requirements, we enhanced the training images through data augmentation. We evaluated model performance by analyzing precision, recall, and F1 scores across the training, validation, and testing datasets. The results show that ResNet50 outperformed VGG16 in accuracy and resilience, displaying superior ROC AUC values in both validation and test scenarios. Particularly impressive were ResNet50's precision and recall rates, nearing 0.99 for all conditions in the test set. On the other hand, VGG16 also performed well during testing, detecting tuberculosis with a precision of 0.99 and a recall of 0.93. Our study demonstrates the effectiveness of ResNet50 over traditional approaches such as VGG16, using data augmentation and class balancing to enhance classification accuracy. This positions our approach as an advancement in state-of-the-art deep learning applications in medical imaging. By enhancing the accuracy and reliability of diagnosing ailments such as COVID-19 and tuberculosis, our models have the potential to transform care and treatment strategies, highlighting their role in clinical diagnostics.
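The per-class precision, recall, and F1 figures quoted above can be reproduced from raw predictions with a few lines of code. The sketch below is illustrative only: the class names and toy labels are invented, not the study's data.

```python
def per_class_metrics(y_true, y_pred, labels):
    """Per-class precision, recall, and F1 from parallel label lists."""
    metrics = {}
    for c in labels:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = (2 * precision * recall / (precision + recall)
              if precision + recall else 0.0)
        metrics[c] = {"precision": precision, "recall": recall, "f1": f1}
    return metrics

# Toy labels for the three conditions considered in the study
y_true = ["tb", "covid", "healthy", "tb", "covid", "healthy", "tb"]
y_pred = ["tb", "covid", "healthy", "tb", "healthy", "healthy", "covid"]
scores = per_class_metrics(y_true, y_pred, ["tb", "covid", "healthy"])
```

The same arithmetic underlies the near-0.99 figures in the abstract; only the counts differ.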
Affiliation(s)
- Manjur Kolhar
- Department of Health Informatics, College of Applied Medical Sciences, King Faisal University, Al-Hofuf 31982, Saudi Arabia
- Ahmed M Al Rajeh
- College of Applied Medical Sciences, King Faisal University, Al-Hofuf 31982, Saudi Arabia
- Raisa Nazir Ahmed Kazi
- College of Applied Medical Sciences, King Faisal University, Al-Hofuf 31982, Saudi Arabia
3. Zahoor MM, Khan SH, Alahmadi TJ, Alsahfi T, Mazroa ASA, Sakr HA, Alqahtani S, Albanyan A, Alshemaimri BK. Brain Tumor MRI Classification Using a Novel Deep Residual and Regional CNN. Biomedicines 2024; 12:1395. [PMID: 39061969 PMCID: PMC11274019 DOI: 10.3390/biomedicines12071395]
Abstract
Brain tumor classification is essential for clinical diagnosis and treatment planning. Deep learning models have shown great promise in this task, but they are often challenged by the complex and diverse nature of brain tumors. To address this challenge, we propose a novel deep residual and region-based convolutional neural network (CNN) architecture, called Res-BRNet, for brain tumor classification using magnetic resonance imaging (MRI) scans. Res-BRNet employs a systematic combination of regional and boundary-based operations within modified spatial and residual blocks. The spatial blocks extract homogeneity, heterogeneity, and boundary-related features of brain tumors, while the residual blocks capture local and global texture variations. We evaluated the performance of Res-BRNet on a challenging dataset collected from Kaggle repositories, Br35H, and figshare, containing various tumor categories, including meningioma, glioma, and pituitary tumors, as well as healthy images. Res-BRNet outperformed standard CNN models, achieving excellent accuracy (98.22%), sensitivity (0.9811), F1-score (0.9841), and precision (0.9822). Our results suggest that Res-BRNet is a promising tool for brain tumor classification, with the potential to improve the accuracy and efficiency of clinical diagnosis and treatment planning.
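The residual blocks at the heart of architectures like Res-BRNet rest on a simple idea: the block learns a correction F(x) and adds the input back, y = ReLU(F(x) + x), so low-level texture and gradients can bypass the transformation. The NumPy sketch below shows only that skip connection, using 1x1 convolutions and random weights purely for illustration; it is not the Res-BRNet architecture itself.

```python
import numpy as np

def conv1x1(x, w):
    """Pointwise (1x1) convolution: mixes channels at each spatial location."""
    # x: (H, W, C_in), w: (C_in, C_out)
    return x @ w

def residual_block(x, w1, w2):
    """y = ReLU(F(x) + x): the identity shortcut lets the block learn a
    residual correction instead of the full mapping."""
    h = np.maximum(0.0, conv1x1(x, w1))   # first transform + ReLU
    h = conv1x1(h, w2)                    # second transform
    return np.maximum(0.0, h + x)         # add the identity shortcut

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 8, 4))        # toy feature map
w1 = rng.standard_normal((4, 4)) * 0.1    # random illustrative weights
w2 = rng.standard_normal((4, 4)) * 0.1
y = residual_block(x, w1, w2)
```

Because the shortcut adds x unchanged, input and output must share the same shape, which is why residual blocks preserve channel and spatial dimensions.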
Affiliation(s)
- Mirza Mumtaz Zahoor
- Faculty of Computer Sciences, Ibadat International University, Islamabad 44000, Pakistan
- Saddam Hussain Khan
- Department of Computer System Engineering, University of Engineering and Applied Science (UEAS), Swat 19060, Pakistan
- Tahani Jaser Alahmadi
- Department of Information Systems, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
- Tariq Alsahfi
- Department of Information Systems and Technology, College of Computer Science and Engineering, University of Jeddah, Jeddah 21959, Saudi Arabia
- Alanoud S. Al Mazroa
- Department of Information Systems, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
- Hesham A. Sakr
- Nile Higher Institute for Engineering and Technology, Mansoura 35511, Dakahlia, Egypt
- Saeed Alqahtani
- Radiological Sciences Department, College of Applied Medical Sciences, Najran University, Najran 61441, Saudi Arabia
- Abdullah Albanyan
- College of Computer Engineering and Sciences, Prince Sattam bin Abdulaziz University, Al-Kharj 16278, Saudi Arabia
4. Zadeh Shirazi A, Tofighi M, Gharavi A, Gomez GA. The Application of Artificial Intelligence to Cancer Research: A Comprehensive Guide. Technol Cancer Res Treat 2024; 23:15330338241250324. [PMID: 38775067 PMCID: PMC11113055 DOI: 10.1177/15330338241250324]
Abstract
Advancements in AI have notably changed cancer research, improving patient care by enhancing detection, survival prediction, and treatment efficacy. This review covers the role of machine learning, soft computing, and deep learning in oncology, explaining key concepts and algorithms (such as SVM, Naïve Bayes, and CNN) in a clear, accessible manner. It aims to make AI advancements understandable to a broad audience, focusing on their application in diagnosing, classifying, and predicting various cancer types, thereby underlining AI's potential to improve patient outcomes. Moreover, we present a tabular summary of the most significant advances from the literature, offering a time-saving resource for readers to grasp each study's main contributions. The remarkable benefits of AI-powered algorithms in cancer care underscore their potential for advancing cancer research and clinical practice. This review is a valuable resource for researchers and clinicians interested in the transformative implications of AI in cancer care.
Affiliation(s)
- Amin Zadeh Shirazi
- Centre for Cancer Biology, SA Pathology and the University of South Australia, Adelaide, SA, Australia
- Morteza Tofighi
- Department of Electrical Engineering, Faculty of Engineering, Bu-Ali Sina University, Hamedan, Iran
- Alireza Gharavi
- Department of Computer Science, Azad University, Mashhad Branch, Mashhad, Iran
- Guillermo A. Gomez
- Centre for Cancer Biology, SA Pathology and the University of South Australia, Adelaide, SA, Australia
5. ASI-DBNet: An Adaptive Sparse Interactive ResNet-Vision Transformer Dual-Branch Network for the Grading of Brain Cancer Histopathological Images. Interdiscip Sci 2023; 15:15-31. [PMID: 35810266 DOI: 10.1007/s12539-022-00532-0]
Abstract
Brain cancer, arising in the brain and central nervous system, is among the deadliest cancers, and rapid, precise grading is essential to reduce patient suffering and improve survival. Traditional convolutional neural network (CNN)-based computer-aided diagnosis algorithms cannot fully utilize the global information of pathology images, while the recently popular vision transformer (ViT) model does not focus enough on their local details; both shortcomings blur the model's focus and limit the accuracy of brain cancer grading. To solve this problem, we propose an adaptive sparse interaction ResNet-ViT dual-branch network (ASI-DBNet). First, we design the ResNet-ViT parallel structure to simultaneously capture and retain the local and global information of pathology images. Second, we design the adaptive sparse interaction block (ASIB) to let the ResNet branch interact with the ViT branch. Furthermore, we introduce an attention mechanism in ASIB to adaptively filter redundant information from the dual branches during the interaction, so that the feature maps exchanged are more beneficial. Intensive experiments show that ASI-DBNet outperforms various baseline and SOTA models, with 95.24% accuracy across four grades. In particular, for brain tumors with a high degree of deterioration (Grade III and Grade IV), ASI-DBNet achieves diagnostic accuracies of 97.93% and 96.28%, respectively, which is of great clinical significance. Meanwhile, gradient-weighted class activation mapping (Grad-CAM) and attention rollout visualization are used to expose the working logic behind the model, and the resulting feature maps highlight important distinguishing features related to the diagnosis. The interpretability and confidence of the model are thereby improved, which is of great value for the clinical diagnosis of brain cancer.
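The interaction between the two branches can be pictured as a weighted combination of local (CNN-style) and global (ViT-style) features. The toy sketch below fuses two feature vectors with softmax weights derived from each branch's energy; it is a loose stand-in to illustrate gated fusion, not the published ASIB mechanism.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over a 1-D array."""
    e = np.exp(z - z.max())
    return e / e.sum()

def gated_fuse(local_feat, global_feat):
    """Fuse a 'local' and a 'global' feature vector with scalar gates.
    Here the gate comes from each branch's norm, as a simple stand-in
    for a learned attention weight."""
    energies = np.array([np.linalg.norm(local_feat),
                         np.linalg.norm(global_feat)])
    w = softmax(energies)                  # branch weights sum to 1
    return w[0] * local_feat + w[1] * global_feat, w

local = np.array([1.0, 0.0, 2.0])          # toy CNN-branch features
glob = np.array([0.0, 3.0, 1.0])           # toy ViT-branch features
fused, w = gated_fuse(local, glob)
```

The softmax guarantees the two branch contributions trade off against each other, which is the basic behaviour an adaptive interaction block needs.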
6. Jørgensen ACS, Hill CS, Sturrock M, Tang W, Karamched SR, Gorup D, Lythgoe MF, Parrinello S, Marguerat S, Shahrezaei V. Data-driven spatio-temporal modelling of glioblastoma. R Soc Open Sci 2023; 10:221444. [PMID: 36968241 PMCID: PMC10031411 DOI: 10.1098/rsos.221444]
Abstract
Mathematical oncology provides unique and invaluable insights into tumour growth on both the microscopic and macroscopic levels. This review presents state-of-the-art modelling techniques and focuses on their role in understanding glioblastoma, a malignant form of brain cancer. For each approach, we summarize the scope, drawbacks and assets. We highlight the potential clinical applications of each modelling technique and discuss the connections between the mathematical models and the molecular and imaging data used to inform them. By doing so, we aim to prime cancer researchers with current and emerging computational tools for understanding tumour progression. By providing an in-depth picture of the different modelling techniques, we also aim to assist researchers who seek to build and develop their own models and the associated inference frameworks. Our article thus strikes a unique balance. On the one hand, we provide a comprehensive overview of the available modelling techniques and their applications, including key mathematical expressions. On the other hand, the content is accessible to mathematicians and biomedical scientists alike to accommodate the interdisciplinary nature of cancer research.
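One canonical model covered by such reviews is the proliferation-invasion (Fisher-KPP) equation, u_t = D u_xx + ρ u (1 − u), which treats tumour growth as diffusion plus logistic proliferation. The 1-D finite-difference sketch below is a minimal illustration; the parameter values are arbitrary, not fitted to any glioblastoma data.

```python
import numpy as np

def simulate_fisher_kpp(n=100, steps=500, D=0.1, rho=0.5, dx=1.0, dt=0.1):
    """Explicit finite-difference solution of u_t = D u_xx + rho u (1 - u),
    a minimal proliferation-invasion model of tumour cell density u in [0, 1].
    Stability requires dt * D / dx**2 <= 0.5 for this explicit scheme."""
    u = np.zeros(n)
    u[:5] = 1.0                        # small initial tumour at the left edge
    for _ in range(steps):
        lap = np.zeros(n)
        lap[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2  # discrete u_xx
        u = u + dt * (D * lap + rho * u * (1 - u))
        u = np.clip(u, 0.0, 1.0)       # keep the density physical
    return u

u = simulate_fisher_kpp()
```

The solution develops a travelling front whose speed is roughly 2·sqrt(D·ρ), which is what makes this model useful for relating imaging-visible tumour margins to infiltration.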
Affiliation(s)
- Ciaran Scott Hill
- Department of Neurosurgery, The National Hospital for Neurology and Neurosurgery, London WC1N 3BG, UK
- Samantha Dickson Brain Cancer Unit, UCL Cancer Institute, London WC1E 6DD, UK
- Marc Sturrock
- Department of Physiology and Medical Physics, Royal College of Surgeons in Ireland, Dublin D02 YN77, Ireland
- Wenhao Tang
- Department of Mathematics, Faculty of Natural Sciences, Imperial College London, London SW7 2AZ, UK
- Saketh R. Karamched
- Division of Medicine, Centre for Advanced Biomedical Imaging, University College London (UCL), London WC1E 6BT, UK
- Dunja Gorup
- Division of Medicine, Centre for Advanced Biomedical Imaging, University College London (UCL), London WC1E 6BT, UK
- Mark F. Lythgoe
- Division of Medicine, Centre for Advanced Biomedical Imaging, University College London (UCL), London WC1E 6BT, UK
- Simona Parrinello
- Samantha Dickson Brain Cancer Unit, UCL Cancer Institute, London WC1E 6DD, UK
- Samuel Marguerat
- Genomics Translational Technology Platform, UCL Cancer Institute, University College London, London WC1E 6DD, UK
- Vahid Shahrezaei
- Department of Mathematics, Faculty of Natural Sciences, Imperial College London, London SW7 2AZ, UK
7. Abbas A, Gaber MM, Abdelsamea MM. XDecompo: Explainable Decomposition Approach in Convolutional Neural Networks for Tumour Image Classification. Sensors (Basel) 2022; 22:9875. [PMID: 36560243 PMCID: PMC9782528 DOI: 10.3390/s22249875]
Abstract
Of the various tumour types, colorectal cancer and brain tumours are still considered among the most serious and deadly diseases in the world. Therefore, many researchers are interested in improving the accuracy and reliability of diagnostic medical machine learning models. In computer-aided diagnosis, self-supervised learning has been proven to be an effective solution when dealing with datasets with insufficient data annotations. However, medical image datasets often suffer from data irregularities, making the recognition task even more challenging. The class decomposition approach has provided a robust solution to such a challenging problem by simplifying the learning of class boundaries of a dataset. In this paper, we propose a robust self-supervised model, called XDecompo, to improve the transferability of features from the pretext task to the downstream task. XDecompo has been designed based on an affinity propagation-based class decomposition to effectively encourage learning of the class boundaries in the downstream task. XDecompo has an explainable component to highlight important pixels that contribute to classification and explain the effect of class decomposition on improving the speciality of extracted features. We also explore the generalisability of XDecompo in handling different medical datasets, such as histopathology for colorectal cancer and brain tumour images. The quantitative results demonstrate the robustness of XDecompo with high accuracy of 96.16% and 94.30% for CRC and brain tumour images, respectively. XDecompo has demonstrated its generalization capability and achieved high classification accuracy (both quantitatively and qualitatively) in different medical image datasets, compared with other models. Moreover, a post hoc explainable method has been used to validate the feature transferability, demonstrating highly accurate feature representations.
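Class decomposition, as described above, can be illustrated in a few lines: each original class is clustered into sub-classes, and the model is trained on the finer labels, which simplifies the class boundaries it must learn. XDecompo uses affinity propagation for this step; the sketch below substitutes a plain 2-means on 1-D toy features to show only the relabelling idea, not the paper's method.

```python
import numpy as np

def two_means(x, iters=20, seed=0):
    """Minimal 2-means clustering of 1-D values; returns sub-cluster ids."""
    rng = np.random.default_rng(seed)
    centers = rng.choice(x, size=2, replace=False)
    for _ in range(iters):
        assign = np.abs(x[:, None] - centers[None, :]).argmin(axis=1)
        for k in (0, 1):
            if (assign == k).any():
                centers[k] = x[assign == k].mean()
    return assign

def decompose_classes(features, labels):
    """Split each original class into two sub-classes before training.
    Affinity propagation plays this role in XDecompo; 2-means stands in."""
    new_labels = np.empty(len(labels), dtype=object)
    for c in set(labels):
        idx = [i for i, l in enumerate(labels) if l == c]
        sub = two_means(features[idx])
        for i, s in zip(idx, sub):
            new_labels[i] = f"{c}_{s}"
    return new_labels

feats = np.array([0.1, 0.2, 0.9, 1.0, 5.0, 5.1, 9.0, 9.2])
labels = ["a", "a", "a", "a", "b", "b", "b", "b"]
sub_labels = decompose_classes(feats, labels)
```

After training on the decomposed labels, predictions are mapped back to the parent class by stripping the sub-cluster suffix.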
Affiliation(s)
- Asmaa Abbas
- School of Computing and Digital Technology, Birmingham City University, Birmingham B4 7AP, UK
- Mohamed Medhat Gaber
- School of Computing and Digital Technology, Birmingham City University, Birmingham B4 7AP, UK
- Faculty of Computer Science and Engineering, Galala University, Suez 435611, Egypt
- Mohammed M. Abdelsamea
- School of Computing and Digital Technology, Birmingham City University, Birmingham B4 7AP, UK
- Department of Computer Science, Faculty of Computers and Information, University of Assiut, Assiut 71515, Egypt
8. Requa J, Godard T, Mandal R, Balzer B, Whittemore D, George E, Barcelona F, Lambert C, Lee J, Lambert A, Larson A, Osmond G. High-fidelity detection, subtyping, and localization of five skin neoplasms using supervised and semi-supervised learning. J Pathol Inform 2022; 14:100159. [PMID: 36506813 PMCID: PMC9731861 DOI: 10.1016/j.jpi.2022.100159]
Abstract
Background: Skin cancers are the most common malignancies diagnosed worldwide. While the early detection and treatment of pre-cancerous and cancerous skin lesions can dramatically improve outcomes, factors such as a global shortage of pathologists, increased workloads, and high rates of diagnostic discordance underscore the need for techniques that improve pathology workflows. Although AI models are now being used to classify lesions from whole slide images (WSIs), diagnostic performance rarely surpasses that of expert pathologists. Objectives: The objective of the present study was to create an AI model that detects and classifies skin lesions with a higher degree of sensitivity than previously demonstrated, with the potential to match and eventually surpass expert pathologists and improve clinical workflows. Methods: We combined supervised learning (SL) with semi-supervised learning (SSL) to produce an end-to-end multi-level skin detection system that not only detects 5 main types of skin lesions with high sensitivity and specificity, but also subtypes and localizes lesions and provides margin status to evaluate the proximity of the lesion to non-epidermal margins. The Supervised Training Subset consisted of 2188 random WSIs collected by the PathologyWatch (PW) laboratory between 2013 and 2018, while the Weakly Supervised Subset consisted of 5161 WSIs from daily case specimens. The Validation Set consisted of 250 curated daily case WSIs obtained from the PW tissue archives and included 50 "mimickers". The Testing Set (3821 WSIs) was composed of non-curated daily case specimens collected from July 20, 2021 to August 20, 2021 from PW laboratories. Results: The performance characteristics of our AI model (i.e., Mihm) were assessed retrospectively by running the Testing Set through the Mihm Evaluation Pipeline. Our results show that the sensitivity of Mihm in classifying melanocytic lesions, basal cell carcinoma, atypical squamous lesions, verruca vulgaris, and seborrheic keratosis was 98.91% (95% CI: 98.27%, 99.55%), 97.24% (95% CI: 96.15%, 98.33%), 95.26% (95% CI: 93.79%, 96.73%), 93.50% (95% CI: 89.14%, 97.86%), and 86.91% (95% CI: 82.13%, 91.69%), respectively. Additionally, our multi-level (i.e., patch-level, ROI-level, and WSI-level) detection algorithm includes a qualitative feature that subtypes lesions, an AI overlay in the front-end digital display that localizes diagnostic ROIs, and a report on margin status produced by detecting overlap between lesions and non-epidermal tissue margins. Conclusions: Our AI model, developed in collaboration with dermatopathologists, detects 5 skin lesion types with higher sensitivity than previously published AI models, and provides end users with information such as subtyping, localization, and margin status in a front-end digital display. Our end-to-end system has the potential to improve pathology workflows by increasing diagnostic accuracy, expediting the course of patient care, and ultimately improving patient outcomes.
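The per-class sensitivities with 95% confidence intervals quoted above follow the standard pattern of a proportion plus a normal-approximation interval. A small sketch of that calculation follows; the counts are hypothetical, since the paper reports rates rather than raw counts.

```python
import math

def sensitivity_with_ci(tp, fn, z=1.96):
    """Sensitivity (recall) with a normal-approximation 95% CI:
    p +/- z * sqrt(p * (1 - p) / n), clipped to [0, 1]."""
    n = tp + fn
    p = tp / n
    half = z * math.sqrt(p * (1 - p) / n)
    return p, (max(0.0, p - half), min(1.0, p + half))

# Hypothetical counts for one lesion class
sens, (lo, hi) = sensitivity_with_ci(tp=980, fn=20)
```

With thousands of test slides per class, the interval narrows to about a percentage point, which matches the tight CIs reported in the Results.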
Affiliation(s)
- James Requa
- Pathology Watch, 497 West 4800 South, Suite 201, Murray, UT 84123, USA
- Tuatini Godard
- Pathology Watch, 497 West 4800 South, Suite 201, Murray, UT 84123, USA
- Rajni Mandal
- Pathology Watch, 497 West 4800 South, Suite 201, Murray, UT 84123, USA
- Bonnie Balzer
- Cedars-Sinai Medical Center, 8700 Beverly Blvd, Los Angeles, CA 90048, USA
- Darren Whittemore
- Pathology Watch, 497 West 4800 South, Suite 201, Murray, UT 84123, USA
- Eva George
- Pathology Watch, 497 West 4800 South, Suite 201, Murray, UT 84123, USA
- Chalette Lambert
- Kirk Kerkorian School of Medicine at UNLV, University of Nevada, Las Vegas, Mail Stop: 3070, 2040 W Charleston Blvd., Las Vegas, NV 89102-2244, USA
- Jonathan Lee
- Bethesda Dermatopathology Laboratory, 1730 Elton Road, Silver Spring, MD 20903, USA
- Allison Lambert
- Pathology Watch, 497 West 4800 South, Suite 201, Murray, UT 84123, USA
- April Larson
- Pathology Watch, 497 West 4800 South, Suite 201, Murray, UT 84123, USA
- Gregory Osmond
- Intermountain Healthcare, Saint George Regional Hospital, Department of Pathology, 1380 East Medical Center Drive, Saint George, Utah 84790, USA. Corresponding author.
|
9. Xie Y, Zaccagna F, Rundo L, Testa C, Agati R, Lodi R, Manners DN, Tonon C. Convolutional Neural Network Techniques for Brain Tumor Classification (from 2015 to 2022): Review, Challenges, and Future Perspectives. Diagnostics (Basel) 2022; 12:1850. [PMID: 36010200 PMCID: PMC9406354 DOI: 10.3390/diagnostics12081850]
Abstract
Convolutional neural networks (CNNs) constitute a widely used deep learning approach that has frequently been applied to the problem of brain tumor diagnosis. Such techniques still face critical challenges on the way to clinical application. The main objective of this work is to present a comprehensive review of studies using CNN architectures to classify brain tumors from MR images, with the aim of identifying useful strategies for, and possible impediments to, the development of this technology. Relevant articles were identified using a predefined, systematic procedure. For each article, data were extracted regarding the training data, target problems, network architecture, validation methods, and reported quantitative performance criteria. The clinical relevance of the studies was then evaluated to identify limitations, considering both the merits of convolutional neural networks and the remaining challenges that must be solved to promote the clinical application and development of CNN algorithms. Finally, possible directions for future research are discussed for researchers in the biomedical and machine learning communities. A total of 83 studies were identified and reviewed. They differed in the precise classification problem targeted and in the strategies used to construct and train the chosen CNN. Consequently, the reported performance varied widely, with accuracies of 91.63–100% in differentiating meningiomas, gliomas, and pituitary tumors (26 articles) and of 60.0–99.46% in distinguishing low-grade from high-grade gliomas (13 articles). The review provides a survey of the state of the art in CNN-based deep learning methods for brain tumor classification. Many networks demonstrated good performance, and it is not evident that any specific methodological choice greatly outperforms the alternatives, especially given the inconsistencies encountered in the reporting of validation methods, performance metrics, and training data. Few studies have focused on clinical usability.
Affiliation(s)
- Yuting Xie
- Department of Biomedical and Neuromotor Sciences, University of Bologna, 40126 Bologna, Italy
- Fulvio Zaccagna
- Department of Biomedical and Neuromotor Sciences, University of Bologna, 40126 Bologna, Italy
- Functional and Molecular Neuroimaging Unit, IRCCS Istituto delle Scienze Neurologiche di Bologna, Bellaria Hospital, 40139 Bologna, Italy
- Leonardo Rundo
- Department of Information and Electrical Engineering and Applied Mathematics, University of Salerno, 84084 Fisciano, Italy
- Claudia Testa
- Functional and Molecular Neuroimaging Unit, IRCCS Istituto delle Scienze Neurologiche di Bologna, Bellaria Hospital, 40139 Bologna, Italy
- Department of Physics and Astronomy, University of Bologna, 40127 Bologna, Italy
- Raffaele Agati
- Programma Neuroradiologia con Tecniche ad elevata complessità, IRCCS Istituto delle Scienze Neurologiche di Bologna, Bellaria Hospital, 40139 Bologna, Italy
- Raffaele Lodi
- Department of Biomedical and Neuromotor Sciences, University of Bologna, 40126 Bologna, Italy
- IRCCS Istituto delle Scienze Neurologiche di Bologna, Bellaria Hospital, 40139 Bologna, Italy
- David Neil Manners (corresponding author)
- Department of Biomedical and Neuromotor Sciences, University of Bologna, 40126 Bologna, Italy
- Caterina Tonon
- Department of Biomedical and Neuromotor Sciences, University of Bologna, 40126 Bologna, Italy
- Functional and Molecular Neuroimaging Unit, IRCCS Istituto delle Scienze Neurologiche di Bologna, Bellaria Hospital, 40139 Bologna, Italy
10. Hong J, Huang Y, Ye J, Wang J, Xu X, Wu Y, Li Y, Zhao J, Li R, Kang J, Lai X. 3D FRN-ResNet: An Automated Major Depressive Disorder Structural Magnetic Resonance Imaging Data Identification Framework. Front Aging Neurosci 2022; 14:912283. [PMID: 35645776 PMCID: PMC9136074 DOI: 10.3389/fnagi.2022.912283]
Abstract
Major Depressive Disorder (MDD) is the most prevalent psychiatric disorder, seriously affecting quality of life. Manually identifying MDD from structural magnetic resonance imaging (sMRI) is laborious and time-consuming due to the lack of clear physiological indicators. With the development of deep learning, many automated identification methods have been developed, but most operate on 2D images, resulting in poor performance. In addition, the heterogeneity of MDD means that slightly different changes appear in different patients' brain imaging, which constitutes a barrier to MDD identification based on brain sMRI images. We propose an automated MDD identification framework for sMRI data (3D FRN-ResNet) to comprehensively address these challenges; it uses a 3D-ResNet to extract features and reconstructs them based on feature maps. Notably, 3D FRN-ResNet fully exploits the interlayer structure information in 3D sMRI data and preserves most of the spatial details as well as the location information when converting the extracted features into vectors. Furthermore, our model solves the feature map reconstruction problem in closed form to produce a straightforward and efficient classifier and dramatically improve model performance. We evaluate our framework on a private brain sMRI dataset of MDD patients. Experimental results show that the proposed model exhibits promising performance and outperforms typical alternative methods, achieving accuracy, recall, precision, and F1 values of 0.86776, 0.84237, 0.85333, and 0.84781, respectively.
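The four figures reported above derive from the binary confusion matrix in the usual way. A minimal sketch with hypothetical counts (not the study's data):

```python
def classification_metrics(tp, tn, fp, fn):
    """Accuracy, recall, precision, and F1 from binary confusion counts,
    the four quantities reported for the MDD classifier above."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    recall = tp / (tp + fn)                 # sensitivity
    precision = tp / (tp + fp)              # positive predictive value
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    return {"accuracy": accuracy, "recall": recall,
            "precision": precision, "f1": f1}

# Hypothetical counts chosen only to illustrate the formulas
m = classification_metrics(tp=40, tn=45, fp=7, fn=8)
```

Note that F1 is the harmonic mean of precision and recall, so it always lies between the two, as it does in the values quoted in the abstract.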
Affiliation(s)
- Jialin Hong
- School of Medical Technology and Information Engineering, Zhejiang Chinese Medical University, Hangzhou, China
- Yueqi Huang
- Department of Psychiatry, Hangzhou Seventh People’s Hospital, Hangzhou, China
- Jianming Ye
- First Affiliated Hospital, Gannan Medical University, Ganzhou, China
- Jianqing Wang
- School of Medical Technology and Information Engineering, Zhejiang Chinese Medical University, Hangzhou, China
- Xiaomei Xu
- School of Medical Technology and Information Engineering, Zhejiang Chinese Medical University, Hangzhou, China
- Yan Wu
- School of Medical Technology and Information Engineering, Zhejiang Chinese Medical University, Hangzhou, China
- Yi Li
- School of Medical Technology and Information Engineering, Zhejiang Chinese Medical University, Hangzhou, China
- Jialu Zhao
- School of Medical Technology and Information Engineering, Zhejiang Chinese Medical University, Hangzhou, China
- Ruipeng Li
- Hangzhou Third People’s Hospital, Hangzhou, China
- Junlong Kang
- Zhongshan Hospital, Xiamen University, Xiamen, China
- Xiaobo Lai
- School of Medical Technology and Information Engineering, Zhejiang Chinese Medical University, Hangzhou, China
- Department of Nephrology Surgery, Hangzhou Hospital of Traditional Chinese Medicine, Hangzhou, China
11. Risk Factors of Restroke in Patients with Lacunar Cerebral Infarction Using Magnetic Resonance Imaging Image Features under Deep Learning Algorithm. Contrast Media Mol Imaging 2021; 2021:2527595. [PMID: 34887708 PMCID: PMC8616697 DOI: 10.1155/2021/2527595]
Abstract
This study aimed to explore magnetic resonance imaging (MRI) image features based on the fuzzy local information C-means clustering (FLICM) segmentation method in order to analyze the risk factors for restroke in patients with lacunar infarction. The FLICM algorithm was optimized by introducing the Canny edge detection algorithm and the Fourier shape descriptor. The optimized FLICM algorithm was compared with other algorithms on brain tissue MRI segmentation in terms of Jaccard coefficient, Dice coefficient, peak signal-to-noise ratio (PSNR), structural similarity index measure (SSIM), running time, and segmentation accuracy. Thirty-six patients with lacunar infarction were selected as study subjects and divided into a control group (no restroke, 20 cases) and a stroke group (restroke, 16 cases) according to whether restroke occurred. The differences in MRI imaging characteristics between the two groups were compared, and the risk factors for restroke after lacunar infarction were analyzed by multivariate logistic regression. The results showed that the Jaccard coefficient, Dice coefficient, PSNR, and SSIM of the optimized FLICM algorithm for segmenting brain tissue were all higher than those of the other algorithms, with the shortest running time (26 s) and the highest accuracy (97.86%). In the stroke group, the proportion of patients with a history of hypertension, the proportion with a periventricular white matter lesion (WML) score greater than 2, the proportion with a deep WML score of 2, and the average age were all markedly higher than in the control group (P < 0.05). Multivariate logistic regression showed that age and history of hypertension were risk factors for restroke after lacunar infarction (P < 0.05).
These findings indicate that the optimized FLICM algorithm can effectively segment brain MRI images and that age and hypertension history are risk factors for restroke in patients with lacunar infarction. This study could provide a reference for the diagnosis and prognosis of lacunar infarction.
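The segmentation quality measures named above are standard overlap metrics. As a quick reference, here is a minimal numpy sketch (not the paper's evaluation code) of the Jaccard and Dice coefficients on binary masks, which are related by D = 2J / (1 + J):

```python
import numpy as np

def jaccard(pred, truth):
    """Jaccard index |A∩B| / |A∪B| for binary masks."""
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return inter / union

def dice(pred, truth):
    """Dice coefficient 2|A∩B| / (|A| + |B|)."""
    inter = np.logical_and(pred, truth).sum()
    return 2 * inter / (pred.sum() + truth.sum())

# Ground-truth square vs. a prediction shifted down by one row
truth = np.zeros((8, 8), dtype=bool); truth[2:6, 2:6] = True   # 16 px
pred = np.zeros((8, 8), dtype=bool);  pred[3:7, 2:6] = True    # 16 px
j = jaccard(pred, truth)   # 12 / 20 = 0.6
d = dice(pred, truth)      # 24 / 32 = 0.75
```

Dice weights the intersection more heavily than Jaccard, which is why reported Dice values are always at least as high as Jaccard values on the same masks.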
|
12
|
Abstract
Brain tumors arise from uncontrolled and rapid growth of cells and, if not treated at an early stage, may lead to death. Despite many significant efforts and promising outcomes in this domain, accurate segmentation and classification remain a challenging task. A major challenge for brain tumor detection arises from variations in tumor location, shape, and size. The objective of this survey is to deliver a comprehensive review of the literature on brain tumor detection through magnetic resonance imaging to help researchers. The survey covers the anatomy of brain tumors, publicly available datasets, enhancement techniques, segmentation, feature extraction, classification, and the use of deep learning, transfer learning, and quantum machine learning for brain tumor analysis. Finally, it gathers the important literature on brain tumor detection together with its advantages, limitations, developments, and future trends.
|
13
|
Ali M, Ali R. Multi-Input Dual-Stream Capsule Network for Improved Lung and Colon Cancer Classification. Diagnostics (Basel) 2021; 11:1485. [PMID: 34441419 PMCID: PMC8393706 DOI: 10.3390/diagnostics11081485] [Citation(s) in RCA: 17] [Impact Index Per Article: 5.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/13/2021] [Revised: 08/12/2021] [Accepted: 08/13/2021] [Indexed: 12/19/2022] Open
Abstract
Lung and colon cancers are two of the most common causes of death and morbidity in humans. One of the most important aspects of appropriate treatment is the histopathological diagnosis of such cancers. The main goal of this study is therefore to use a multi-input capsule network and digital histopathology images to build an enhanced computerized diagnosis system for detecting squamous cell carcinomas and adenocarcinomas of the lungs, as well as adenocarcinomas of the colon. The proposed multi-input capsule network uses two convolutional layer blocks: the CLB (Convolutional Layers Block) employs traditional convolutional layers, whereas the SCLB (Separable Convolutional Layers Block) employs separable convolutional layers. The CLB takes unprocessed histopathology images as input, whereas the SCLB takes specially pre-processed histopathology images. Because histopathology slide images are typically dominated by red and blue hues, the pre-processing method uses color balancing, gamma correction, image sharpening, and multi-scale fusion as its major steps; all three channels (red, green, and blue) are adequately compensated during the color balancing phase. The dual-input technique helps the model learn features more effectively. On the benchmark LC25000 dataset, the empirical analysis indicates a significant improvement in classification results. The proposed model provides cutting-edge performance in all classes, with 99.58% overall accuracy for lung and colon abnormalities based on histopathological images.
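The abstract does not specify the exact color-balancing algorithm; a common way to compensate all three channels is the gray-world assumption, sketched below in numpy. This is an illustrative stand-in, not the authors' full pre-processing pipeline, which also includes gamma correction, sharpening, and multi-scale fusion.

```python
import numpy as np

def gray_world_balance(img):
    """Scale each RGB channel so its mean matches the global mean (gray-world assumption)."""
    img = img.astype(np.float64)
    channel_means = img.reshape(-1, 3).mean(axis=0)
    global_mean = channel_means.mean()
    balanced = img * (global_mean / channel_means)   # per-channel gain
    return np.clip(balanced, 0, 255)

# Synthetic red/blue-dominated patch, mimicking a stained-slide color cast
rng = np.random.default_rng(0)
patch = rng.uniform(0, 1, (16, 16, 3)) * np.array([200.0, 80.0, 180.0])
balanced = gray_world_balance(patch)
means = balanced.reshape(-1, 3).mean(axis=0)
# after balancing, the three channel means coincide
```

Gray-world balancing removes a global color cast but deliberately preserves local contrast, which is why it is typically paired with sharpening and fusion steps rather than used alone.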
Affiliation(s)
- Mumtaz Ali
- School of Computer Science, Huazhong University of Science and Technology, Wuhan 430074, China
- Department of Computer Systems Engineering, Sukkur IBA University, Sukkur 65200, Pakistan
- Riaz Ali
- Department of Computer Science, Sukkur IBA University, Sukkur 65200, Pakistan
|
14
|
Zadeh Shirazi A, McDonnell MD, Fornaciari E, Bagherian NS, Scheer KG, Samuel MS, Yaghoobi M, Ormsby RJ, Poonnoose S, Tumes DJ, Gomez GA. A deep convolutional neural network for segmentation of whole-slide pathology images identifies novel tumour cell-perivascular niche interactions that are associated with poor survival in glioblastoma. Br J Cancer 2021; 125:337-350. [PMID: 33927352 PMCID: PMC8329064 DOI: 10.1038/s41416-021-01394-x] [Citation(s) in RCA: 15] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/23/2020] [Revised: 03/16/2021] [Accepted: 04/08/2021] [Indexed: 02/01/2023] Open
Abstract
BACKGROUND Glioblastoma is the most aggressive type of brain cancer, with high levels of intra- and inter-tumour heterogeneity that contribute to its rapid growth and invasion within the brain. However, a spatial characterisation of gene signatures, and of the cell types expressing them in different tumour locations, is still lacking. METHODS We used a deep convolutional neural network (DCNN) as a semantic segmentation model to segment seven different tumour regions, including leading edge (LE), infiltrating tumour (IT), cellular tumour (CT), cellular tumour microvascular proliferation (CTmvp), cellular tumour pseudopalisading region around necrosis (CTpan), cellular tumour perinecrotic zones (CTpnz) and cellular tumour necrosis (CTne), in digitised glioblastoma histopathological slides from The Cancer Genome Atlas (TCGA). Correlation analysis between segmentation results from tumour images and matched RNA expression data was performed to identify genetic signatures specific to different tumour regions. RESULTS We found that spatially resolved gene signatures were strongly correlated with survival in patients with defined genetic mutations. Further in silico cell ontology analysis, along with single-cell RNA sequencing data from resected glioblastoma tissue samples, showed that these tumour regions had different gene signatures, whose expression was driven by different cell types in the regional tumour microenvironment. Our results further pointed to a key role for interactions between microglia/pericytes/monocytes and tumour cells occurring in the IT and CTmvp regions, which may contribute to poor patient survival. CONCLUSIONS This work identified key histopathological features that correlate with patient survival and detected spatially associated genetic signatures that contribute to tumour-stroma interactions and which should be investigated as new targets in glioblastoma.
The source code and datasets used are available on GitHub: https://github.com/amin20/GBM_WSSM.
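One way a per-slide segmentation feeds such a correlation analysis is through per-region area fractions, which can then be correlated with matched expression or survival data. Below is a minimal numpy sketch of that bookkeeping step (an illustrative assumption about the workflow, not code from the GBM_WSSM repository), using the seven region labels from the paper:

```python
import numpy as np

# Region labels 0..6, in the order used by the paper's seven-class segmentation
REGIONS = ["LE", "IT", "CT", "CTmvp", "CTpan", "CTpnz", "CTne"]

def region_fractions(label_map, n_regions=7):
    """Fraction of slide area assigned to each tumour region by the segmenter."""
    counts = np.bincount(label_map.ravel(), minlength=n_regions)
    return counts / label_map.size

# Toy 10x10 segmentation map: left half cellular tumour (CT=2), right half IT (1)
seg = np.full((10, 10), 2)
seg[:, 5:] = 1
frac = region_fractions(seg)
# frac[1] == frac[2] == 0.5; all other region fractions are 0
```

A vector like `frac`, computed per patient, is the kind of spatial summary that can be regressed against expression signatures or survival times.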
Affiliation(s)
- Amin Zadeh Shirazi
- Centre for Cancer Biology, SA Pathology and University of South Australia, Adelaide, SA, Australia
- Computational Learning Systems Laboratory, UniSA STEM, University of South Australia, Mawson Lakes, SA, Australia
- Mark D McDonnell
- Computational Learning Systems Laboratory, UniSA STEM, University of South Australia, Mawson Lakes, SA, Australia
- Eric Fornaciari
- Department of Mathematics of Computation, University of California, Los Angeles (UCLA), CA, USA
- Kaitlin G Scheer
- Centre for Cancer Biology, SA Pathology and University of South Australia, Adelaide, SA, Australia
- Michael S Samuel
- Centre for Cancer Biology, SA Pathology and University of South Australia, Adelaide, SA, Australia
- Adelaide Medical School, University of Adelaide, Adelaide, SA, Australia
- Mahdi Yaghoobi
- Electrical and Computer Engineering Department, Department of Artificial Intelligence, Islamic Azad University, Mashhad Branch, Mashhad, Iran
- Rebecca J Ormsby
- Flinders Health and Medical Research Institute, College of Medicine & Public Health, Flinders University, Adelaide, SA, Australia
- Santosh Poonnoose
- Flinders Health and Medical Research Institute, College of Medicine & Public Health, Flinders University, Adelaide, SA, Australia
- Department of Neurosurgery, Flinders Medical Centre, Bedford Park, SA, Australia
- Damon J Tumes
- Centre for Cancer Biology, SA Pathology and University of South Australia, Adelaide, SA, Australia
- Guillermo A Gomez
- Centre for Cancer Biology, SA Pathology and University of South Australia, Adelaide, SA, Australia
|
15
|
Irmak E. Multi-Classification of Brain Tumor MRI Images Using Deep Convolutional Neural Network with Fully Optimized Framework. IRANIAN JOURNAL OF SCIENCE AND TECHNOLOGY, TRANSACTIONS OF ELECTRICAL ENGINEERING 2021; 45. [PMCID: PMC8061452 DOI: 10.1007/s40998-021-00426-9] [Citation(s) in RCA: 55] [Impact Index Per Article: 18.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/13/2023]
Abstract
Brain tumor diagnosis and classification still rely on histopathological analysis of biopsy specimens today. The current method is invasive, time-consuming, and prone to manual errors. These disadvantages show how essential it is to develop a fully automated deep-learning method for multi-classification of brain tumors. This paper aims to perform multi-classification of brain tumors for early diagnosis purposes using convolutional neural networks (CNNs). Three different CNN models are proposed for three different classification tasks. Brain tumor detection is achieved with 99.33% accuracy using the first CNN model. The second CNN model classifies brain tumors into five types, namely normal, glioma, meningioma, pituitary, and metastatic, with an accuracy of 92.66%. The third CNN model classifies brain tumors into three grades, Grade II, Grade III, and Grade IV, with an accuracy of 98.14%. All the important hyper-parameters of the CNN models are selected automatically using the grid search optimization algorithm. To the best of the author's knowledge, this is the first study on multi-classification of brain tumor MRI images using CNNs in which almost all hyper-parameters are tuned by the grid search optimizer. The proposed CNN models are compared with other popular state-of-the-art CNN models such as AlexNet, Inceptionv3, ResNet-50, VGG-16, and GoogleNet. Satisfactory classification results are obtained using large, publicly available clinical datasets. The proposed CNN models can be employed to assist physicians and radiologists in validating their initial screening for brain tumor multi-classification purposes.
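Grid search simply scores every combination of candidate hyper-parameter values and keeps the best. A minimal pure-Python sketch of the idea, with a mock validation score standing in for actual CNN training (the authors' parameter grid and code are not reproduced here):

```python
import itertools

def grid_search(param_grid, score_fn):
    """Exhaustively score every hyper-parameter combination; return the best."""
    keys = sorted(param_grid)
    best_score, best_params = float("-inf"), None
    for values in itertools.product(*(param_grid[k] for k in keys)):
        params = dict(zip(keys, values))
        s = score_fn(params)
        if s > best_score:
            best_score, best_params = s, params
    return best_params, best_score

# Hypothetical stand-in for "train a CNN, return validation accuracy"
def mock_validation_accuracy(p):
    return 0.9 - abs(p["lr"] - 0.001) * 50 - abs(p["dropout"] - 0.4)

grid = {"lr": [0.01, 0.001, 0.0001], "dropout": [0.2, 0.4, 0.6]}
best, score = grid_search(grid, mock_validation_accuracy)
# best == {"dropout": 0.4, "lr": 0.001}
```

The cost grows multiplicatively with each added hyper-parameter, which is why exhaustive grid search is feasible only for small grids; with real CNN training each call to `score_fn` is a full training run.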
Affiliation(s)
- Emrah Irmak
- Electrical-Electronics Engineering Department, Alanya Alaaddin Keykubat University, 07425 Alanya, Antalya, Turkey
|