1. Ahmed MM, Hossain MM, Islam MR, Ali MS, Nafi AAN, Ahmed MF, Ahmed KM, Miah MS, Rahman MM, Niu M, Islam MK. Brain tumor detection and classification in MRI using hybrid ViT and GRU model with explainable AI in Southern Bangladesh. Sci Rep 2024; 14:22797. PMID: 39354009; PMCID: PMC11445444; DOI: 10.1038/s41598-024-71893-3
Abstract
Brain tumors, which arise from uncontrolled cell growth in the central nervous system, present substantial challenges in medical diagnosis and treatment. Early and accurate detection is essential for effective intervention. This study aims to enhance the detection and classification of brain tumors in Magnetic Resonance Imaging (MRI) scans using an innovative framework combining Vision Transformer (ViT) and Gated Recurrent Unit (GRU) models. We utilized primary MRI data from Bangabandhu Sheikh Mujib Medical College Hospital (BSMMCH) in Faridpur, Bangladesh. Our hybrid ViT-GRU model extracts essential features via ViT and identifies relationships between these features using GRU, addressing class imbalance and outperforming existing diagnostic methods. We extensively processed the dataset, trained the model using several optimizers (SGD, Adam, AdamW), and evaluated it through rigorous 10-fold cross-validation. Additionally, we incorporated Explainable Artificial Intelligence (XAI) techniques (Attention Map, SHAP, and LIME) to enhance the interpretability of the model's predictions. For the primary dataset BrTMHD-2023, the ViT-GRU model achieved precision, recall, and F1-score of 97%. The highest accuracies obtained with the SGD, Adam, and AdamW optimizers were 81.66%, 96.56%, and 98.97%, respectively. Our model outperformed existing transfer learning models by 1.26%, as validated through comparative analysis and cross-validation. The proposed model also performs well on a public Kaggle brain tumor dataset, surpassing existing work on that dataset with 96.08% accuracy. The proposed ViT-GRU framework significantly improves the detection and classification of brain tumors in MRI scans, and the integration of XAI techniques enhances the model's transparency and reliability, fostering trust among clinicians and facilitating clinical application.
Future work will expand the dataset and apply findings to real-time diagnostic devices, advancing the field.
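The ViT-to-GRU hand-off this abstract describes can be sketched in a few lines of numpy. Everything here is illustrative, not the authors' implementation: the dimensions and weights are made up, and a single self-attention layer stands in for the full ViT encoder; only the pipeline shape (patch embedding → attention features → GRU over the feature sequence → classifier) reflects the described design.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def patchify(img, p):
    """Split an HxW image into flattened, non-overlapping pxp patches."""
    h, w = img.shape
    patches = img.reshape(h // p, p, w // p, p).transpose(0, 2, 1, 3)
    return patches.reshape(-1, p * p)                  # (num_patches, p*p)

def self_attention(x, wq, wk, wv):
    """One single-head self-attention layer over the patch-token sequence."""
    q, k, v = x @ wq, x @ wk, x @ wv
    return softmax(q @ k.T / np.sqrt(k.shape[-1])) @ v

def gru_scan(tokens, wz, uz, wr, ur, wh, uh):
    """Run a GRU cell across the token sequence; return the final hidden state."""
    h = np.zeros(uz.shape[0])
    for x in tokens:
        z = 1.0 / (1.0 + np.exp(-(x @ wz + h @ uz)))   # update gate
        r = 1.0 / (1.0 + np.exp(-(x @ wr + h @ ur)))   # reset gate
        cand = np.tanh(x @ wh + (r * h) @ uh)          # candidate state
        h = (1.0 - z) * h + z * cand
    return h

# Toy setup: a 32x32 "scan", 8x8 patches -> 16 tokens, embedding dim 32, 4 classes.
p, d, n_cls = 8, 32, 4
img = rng.standard_normal((32, 32))
w_embed = rng.standard_normal((p * p, d)) * 0.1
wq, wk, wv = (rng.standard_normal((d, d)) * 0.1 for _ in range(3))
gru_w = [rng.standard_normal((d, d)) * 0.1 for _ in range(6)]
w_out = rng.standard_normal((d, n_cls)) * 0.1

tokens = self_attention(patchify(img, p) @ w_embed, wq, wk, wv)  # ViT-style features
logits = gru_scan(tokens, *gru_w) @ w_out                        # GRU relates the features
print(logits.shape)                                              # (4,)
```

The point of the GRU stage is that the patch features arrive as an ordered sequence, so a recurrent cell can model dependencies between them before classification.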
Affiliation(s)
- Md Mahfuz Ahmed
  - Shaanxi Int'l Innovation Center for Transportation-Energy-Information Fusion and Sustainability, Chang'an University, Xi'an, 710064, China
  - Department of Biomedical Engineering, Islamic University, 7003, Kushtia, Bangladesh
  - Bio-Imaging Research Lab, Islamic University, 7003, Kushtia, Bangladesh
- Md Maruf Hossain
  - Department of Biomedical Engineering, Islamic University, 7003, Kushtia, Bangladesh
  - Bio-Imaging Research Lab, Islamic University, 7003, Kushtia, Bangladesh
- Md Rakibul Islam
  - Bio-Imaging Research Lab, Islamic University, 7003, Kushtia, Bangladesh
  - Department of Information and Communication Technology, Islamic University, 7003, Kushtia, Bangladesh
  - Department of Computer Science and Engineering, Northern University Bangladesh, 1230, Dhaka, Bangladesh
- Md Shahin Ali
  - Department of Biomedical Engineering, Islamic University, 7003, Kushtia, Bangladesh
  - Bio-Imaging Research Lab, Islamic University, 7003, Kushtia, Bangladesh
- Abdullah Al Noman Nafi
  - Department of Information and Communication Technology, Islamic University, 7003, Kushtia, Bangladesh
- Md Faisal Ahmed
  - Ship International Hospital, 1230, Uttara, Dhaka, Bangladesh
- Kazi Mowdud Ahmed
  - Department of Information and Communication Technology, Islamic University, 7003, Kushtia, Bangladesh
- Md Sipon Miah
  - Shaanxi Int'l Innovation Center for Transportation-Energy-Information Fusion and Sustainability, Chang'an University, Xi'an, 710064, China
  - Department of Information and Communication Technology, Islamic University, 7003, Kushtia, Bangladesh
  - Wireless Communications with Machine Learning (WCML) Laboratory, Islamic University, 7003, Kushtia, Bangladesh
- Md Mahbubur Rahman
  - Department of Information and Communication Technology, Islamic University, 7003, Kushtia, Bangladesh
- Mingbo Niu
  - Shaanxi Int'l Innovation Center for Transportation-Energy-Information Fusion and Sustainability, Chang'an University, Xi'an, 710064, China
- Md Khairul Islam
  - Department of Biomedical Engineering, Islamic University, 7003, Kushtia, Bangladesh
  - Bio-Imaging Research Lab, Islamic University, 7003, Kushtia, Bangladesh
2. Moldovanu S, Tăbăcaru G, Barbu M. Convolutional Neural Network-Machine Learning Model: Hybrid Model for Meningioma Tumour and Healthy Brain Classification. J Imaging 2024; 10:235. PMID: 39330455; PMCID: PMC11433632; DOI: 10.3390/jimaging10090235
Abstract
This paper presents a hybrid study of convolutional neural networks (CNNs), machine learning (ML), and transfer learning (TL) in the context of brain magnetic resonance imaging (MRI). The anatomy of the brain is very complex, and a tumour can form in any part of it inside the skull. MRI generates cross-sectional images in which radiologists can detect abnormalities; when a tumour is very small, however, it is undetectable to the human visual system, necessitating alternative analysis with AI tools. CNNs explore the structure of an image and produce features at the SoftMax fully connected (SFC) layer, where classification into the input classes is established. Two comparison studies for the classification of meningioma tumours and healthy brains are presented in this paper: (i) classifying MRI images using an original CNN and two pre-trained CNNs, DenseNet169 and EfficientNetV2B0; (ii) determining which CNN and ML combination yields the most accurate classification when SoftMax is replaced with three ML models; in this context, Random Forest (RF), K-Nearest Neighbors (KNN), and Support Vector Machine (SVM) were proposed. In a binary classification of tumours and healthy brains, the EfficientNetB0-SVM combination shows an accuracy of 99.5% on the test dataset. Generalisation of the results was assessed, and overfitting was prevented, using the bagging ensemble method.
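The core idea of replacing a CNN's SoftMax head with a classical ML classifier can be sketched as follows. This is a toy illustration on synthetic data: a random projection with ReLU stands in for the pre-trained CNN feature extractor (DenseNet169/EfficientNet in the paper), and a hinge-loss subgradient SVM stands in for the SVM head.

```python
import numpy as np

rng = np.random.default_rng(1)

def extract_features(x, w_proj):
    """Stand-in for a pre-trained CNN body: random projection + ReLU.
    In the paper this role is played by DenseNet/EfficientNet features."""
    return np.maximum(x @ w_proj, 0.0)

def train_linear_svm(x, y, epochs=300, lr=0.05, lam=1e-3):
    """Soft-margin linear SVM trained by hinge-loss subgradient descent,
    standing in for the SVM head that replaces the SoftMax layer."""
    w, b = np.zeros(x.shape[1]), 0.0
    for _ in range(epochs):
        mask = y * (x @ w + b) < 1            # margin violators
        grad_w, grad_b = lam * w, 0.0
        if mask.any():
            grad_w = grad_w - (y[mask, None] * x[mask]).mean(axis=0)
            grad_b = -y[mask].mean()
        w, b = w - lr * grad_w, b - lr * grad_b
    return w, b

# Toy two-class data standing in for tumour vs. healthy slices.
x_raw = np.vstack([rng.normal(3.0, 1.0, (40, 8)), rng.normal(-3.0, 1.0, (40, 8))])
y = np.array([1.0] * 40 + [-1.0] * 40)
feats = extract_features(x_raw, rng.standard_normal((8, 16)))   # "deep" features
w, b = train_linear_svm(feats, y)                               # SVM on those features
acc = (np.sign(feats @ w + b) == y).mean()
print(acc)
```

The design choice mirrored here is that the CNN is used only as a fixed feature extractor; the decision boundary is learned by the margin-based classifier on top.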
Affiliation(s)
- Simona Moldovanu
  - Department of Computer Science and Information Technology, Faculty of Automation, Computers, Electrical Engineering and Electronics, "Dunarea de Jos" University of Galati, 800146 Galati, Romania
  - The Modelling & Simulation Laboratory, "Dunarea de Jos" University of Galati, 47 Domneasca Str., 800008 Galati, Romania
- Gigi Tăbăcaru
  - Department of Automatic Control and Electrical Engineering, Faculty of Automation, Computers, Electrical Engineering and Electronics, "Dunarea de Jos" University of Galati, 800146 Galati, Romania
- Marian Barbu
  - Department of Automatic Control and Electrical Engineering, Faculty of Automation, Computers, Electrical Engineering and Electronics, "Dunarea de Jos" University of Galati, 800146 Galati, Romania
3. Aziz N, Minallah N, Frnda J, Sher M, Zeeshan M, Durrani AH. Precision meets generalization: Enhancing brain tumor classification via pretrained DenseNet with global average pooling and hyperparameter tuning. PLoS One 2024; 19:e0307825. PMID: 39241003; PMCID: PMC11379197; DOI: 10.1371/journal.pone.0307825
Abstract
Brain tumors pose significant global health concerns due to their high mortality rates and limited treatment options. These tumors, arising from abnormal cell growth within the brain, exhibit various sizes and shapes, making their manual detection from magnetic resonance imaging (MRI) scans a subjective and challenging task for healthcare professionals and necessitating automated solutions. This study investigates the potential of deep learning, specifically the DenseNet architecture, to automate brain tumor classification, aiming to enhance accuracy and generalizability for clinical applications. We utilized the Figshare brain tumor dataset, comprising 3,064 T1-weighted contrast-enhanced MRI images from 233 patients with three prevalent tumor types: meningioma, glioma, and pituitary tumor. Four pre-trained deep learning models (ResNet, EfficientNet, MobileNet, and DenseNet) were evaluated using transfer learning from ImageNet. DenseNet achieved the highest test set accuracy of 96%, outperforming ResNet (91%), EfficientNet (91%), and MobileNet (93%). Therefore, we focused on improving the performance of DenseNet, taking it as the base model. To enhance its generalizability, we implemented a fine-tuning approach with regularization techniques, including data augmentation, dropout, batch normalization, and global average pooling, coupled with hyperparameter optimization. This enhanced DenseNet model achieved an accuracy of 97.1%. Our findings demonstrate the effectiveness of DenseNet with transfer learning and fine-tuning for brain tumor classification, highlighting its potential to improve diagnostic accuracy and reliability in clinical settings.
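Global average pooling (GAP), one of the regularizers used above, can be shown in a few lines. This sketch uses a random array in place of real DenseNet output; the 1024-channel, 7x7 shape is a typical DenseNet final-block size, assumed here only for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-in for DenseNet output: a batch of feature maps (batch, channels, H, W).
feature_maps = rng.standard_normal((4, 1024, 7, 7))

# Global average pooling: collapse each channel's spatial grid to its mean,
# replacing a large flatten-then-dense head.
gap = feature_maps.mean(axis=(2, 3))           # (4, 1024)

n_classes = 3                                  # meningioma, glioma, pituitary
params_flatten = 1024 * 7 * 7 * n_classes      # dense head on flattened maps
params_gap = 1024 * n_classes                  # dense head after GAP
print(gap.shape, params_flatten // params_gap) # (4, 1024) 49
```

The 49x reduction in classifier parameters is why GAP acts as a structural regularizer: it removes the spatially tied weights that most easily overfit.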
Affiliation(s)
- Najam Aziz
  - Department of Computer Systems Engineering, University of Engineering and Technology (UET), Peshawar, Khyber Pakhtunkhwa, Pakistan
  - National Center for Big Data and Cloud Computing (NCBC), University of Engineering and Technology, Peshawar, Khyber Pakhtunkhwa, Pakistan
- Nasru Minallah
  - Department of Computer Systems Engineering, University of Engineering and Technology (UET), Peshawar, Khyber Pakhtunkhwa, Pakistan
  - National Center for Big Data and Cloud Computing (NCBC), University of Engineering and Technology, Peshawar, Khyber Pakhtunkhwa, Pakistan
- Jaroslav Frnda
  - Department of Quantitative Methods and Economic Informatics, Faculty of Operation and Economics of Transport and Communication, University of Zilina, Zilina, Slovakia
  - Department of Telecommunications, Faculty of Electrical Engineering and Computer Science, VSB - Technical University, Ostrava-Poruba, Czechia
- Madiha Sher
  - Department of Computer Systems Engineering, University of Engineering and Technology (UET), Peshawar, Khyber Pakhtunkhwa, Pakistan
- Muhammad Zeeshan
  - National Center for Big Data and Cloud Computing (NCBC), University of Engineering and Technology, Peshawar, Khyber Pakhtunkhwa, Pakistan
4. Bacon EJ, He D, Achi NAD, Wang L, Li H, Yao-Digba PDZ, Monkam P, Qi S. Neuroimage analysis using artificial intelligence approaches: a systematic review. Med Biol Eng Comput 2024; 62:2599-2627. PMID: 38664348; DOI: 10.1007/s11517-024-03097-w
Abstract
In the contemporary era, artificial intelligence (AI) has undergone a transformative evolution, exerting a profound influence on neuroimaging data analysis. This development has significantly elevated our comprehension of intricate brain functions. This study investigates the ramifications of employing AI techniques on neuroimaging data, with a specific objective to improve diagnostic capabilities and contribute to the overall progress of the field. A systematic search was conducted in prominent scientific databases, including PubMed, IEEE Xplore, and Scopus, meticulously curating 456 relevant articles on AI-driven neuroimaging analysis spanning from 2013 to 2023. To maintain rigor and credibility, stringent inclusion criteria, quality assessments, and precise data extraction protocols were consistently enforced throughout this review. Following a rigorous selection process, 104 studies were selected for review, focusing on diverse neuroimaging modalities with an emphasis on mental and neurological disorders. Among these, 19.2% addressed mental illness, and 80.7% focused on neurological disorders. It is found that the prevailing clinical tasks are disease classification (58.7%) and lesion segmentation (28.9%), whereas image reconstruction constituted 7.3%, and image regression and prediction tasks represented 9.6%. AI-driven neuroimaging analysis holds tremendous potential, transforming both research and clinical applications. Machine learning and deep learning algorithms outperform traditional methods, reshaping the field significantly.
Affiliation(s)
- Eric Jacob Bacon
  - College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
  - Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, China
- Dianning He
  - College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Lanbo Wang
  - Department of Radiology, Shengjing Hospital of China Medical University, Shenyang, China
- Han Li
  - Department of Neurosurgery, Shengjing Hospital of China Medical University, Shenyang, China
- Patrice Monkam
  - College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
  - Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, China
- Shouliang Qi
  - College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
  - Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, China
5. Abbasi S, Lan H, Choupan J, Sheikh-Bahaei N, Pandey G, Varghese B. Deep learning for the harmonization of structural MRI scans: a survey. Biomed Eng Online 2024; 23:90. PMID: 39217355; PMCID: PMC11365220; DOI: 10.1186/s12938-024-01280-6
Abstract
Medical imaging datasets for research are frequently collected from multiple imaging centers using different scanners, protocols, and settings. These variations affect data consistency and compatibility across different sources. Image harmonization is a critical step to mitigate the effects of factors like inherent differences between various vendors, hardware upgrades, protocol changes, and scanner calibration drift, as well as to ensure consistent data for medical image processing techniques. Given the critical importance and widespread relevance of this issue, a vast array of image harmonization methodologies have emerged, with deep learning-based approaches driving substantial advancements in recent times. The goal of this review paper is to examine the latest deep learning techniques employed for image harmonization by analyzing cutting-edge architectural approaches in the field of medical image harmonization, evaluating both their strengths and limitations. This paper begins by providing a comprehensive fundamental overview of image harmonization strategies, covering three critical aspects: established imaging datasets, commonly used evaluation metrics, and characteristics of different scanners. Subsequently, this paper analyzes recent structural MRI (Magnetic Resonance Imaging) harmonization techniques based on network architecture, network learning algorithm, network supervision strategy, and network output. The underlying architectures include U-Net, Generative Adversarial Networks (GANs), Variational Autoencoders (VAEs), flow-based generative models, transformer-based approaches, as well as custom-designed network architectures. This paper investigates the effectiveness of Disentangled Representation Learning (DRL) as a pivotal learning algorithm in harmonization. Lastly, the review highlights the primary limitations in harmonization techniques, specifically the lack of comprehensive quantitative comparisons across different methods. 
The overall aim of this review is to serve as a guide for researchers and practitioners to select appropriate architectures based on their specific conditions and requirements. It also aims to foster discussions around ongoing challenges in the field and shed light on promising future research directions with the potential for significant advancements.
Affiliation(s)
- Soolmaz Abbasi
  - Department of Computer Engineering, Yazd University, Yazd, Iran
- Haoyu Lan
  - Department of Neurology, University of Southern California, Los Angeles, CA, USA
- Jeiran Choupan
  - Department of Neurology, University of Southern California, Los Angeles, CA, USA
- Nasim Sheikh-Bahaei
  - Department of Radiology, University of Southern California, Los Angeles, CA, USA
- Gaurav Pandey
  - Department of Genetics and Genomic Sciences, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Bino Varghese
  - Department of Radiology, University of Southern California, Los Angeles, CA, USA
6. Mandal S, Chakraborty S, Tariq MA, Ali K, Elavia Z, Khan MK, Garcia DB, Ali S, Al Hooti J, Kumar DV. Artificial Intelligence and Deep Learning in Revolutionizing Brain Tumor Diagnosis and Treatment: A Narrative Review. Cureus 2024; 16:e66157. PMID: 39233936; PMCID: PMC11372433; DOI: 10.7759/cureus.66157
Abstract
The emergence of artificial intelligence (AI) in the medical field holds promise for improving medical management, particularly in personalized strategies for the diagnosis and treatment of brain tumors. However, integrating AI into clinical practice has proven to be a challenge. Deep learning (DL) is well suited to extracting relevant information from the growing volume of medical histories and imaging records, shortening diagnosis times that would otherwise overwhelm manual methods. In addition, DL aids in automated tumor segmentation, classification, and diagnosis. DL models such as the Brain Tumor Classification Model and Inception-ResNet V2, or hybrid techniques that combine DL networks with support vector machines and k-nearest neighbors, identify tumor phenotypes and brain metastases, allowing real-time decision-making and enhancing preoperative planning. AI algorithms and DL development facilitate radiological diagnostics such as computed tomography, positron emission tomography, and magnetic resonance imaging (MRI) by integrating two-dimensional and three-dimensional MRI using DenseNet and 3D convolutional neural network architectures, which enable precise tumor delineation. DL offers benefits in neuro-interventional procedures, and the shift toward computer-assisted interventions acknowledges the need for more accurate and efficient image analysis methods. Further research is needed to realize the potential impact of DL in improving these outcomes.
Affiliation(s)
- Shobha Mandal
  - Internal Medicine, Guthrie Robert Packer Hospital, Sayre, USA
- Subhadeep Chakraborty
  - Electronics and Communication, Maulana Abul Kalam Azad University of Technology, West Bengal, IND
- Kamran Ali
  - Internal Medicine, United Medical and Dental College, Karachi, PAK
- Zenia Elavia
  - Medical School, Dr. D. Y. Patil Medical College, Hospital & Research Centre, Pune, IND
- Misbah Kamal Khan
  - Internal Medicine, Peoples University of Medical and Health Sciences, Nawabshah, PAK
- Sofia Ali
  - Medical School, Peninsula Medical School, Plymouth, GBR
- Divyanshi Vijay Kumar
  - Internal Medicine, Smt. Nathiba Hargovandas Lakhmichand Municipal Medical College, Ahmedabad, IND
7. Toosi A, Shiri I, Zaidi H, Rahmim A. Segmentation-Free Outcome Prediction from Head and Neck Cancer PET/CT Images: Deep Learning-Based Feature Extraction from Multi-Angle Maximum Intensity Projections (MA-MIPs). Cancers (Basel) 2024; 16:2538. PMID: 39061178; PMCID: PMC11274485; DOI: 10.3390/cancers16142538
Abstract
We introduce an innovative, simple, effective segmentation-free approach for survival analysis of head and neck cancer (HNC) patients from PET/CT images. By harnessing deep learning-based feature extraction techniques and multi-angle maximum intensity projections (MA-MIPs) applied to Fluorodeoxyglucose Positron Emission Tomography (FDG-PET) images, our proposed method eliminates the need for manual segmentations of regions-of-interest (ROIs) such as primary tumors and involved lymph nodes. Instead, a state-of-the-art object detection model is trained on the CT images to automatically crop the head and neck anatomical area, rather than only the lesions or involved lymph nodes on the PET volumes. A pre-trained deep convolutional neural network backbone is then utilized to extract deep features from MA-MIPs obtained from 72 multi-angle axial rotations of the cropped PET volumes. These deep features extracted from multiple projection views of the PET volumes are then aggregated and fused, and employed to perform recurrence-free survival analysis on a cohort of 489 HNC patients. The proposed approach outperforms the best performing method on the target dataset for the task of recurrence-free survival analysis. By circumventing the manual delineation of the malignancies on the FDG PET/CT images, our approach eliminates the dependency on subjective interpretations and highly enhances the reproducibility of the proposed survival analysis method. The code for this work is publicly released.
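The MA-MIP construction described above can be sketched with numpy alone. Note the simplification: `np.rot90` limits this toy to 90-degree steps about the axial axis, whereas the paper uses 72 interpolated angles; the volume and "lesion" voxel are synthetic.

```python
import numpy as np

def axial_mips(volume, n_angles=4):
    """Maximum intensity projections of a 3D volume (z, y, x) at several
    rotations about the axial (z) axis. Each rotated volume is projected
    along y by taking the per-ray maximum."""
    mips = []
    for k in range(n_angles):
        rotated = np.rot90(volume, k=k, axes=(1, 2))   # rotate each axial slice
        mips.append(rotated.max(axis=1))               # project along y
    return np.stack(mips)                              # (n_angles, z, x)

volume = np.zeros((5, 8, 8))
volume[2, 3, 6] = 9.0                                  # a single bright "lesion" voxel
mips = axial_mips(volume)
print(mips.shape, mips[0].max())                       # (4, 5, 8) 9.0
```

In the full method, each 2D projection is fed to a pre-trained CNN backbone and the per-angle feature vectors are aggregated before the survival model, so no 3D segmentation is ever needed.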
Affiliation(s)
- Amirhosein Toosi
  - Department of Radiology, University of British Columbia, Vancouver, BC V5Z 1M9, Canada
  - Department of Integrative Oncology, BC Cancer Research Institute, Vancouver, BC V5Z 1L3, Canada
- Isaac Shiri
  - Department of Cardiology, University Hospital Bern, CH-3010 Bern, Switzerland
  - Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211 Geneva, Switzerland
- Habib Zaidi
  - Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211 Geneva, Switzerland
- Arman Rahmim
  - Department of Radiology, University of British Columbia, Vancouver, BC V5Z 1M9, Canada
  - Department of Integrative Oncology, BC Cancer Research Institute, Vancouver, BC V5Z 1L3, Canada
  - Department of Physics & Astronomy, University of British Columbia, Vancouver, BC V6T 1Z1, Canada
  - Department of Biomedical Engineering, University of British Columbia, Vancouver, BC V6T 1Z3, Canada
8. Abdusalomov A, Rakhimov M, Karimberdiyev J, Belalova G, Cho YI. Enhancing Automated Brain Tumor Detection Accuracy Using Artificial Intelligence Approaches for Healthcare Environments. Bioengineering (Basel) 2024; 11:627. PMID: 38927863; PMCID: PMC11201188; DOI: 10.3390/bioengineering11060627
Abstract
Medical imaging and deep learning models are essential to the early identification and diagnosis of brain cancers, facilitating timely intervention and improving patient outcomes. This research paper investigates the integration of YOLOv5, a state-of-the-art object detection framework, with non-local neural networks (NLNNs) to improve the robustness and accuracy of brain tumor detection. This study begins by curating a comprehensive dataset comprising brain MRI scans from various sources. To facilitate effective fusion, the YOLOv5, NLNNs, K-means+, and spatial pyramid pooling fast+ (SPPF+) modules are integrated within a unified framework. The brain tumor dataset is used to refine the YOLOv5 model through the application of transfer learning techniques, adapting it specifically to the task of tumor detection. The results indicate that the combination of YOLOv5 and the other modules yields enhanced detection capabilities compared with YOLOv5 alone, with recall rates of 86% and 83%, respectively. Moreover, the research explores the interpretability aspect of the combined model. By visualizing the attention maps generated by the NLNNs module, the regions of interest associated with tumor presence are highlighted, aiding in the understanding and validation of the methodology's decision-making procedure. Additionally, the impact of hyperparameters, such as NLNNs kernel size, fusion strategy, and training data augmentation, is investigated to optimize the performance of the combined model.
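The non-local (NLNN) module referenced above is essentially self-attention over spatial positions of a feature map. A minimal numpy sketch of the embedded-Gaussian variant (Wang et al., 2018) follows; the shapes and weights are toy values, and zero-initializing the output projection (a standard trick) makes the block start as an identity mapping, which the test exploits.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def non_local_block(x, w_theta, w_phi, w_g, w_z):
    """Embedded-Gaussian non-local block over a (C, H, W) feature map:
    every spatial position attends to every other, then a residual add."""
    c, h, w = x.shape
    flat = x.reshape(c, h * w).T                # (HW, C): one row per position
    theta, phi, g = flat @ w_theta, flat @ w_phi, flat @ w_g
    attn = softmax(theta @ phi.T)               # (HW, HW) pairwise affinities
    y = (attn @ g) @ w_z                        # aggregate, project back to C
    return x + y.T.reshape(c, h, w)             # residual connection

rng = np.random.default_rng(3)
c, h, w = 8, 6, 6
x = rng.standard_normal((c, h, w))
inner = 4
w_theta, w_phi, w_g = (rng.standard_normal((c, inner)) * 0.1 for _ in range(3))
w_z = np.zeros((inner, c))                      # zero init => block starts as identity
out = non_local_block(x, w_theta, w_phi, w_g, w_z)
print(out.shape, np.allclose(out, x))           # (8, 6, 6) True
```

The (HW, HW) affinity matrix is also what gets visualized as an attention map, which is how the paper's interpretability analysis highlights tumor regions.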
Affiliation(s)
- Akmalbek Abdusalomov
  - Department of Computer Engineering, Gachon University, Sujeong-gu, Seongnam-si 13120, Gyeonggi-do, Republic of Korea
- Mekhriddin Rakhimov
  - Department of Artificial Intelligence, Tashkent University of Information Technologies Named after Muhammad Al-Khwarizmi, Tashkent 100200, Uzbekistan
- Jakhongir Karimberdiyev
  - Department of Artificial Intelligence, Tashkent University of Information Technologies Named after Muhammad Al-Khwarizmi, Tashkent 100200, Uzbekistan
- Guzal Belalova
  - Department of Information Systems and Technologies, Tashkent State University of Economics, Tashkent 100066, Uzbekistan
- Young Im Cho
  - Department of Computer Engineering, Gachon University, Sujeong-gu, Seongnam-si 13120, Gyeonggi-do, Republic of Korea
  - Department of Information Systems and Technologies, Tashkent State University of Economics, Tashkent 100066, Uzbekistan
9. Dheepak G, J. AC, Vaishali D. Brain tumor classification: a novel approach integrating GLCM, LBP and composite features. Front Oncol 2024; 13:1248452. PMID: 38352298; PMCID: PMC10861642; DOI: 10.3389/fonc.2023.1248452
Abstract
Identifying and classifying tumors are critical to patient care and treatment planning within the medical domain. Nevertheless, the conventional approach of manually examining tumor images is lengthy and subjective. In response to this challenge, a novel method is proposed that integrates Gray-Level Co-Occurrence Matrix (GLCM) features and Local Binary Pattern (LBP) features to conduct a quantitative analysis of tumor images (glioma, meningioma, pituitary tumor). The key contribution of this study is the development of interaction features, obtained through the outer product of the GLCM and LBP feature vectors; this greatly enhances the discriminative capability of the extracted features. Furthermore, the methodology incorporates aggregated, statistical, and non-linear features in addition to the interaction features. These are computed from the GLCM feature vectors, encompassing a range of statistical characteristics and effectively enriching the feature space. The effectiveness of this methodology has been demonstrated on tumor image datasets. Integrating GLCM and LBP features offers a comprehensive representation of texture characteristics, improving the precision of tumor detection and classification, while the introduced interaction features provide additional discriminative capability. When used with a linear support vector machine classifier, the approach achieves an accuracy of 99.84%, highlighting its efficacy and promising prospects.
The proposed improvement in feature extraction has the potential to significantly enhance the precision of medical image processing and to help clinicians provide more accurate diagnoses and treatments for brain tumors in the future.
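The outer-product interaction features can be illustrated end to end in a short numpy sketch. The GLCM here uses a single (0, 1) offset and 8 gray levels, and the LBP is the basic 8-neighbour code; these are simplified stand-ins for the full texture descriptors, applied to a random image rather than MRI data.

```python
import numpy as np

def glcm_features(img, levels=8):
    """Normalized gray-level co-occurrence matrix for the (0, 1) offset,
    flattened into a feature vector."""
    q = (img * levels / (img.max() + 1e-9)).astype(int).clip(0, levels - 1)
    glcm = np.zeros((levels, levels))
    for a, b in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        glcm[a, b] += 1                       # count horizontal neighbour pairs
    return (glcm / glcm.sum()).ravel()

def lbp_features(img):
    """Histogram of basic 8-neighbour local binary pattern codes."""
    c = img[1:-1, 1:-1]                       # centers (border pixels skipped)
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(c, dtype=int)
    for bit, (dy, dx) in enumerate(shifts):
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        codes += (nb >= c).astype(int) << bit # one bit per neighbour comparison
    hist = np.bincount(codes.ravel(), minlength=256)
    return hist / hist.sum()

rng = np.random.default_rng(4)
img = rng.random((16, 16))
g, l = glcm_features(img), lbp_features(img)
interaction = np.outer(g, l).ravel()          # the paper's interaction features
print(g.size, l.size, interaction.size)       # 64 256 16384
```

The outer product pairs every GLCM entry with every LBP bin, so the interaction vector encodes joint co-occurrence/pattern statistics rather than the two descriptors independently.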
Affiliation(s)
- G. Dheepak
  - Department of Electronics & Communication Engineering, Faculty of Engineering and Technology, SRM Institute of Science and Technology, Vadapalani Campus, Chennai, TN, India
10. Khosravi P, Mohammadi S, Zahiri F, Khodarahmi M, Zahiri J. AI-Enhanced Detection of Clinically Relevant Structural and Functional Anomalies in MRI: Traversing the Landscape of Conventional to Explainable Approaches. J Magn Reson Imaging 2024. PMID: 38243677; DOI: 10.1002/jmri.29247
Abstract
Anomaly detection in medical imaging, particularly within the realm of magnetic resonance imaging (MRI), stands as a vital area of research with far-reaching implications across various medical fields. This review meticulously examines the integration of artificial intelligence (AI) in anomaly detection for MR images, spotlighting its transformative impact on medical diagnostics. We delve into the forefront of AI applications in MRI, exploring advanced machine learning (ML) and deep learning (DL) methodologies that are pivotal in enhancing the precision of diagnostic processes. The review provides a detailed analysis of preprocessing, feature extraction, classification, and segmentation techniques, alongside a comprehensive evaluation of commonly used metrics. Further, this paper explores the latest developments in ensemble methods and explainable AI, offering insights into future directions and potential breakthroughs. This review synthesizes current insights, offering a valuable guide for researchers, clinicians, and medical imaging experts. It highlights AI's crucial role in improving the precision and speed of detecting key structural and functional irregularities in MRI. Our exploration of innovative techniques and trends furthers MRI technology development, aiming to refine diagnostics, tailor treatments, and elevate patient care outcomes. LEVEL OF EVIDENCE: 5. TECHNICAL EFFICACY: Stage 1.
Affiliation(s)
- Pegah Khosravi
- Department of Biological Sciences, New York City College of Technology, CUNY, New York City, New York, USA
- The CUNY Graduate Center, City University of New York, New York City, New York, USA
- Saber Mohammadi
- Department of Biological Sciences, New York City College of Technology, CUNY, New York City, New York, USA
- Department of Biophysics, Tarbiat Modares University, Tehran, Iran
- Fatemeh Zahiri
- Department of Cell and Molecular Sciences, Kharazmi University, Tehran, Iran
- Javad Zahiri
- Department of Neuroscience, University of California San Diego, San Diego, California, USA
11
Eida S, Fukuda M, Katayama I, Takagi Y, Sasaki M, Mori H, Kawakami M, Nishino T, Ariji Y, Sumi M. Metastatic Lymph Node Detection on Ultrasound Images Using YOLOv7 in Patients with Head and Neck Squamous Cell Carcinoma. Cancers (Basel) 2024; 16:274. [PMID: 38254765 PMCID: PMC10813890 DOI: 10.3390/cancers16020274] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Received: 11/23/2023] [Revised: 12/28/2023] [Accepted: 01/04/2024] [Indexed: 01/24/2024]
Abstract
Ultrasonography is the preferred modality for detailed evaluation of enlarged lymph nodes (LNs) identified on computed tomography and/or magnetic resonance imaging, owing to its high spatial resolution. However, the diagnostic performance of ultrasonography depends on the examiner's expertise. To support the ultrasonographic diagnosis, we developed YOLOv7-based deep learning models for metastatic LN detection on ultrasonography and compared their detection performance with that of highly experienced radiologists and less experienced residents. We enrolled 462 B- and D-mode ultrasound images of 261 metastatic and 279 non-metastatic histopathologically confirmed LNs from 126 patients with head and neck squamous cell carcinoma. The YOLOv7-based B- and D-mode models were optimized using B- and D-mode training and validation images and their detection performance for metastatic LNs was evaluated using B- and D-mode testing images, respectively. The D-mode model's performance was comparable to that of radiologists and superior to that of residents' reading of D-mode images, whereas the B-mode model's performance was higher than that of residents but lower than that of radiologists on B-mode images. Thus, YOLOv7-based B- and D-mode models can assist less experienced residents in ultrasonographic diagnoses. The D-mode model could raise the diagnostic performance of residents to the same level as experienced radiologists.
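Detection performance for models like the YOLOv7 networks above is conventionally scored by matching predicted bounding boxes to ground-truth boxes via intersection over union (IoU), counting a prediction as a true positive when IoU meets a threshold (0.5 is the common default). A minimal sketch of that matching criterion, with purely illustrative box coordinates (not taken from the study):

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    # Intersection rectangle (empty if the boxes do not overlap).
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# Hypothetical ground-truth lymph-node box and a detector prediction.
gt = (30, 40, 90, 100)
pred = (35, 45, 95, 105)
score = iou(gt, pred)
print(round(score, 3))          # → 0.725
is_true_positive = score >= 0.5  # standard detection-matching threshold
```

With multiple predictions per image, each ground-truth box is typically matched to at most one prediction (greedily by confidence); unmatched predictions count as false positives and unmatched ground truths as false negatives.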
Affiliation(s)
- Sato Eida
- Department of Radiology and Biomedical Informatics, Nagasaki University Graduate School of Biomedical Sciences, 1-7-1 Sakamoto, Nagasaki 852-8588, Japan; (S.E.); (I.K.); (Y.T.); (M.S.); (H.M.); (M.K.); (T.N.)
- Motoki Fukuda
- Department of Oral Radiology, Osaka Dental University, 1-5-17 Otemae, Chuo-ku, Osaka 540-0008, Japan; (M.F.); (Y.A.)
- Ikuo Katayama
- Department of Radiology and Biomedical Informatics, Nagasaki University Graduate School of Biomedical Sciences, 1-7-1 Sakamoto, Nagasaki 852-8588, Japan; (S.E.); (I.K.); (Y.T.); (M.S.); (H.M.); (M.K.); (T.N.)
- Yukinori Takagi
- Department of Radiology and Biomedical Informatics, Nagasaki University Graduate School of Biomedical Sciences, 1-7-1 Sakamoto, Nagasaki 852-8588, Japan; (S.E.); (I.K.); (Y.T.); (M.S.); (H.M.); (M.K.); (T.N.)
- Miho Sasaki
- Department of Radiology and Biomedical Informatics, Nagasaki University Graduate School of Biomedical Sciences, 1-7-1 Sakamoto, Nagasaki 852-8588, Japan; (S.E.); (I.K.); (Y.T.); (M.S.); (H.M.); (M.K.); (T.N.)
- Hiroki Mori
- Department of Radiology and Biomedical Informatics, Nagasaki University Graduate School of Biomedical Sciences, 1-7-1 Sakamoto, Nagasaki 852-8588, Japan; (S.E.); (I.K.); (Y.T.); (M.S.); (H.M.); (M.K.); (T.N.)
- Maki Kawakami
- Department of Radiology and Biomedical Informatics, Nagasaki University Graduate School of Biomedical Sciences, 1-7-1 Sakamoto, Nagasaki 852-8588, Japan; (S.E.); (I.K.); (Y.T.); (M.S.); (H.M.); (M.K.); (T.N.)
- Tatsuyoshi Nishino
- Department of Radiology and Biomedical Informatics, Nagasaki University Graduate School of Biomedical Sciences, 1-7-1 Sakamoto, Nagasaki 852-8588, Japan; (S.E.); (I.K.); (Y.T.); (M.S.); (H.M.); (M.K.); (T.N.)
- Yoshiko Ariji
- Department of Oral Radiology, Osaka Dental University, 1-5-17 Otemae, Chuo-ku, Osaka 540-0008, Japan; (M.F.); (Y.A.)
- Misa Sumi
- Department of Radiology and Biomedical Informatics, Nagasaki University Graduate School of Biomedical Sciences, 1-7-1 Sakamoto, Nagasaki 852-8588, Japan; (S.E.); (I.K.); (Y.T.); (M.S.); (H.M.); (M.K.); (T.N.)
12
Mohammadi S, Ghaderi S, Ghaderi K, Mohammadi M, Pourasl MH. Automated segmentation of meningioma from contrast-enhanced T1-weighted MRI images in a case series using a marker-controlled watershed segmentation and fuzzy C-means clustering machine learning algorithm. Int J Surg Case Rep 2023; 111:108818. [PMID: 37716060 PMCID: PMC10514425 DOI: 10.1016/j.ijscr.2023.108818] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 08/22/2023] [Revised: 09/07/2023] [Accepted: 09/09/2023] [Indexed: 09/18/2023]
Abstract
INTRODUCTION AND IMPORTANCE: Accurate segmentation of meningiomas from contrast-enhanced T1-weighted (CE T1-w) magnetic resonance imaging (MRI) is crucial for diagnosis and treatment planning. Manual segmentation is time-consuming and prone to variability. The aim was to evaluate an automated segmentation approach for meningiomas using marker-controlled watershed segmentation (MCWS) and fuzzy c-means (FCM) algorithms. CASE PRESENTATION AND METHODS: CE T1-w MRI scans of 3 female patients (aged 59, 44, and 67 years) with right frontal meningiomas were analyzed. Images were converted to grayscale and preprocessed with Otsu's thresholding and FCM clustering, after which MCWS segmentation was performed. Segmentation accuracy was assessed by comparing the automated segmentations to manual delineations. CLINICAL DISCUSSION: The approach successfully segmented the meningiomas in all cases. Mean sensitivity was 0.8822, indicating accurate identification of the tumors, and the mean Dice similarity coefficient between the Otsu's and FCM1 segmentations was 0.6599, suggesting good overlap between the two methods. CONCLUSION: The MCWS and FCM approach enables accurate automated segmentation of meningiomas from CE T1-w MRI. With further validation on larger datasets, it could provide an efficient tool to assist in delineating meningioma boundaries for clinical management.
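The pipeline this entry describes (Otsu's thresholding, fuzzy c-means clustering, and a Dice comparison between the two segmentations) can be sketched in pure NumPy. This is a minimal illustration on a synthetic bright "lesion" image, not the authors' implementation: the marker-controlled watershed step is omitted, and all image parameters here are invented for the demo.

```python
import numpy as np

def otsu_threshold(img):
    """Exhaustive search for the threshold maximizing between-class variance."""
    hist, edges = np.histogram(img.ravel(), bins=256)
    mids = (edges[:-1] + edges[1:]) / 2.0
    w = np.cumsum(hist).astype(float)          # cumulative class-0 weight
    s = np.cumsum(hist * mids)                 # cumulative class-0 intensity sum
    total, grand = w[-1], s[-1]
    best_t, best_var = mids[0], -1.0
    for i in range(1, 256):
        w0, w1 = w[i - 1], total - w[i - 1]
        if w0 == 0 or w1 == 0:
            continue
        var = w0 * w1 * (s[i - 1] / w0 - (grand - s[i - 1]) / w1) ** 2
        if var > best_var:
            best_var, best_t = var, mids[i]
    return best_t

def fuzzy_cmeans(x, c=2, m=2.0, iters=50):
    """Plain fuzzy c-means on 1-D intensities; returns centers and memberships."""
    centers = np.linspace(x.min(), x.max(), c)  # deterministic spread-out init
    for _ in range(iters):
        d = np.abs(x[None, :] - centers[:, None]) + 1e-9            # (c, n)
        ratio = (d[:, None, :] / d[None, :, :]) ** (2.0 / (m - 1.0))
        u = 1.0 / ratio.sum(axis=1)                                 # memberships
        centers = (u ** m) @ x / (u ** m).sum(axis=1)               # center update
    return centers, u

def dice(a, b):
    """Dice similarity coefficient between two boolean masks."""
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

# Synthetic stand-in for a CE T1-w slice: bright circular "lesion" on noisy background.
rng = np.random.default_rng(1)
img = rng.normal(60.0, 10.0, (64, 64))
yy, xx = np.mgrid[:64, :64]
img[(yy - 32) ** 2 + (xx - 32) ** 2 < 12 ** 2] += 120.0

otsu_mask = img > otsu_threshold(img)
centers, u = fuzzy_cmeans(img.ravel())
fcm_mask = (np.argmax(u, axis=0) == np.argmax(centers)).reshape(img.shape)
score = dice(otsu_mask, fcm_mask)
```

On this well-separated synthetic image the two masks agree almost perfectly, so the Dice score is near 1; the 0.6599 reported in the study reflects the much harder contrast conditions of real CE T1-w MRI.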
Affiliation(s)
- Sana Mohammadi
- Department of Medical Sciences, School of Medicine, Iran University of Medical Sciences, Tehran, Iran
- Sadegh Ghaderi
- Department of Neuroscience and Addiction Studies, School of Advanced Technologies in Medicine, Tehran University of Medical Sciences, Tehran, Iran.
- Kayvan Ghaderi
- Department of Information Technology and Computer Engineering, Faculty of Engineering, University of Kurdistan, Sanandaj 66177-15175, Iran
- Mahdi Mohammadi
- Department of Medical Physics and Biomedical Engineering, School of Medicine, Tehran University of Medical Sciences, Tehran, Iran
13
Tagmatova Z, Abdusalomov A, Nasimov R, Nasimova N, Dogru AH, Cho YI. New Approach for Generating Synthetic Medical Data to Predict Type 2 Diabetes. Bioengineering (Basel) 2023; 10:1031. [PMID: 37760133 PMCID: PMC10525473 DOI: 10.3390/bioengineering10091031] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 07/31/2023] [Revised: 08/28/2023] [Accepted: 08/30/2023] [Indexed: 09/29/2023]
Abstract
The lack of medical databases is currently the main barrier to the development of artificial intelligence-based algorithms in medicine. This issue can be partially resolved by developing reliable, high-quality synthetic databases. In this study, an easy and reliable method for developing a synthetic medical database from statistical data alone is proposed. The method perturbs a primary database, itself built from statistical data, with a dedicated shuffle algorithm until a satisfactory result is achieved, and the resulting dataset is evaluated using a neural network. Using the proposed method, a database was developed to predict the risk of developing type 2 diabetes 5 years in advance. The dataset comprised data from 172,290 patients, and prediction accuracy reached 94.45% when a neural network was trained on it.
Affiliation(s)
- Zarnigor Tagmatova
- Department of Computer Engineering, Gachon University, Sujeong-Gu, Seongnam-Si 461-701, Republic of Korea
- Akmalbek Abdusalomov
- Department of Computer Engineering, Gachon University, Sujeong-Gu, Seongnam-Si 461-701, Republic of Korea
- Rashid Nasimov
- Department of Artificial Intelligence, Tashkent State University of Economics, Tashkent 100066, Uzbekistan
- Nigorakhon Nasimova
- Department of Artificial Intelligence, Tashkent State University of Economics, Tashkent 100066, Uzbekistan
- Ali Hikmet Dogru
- Department of Computer Science, University of Texas at San Antonio, San Antonio, TX 78249-0667, USA;
- Young-Im Cho
- Department of Computer Engineering, Gachon University, Sujeong-Gu, Seongnam-Si 461-701, Republic of Korea