1
Aziz N, Minallah N, Frnda J, Sher M, Zeeshan M, Durrani AH. Precision meets generalization: Enhancing brain tumor classification via pretrained DenseNet with global average pooling and hyperparameter tuning. PLoS One 2024; 19:e0307825. [PMID: 39241003; PMCID: PMC11379197; DOI: 10.1371/journal.pone.0307825]
Abstract
Brain tumors pose significant global health concerns due to their high mortality rates and limited treatment options. These tumors, arising from abnormal cell growth within the brain, exhibit various sizes and shapes, making their manual detection from magnetic resonance imaging (MRI) scans a subjective and challenging task for healthcare professionals, hence necessitating automated solutions. This study investigates the potential of deep learning, specifically the DenseNet architecture, to automate brain tumor classification, aiming to enhance accuracy and generalizability for clinical applications. We utilized the Figshare brain tumor dataset, comprising 3,064 T1-weighted contrast-enhanced MRI images from 233 patients with three prevalent tumor types: meningioma, glioma, and pituitary tumor. Four pre-trained deep learning models (ResNet, EfficientNet, MobileNet, and DenseNet) were evaluated using transfer learning from ImageNet. DenseNet achieved the highest test set accuracy of 96%, outperforming ResNet (91%), EfficientNet (91%), and MobileNet (93%). We therefore adopted DenseNet as the base model and focused on improving its performance. To enhance the generalizability of the base DenseNet model, we implemented a fine-tuning approach with regularization techniques, including data augmentation, dropout, batch normalization, and global average pooling, coupled with hyperparameter optimization. This enhanced DenseNet model achieved an accuracy of 97.1%. Our findings demonstrate the effectiveness of DenseNet with transfer learning and fine-tuning for brain tumor classification, highlighting its potential to improve diagnostic accuracy and reliability in clinical settings.
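The transfer-learning recipe described above ends in a global average pooling layer followed by a dense softmax classifier. A minimal sketch of that classification head, with illustrative shapes and randomly initialized weights rather than the paper's actual trained parameters:

```python
import numpy as np

def global_average_pool(feature_map):
    # Collapse an (H, W, C) feature map to a (C,) vector by spatial averaging.
    return feature_map.mean(axis=(0, 1))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def classify(feature_map, weights, bias):
    pooled = global_average_pool(feature_map)   # (C,)
    logits = pooled @ weights + bias            # (n_classes,)
    return softmax(logits)

rng = np.random.default_rng(0)
fmap = rng.standard_normal((7, 7, 1024))        # e.g. a DenseNet-121 final feature map
W = 0.01 * rng.standard_normal((1024, 3))       # 3 classes: meningioma, glioma, pituitary
b = np.zeros(3)
probs = classify(fmap, W, b)                    # class probabilities, sums to 1
```

Global average pooling removes the flatten-plus-large-dense bottleneck, which is one reason it acts as a regularizer in fine-tuning setups like the one described.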
Affiliation(s)
- Najam Aziz
- Department of Computer Systems Engineering, University of Engineering and Technology (UET), Peshawar, Khyber Pakhtunkhwa, Pakistan
- National Center for Big Data and Cloud Computing (NCBC), University of Engineering and Technology, Peshawar, Khyber Pakhtunkhwa, Pakistan
- Nasru Minallah
- Department of Computer Systems Engineering, University of Engineering and Technology (UET), Peshawar, Khyber Pakhtunkhwa, Pakistan
- National Center for Big Data and Cloud Computing (NCBC), University of Engineering and Technology, Peshawar, Khyber Pakhtunkhwa, Pakistan
- Jaroslav Frnda
- Department of Quantitative Methods and Economic Informatics, Faculty of Operation and Economics of Transport and Communication, University of Zilina, Zilina, Slovakia
- Department of Telecommunications, Faculty of Electrical Engineering and Computer Science, VSB - Technical University, Ostrava-Poruba, Czechia
- Madiha Sher
- Department of Computer Systems Engineering, University of Engineering and Technology (UET), Peshawar, Khyber Pakhtunkhwa, Pakistan
- Muhammad Zeeshan
- National Center for Big Data and Cloud Computing (NCBC), University of Engineering and Technology, Peshawar, Khyber Pakhtunkhwa, Pakistan
2
Xu C, Zhang T, Zhang D, Zhang D, Han J. Deep Generative Adversarial Reinforcement Learning for Semi-Supervised Segmentation of Low-Contrast and Small Objects in Medical Images. IEEE Trans Med Imaging 2024; 43:3072-3084. [PMID: 38557623; DOI: 10.1109/tmi.2024.3383716]
Abstract
Deep reinforcement learning (DRL) has demonstrated impressive performance in medical image segmentation, particularly for low-contrast and small medical objects. However, current DRL-based segmentation methods face limitations due to error propagation between two separately optimized stages and the need for a significant amount of labeled data. In this paper, we propose a novel deep generative adversarial reinforcement learning (DGARL) approach that, for the first time, enables end-to-end semi-supervised medical image segmentation in the DRL domain. DGARL establishes a pipeline that integrates DRL and generative adversarial networks (GANs) to optimize the detection and segmentation tasks holistically while mutually enhancing each other. Specifically, DGARL introduces two innovative components to facilitate this integration in semi-supervised settings. First, a task-joint GAN with two discriminators links the detection results to the GAN's segmentation performance evaluation, allowing simultaneous joint evaluation and feedback. This ensures that DRL and GAN can be directly optimized based on each other's results. Second, a bidirectional exploration DRL integrates backward exploration and forward exploration to ensure the DRL agent explores in the correct direction when forward exploration is disabled due to the lack of explicit rewards. This mitigates the issue of unlabeled data being unable to provide rewards, which would otherwise leave the DRL agent unable to explore. Comprehensive experiments on three generalization datasets, comprising a total of 640 patients, demonstrate that DGARL achieves 85.02% Dice (an improvement of at least 1.91%) for brain tumors, 73.18% Dice (at least 4.28%) for liver tumors, and 70.85% Dice (at least 2.73%) for the pancreas compared with the ten most recent advanced methods. These results attest to the superiority of DGARL. Code is available at GitHub.
3
Dutta TK, Nayak DR, Pachori RB. GT-Net: global transformer network for multiclass brain tumor classification using MR images. Biomed Eng Lett 2024; 14:1069-1077. [PMID: 39220025; PMCID: PMC11362438; DOI: 10.1007/s13534-024-00393-0]
Abstract
Multiclass classification of brain tumors from magnetic resonance (MR) images is challenging due to high inter-class similarities. To this end, convolutional neural networks (CNNs) have been widely adopted in recent studies. However, conventional CNN architectures fail to capture the small lesion patterns of brain tumors. To tackle this issue, in this paper, we propose a global transformer network, dubbed GT-Net, for multiclass brain tumor classification. The GT-Net mainly comprises a global transformer module (GTM), which is introduced on top of a backbone network. A generalized self-attention block (GSB) is proposed to capture feature inter-dependencies across not only the spatial dimension but also the channel dimension, thereby facilitating the extraction of detailed tumor lesion information while ignoring less important information. Further, multiple GSB heads are used in the GTM to leverage global feature dependencies. We evaluate GT-Net on a benchmark dataset with several backbone networks, and the results demonstrate the effectiveness of the GTM. Further, comparison with state-of-the-art methods validates the superiority of our model.
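The generalized self-attention idea can be illustrated with plain scaled dot-product attention applied once over spatial positions and once over channels. This is a simplified sketch (query/key/value projections omitted; shapes are assumptions, not the paper's configuration):

```python
import numpy as np

def self_attention(x):
    # x: (tokens, dim); plain scaled dot-product attention with q = k = v = x.
    scores = x @ x.T / np.sqrt(x.shape[-1])
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))   # stable softmax
    w = w / w.sum(axis=-1, keepdims=True)
    return w @ x

rng = np.random.default_rng(0)
feats = rng.standard_normal((49, 64))        # 7x7 spatial grid flattened, 64 channels
spatial_att = self_attention(feats)          # dependencies across spatial positions
channel_att = self_attention(feats.T).T      # dependencies across channels
```

Transposing the token matrix is all it takes to switch the attention axis, which is why spatial and channel attention can share one implementation.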
Affiliation(s)
- Tapas Kumar Dutta
- School of Computer Science and Electronic Engineering, University of Surrey, Guildford, GU2 7XH, United Kingdom
- Deepak Ranjan Nayak
- Department of Computer Science and Engineering, Malaviya National Institute of Technology Jaipur, Jaipur, Rajasthan 302017, India
- Ram Bilas Pachori
- Department of Electrical Engineering, Indian Institute of Technology Indore, Indore, Madhya Pradesh 453552, India
4
Kidder BL. Advanced image generation for cancer using diffusion models. Biol Methods Protoc 2024; 9:bpae062. [PMID: 39258159; PMCID: PMC11387006; DOI: 10.1093/biomethods/bpae062]
Abstract
Deep neural networks have significantly advanced the field of medical image analysis, yet their full potential is often limited by relatively small dataset sizes. Generative modeling, particularly through diffusion models, has unlocked remarkable capabilities in synthesizing photorealistic images, thereby broadening the scope of their application in medical imaging. This study specifically investigates the use of diffusion models to generate high-quality brain MRI scans, including those depicting low-grade gliomas, as well as contrast-enhanced spectral mammography (CESM) and chest and lung X-ray images. By leveraging the DreamBooth platform, we have successfully trained stable diffusion models utilizing text prompts alongside class and instance images to generate diverse medical images. This approach not only preserves patient anonymity but also substantially mitigates the risk of patient re-identification during data exchange for research purposes. To evaluate the quality of our synthesized images, we used the Fréchet inception distance metric, demonstrating high fidelity between the synthesized and real images. Our application of diffusion models effectively captures oncology-specific attributes across different imaging modalities, establishing a robust framework that integrates artificial intelligence in the generation of oncological medical imagery.
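The Fréchet inception distance mentioned above compares the Gaussian statistics of real and synthesized image embeddings. A hedged sketch of the distance for the special case of diagonal covariances (the full metric uses a matrix square root of the covariance product; the statistics below are illustrative, not from the study):

```python
import numpy as np

def frechet_distance_diag(mu1, var1, mu2, var2):
    # Frechet distance between two Gaussians with diagonal covariances:
    # ||mu1 - mu2||^2 + sum(var1 + var2 - 2*sqrt(var1 * var2))
    mean_term = np.sum((np.asarray(mu1) - np.asarray(mu2)) ** 2)
    cov_term = np.sum(np.asarray(var1) + np.asarray(var2)
                      - 2.0 * np.sqrt(np.asarray(var1) * np.asarray(var2)))
    return mean_term + cov_term

# Illustrative embedding statistics (not from the study).
mu_real, var_real = np.array([0.0, 1.0]), np.array([1.0, 0.5])
mu_fake, var_fake = np.array([0.2, 1.1]), np.array([0.9, 0.6])
fid = frechet_distance_diag(mu_real, var_real, mu_fake, var_fake)
```

A lower value indicates closer real and synthetic distributions; identical statistics give a distance of zero.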
Affiliation(s)
- Benjamin L Kidder
- Department of Oncology, Wayne State University School of Medicine, Detroit, MI, 48201, United States
- Karmanos Cancer Institute, Wayne State University School of Medicine, Detroit, MI, 48201, United States
5
Reddy CKK, Reddy PA, Janapati H, Assiri B, Shuaib M, Alam S, Sheneamer A. A fine-tuned vision transformer based enhanced multi-class brain tumor classification using MRI scan imagery. Front Oncol 2024; 14:1400341. [PMID: 39091923; PMCID: PMC11291226; DOI: 10.3389/fonc.2024.1400341]
Abstract
Brain tumors occur due to the expansion of abnormal cell tissues and can be malignant (cancerous) or benign (not cancerous). Numerous factors, such as the position, size, and progression rate, are considered while detecting and diagnosing brain tumors. Detecting brain tumors in their initial phases is vital for diagnosis, where MRI (magnetic resonance imaging) scans play an important role. Over the years, deep learning models have been extensively used for medical image processing. The current study primarily investigates novel Fine-Tuned Vision Transformer models (FTVTs), namely FTVT-b16, FTVT-b32, FTVT-l16, and FTVT-l32, for brain tumor classification, while also comparing them with established deep learning models such as ResNet-50, MobileNet-V2, and EfficientNet-B0. A dataset with 7,023 MRI scans categorized into four classes, namely glioma, meningioma, pituitary, and no tumor, is used for classification. Further, the study presents a comparative analysis of these models, including their accuracies and other evaluation metrics (recall, precision, and F1-score) across each class. The deep learning models ResNet-50, EfficientNet-B0, and MobileNet-V2 obtained accuracies of 96.5%, 95.1%, and 94.9%, respectively. Among all the FTVT models, FTVT-l16 achieved a remarkable accuracy of 98.70%, whereas the other FTVT models FTVT-b16, FTVT-b32, and FTVT-l32 achieved accuracies of 98.09%, 96.87%, and 98.62%, respectively, proving the efficacy and robustness of FTVTs in medical image processing.
Affiliation(s)
- C. Kishor Kumar Reddy
- Department of Computer Science and Engineering, Stanley College of Engineering and Technology for Women, Hyderabad, India
- Pulakurthi Anaghaa Reddy
- Department of Computer Science and Engineering, Stanley College of Engineering and Technology for Women, Hyderabad, India
- Himaja Janapati
- Department of Computer Science and Engineering, Stanley College of Engineering and Technology for Women, Hyderabad, India
- Basem Assiri
- Department of Computer Science, College of Engineering and Computer Science, Jazan University, Jazan, Saudi Arabia
- Mohammed Shuaib
- Department of Computer Science, College of Engineering and Computer Science, Jazan University, Jazan, Saudi Arabia
- Shadab Alam
- Department of Computer Science, College of Engineering and Computer Science, Jazan University, Jazan, Saudi Arabia
- Abdullah Sheneamer
- Department of Computer Science, College of Engineering and Computer Science, Jazan University, Jazan, Saudi Arabia
6
Kadhim YA, Guzel MS, Mishra A. A Novel Hybrid Machine Learning-Based System Using Deep Learning Techniques and Meta-Heuristic Algorithms for Various Medical Datatypes Classification. Diagnostics (Basel) 2024; 14:1469. [PMID: 39061605; PMCID: PMC11275302; DOI: 10.3390/diagnostics14141469]
Abstract
Medicine is one of the fields where the advancement of computer science is making significant progress. Some diseases require an immediate diagnosis in order to improve patient outcomes. The use of computers in medicine improves precision and accelerates data processing and diagnosis. In this research, hybrid machine learning, a combination of various deep learning approaches with a meta-heuristic algorithm, was utilized to categorize biological images. Two different medical datasets were used, one covering magnetic resonance imaging (MRI) of brain tumors and the other chest X-rays (CXRs) of COVID-19. These datasets were fed to a combination network in which deep learning techniques based on a convolutional neural network (CNN) or an autoencoder extract features, and the particle swarm optimization (PSO) meta-heuristic algorithm then selects optimal features. This combination seeks to reduce the dimensionality of the datasets while maintaining the original performance of the data, and it ensures highly accurate classification results across various medical datasets. Several classifiers were employed to predict the diseases. On the COVID-19 dataset, the highest accuracy, 99.76%, was obtained using the CNN-PSO-SVM combination. In comparison, on the brain tumor dataset, the highest accuracy, 99.51%, was obtained using the autoencoder-PSO-KNN combination.
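The PSO feature-selection step can be sketched as a binary particle swarm with a sigmoid transfer function. The fitness function below is a toy stand-in (correlation with labels plus a subset-size penalty), not the paper's classifier-based fitness, and all hyperparameters are assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def fitness(mask, X, y):
    # Toy fitness: mean absolute correlation of selected features with y,
    # penalized by subset size; the paper uses classifier accuracy instead.
    if mask.sum() == 0:
        return -np.inf
    sel = X[:, mask.astype(bool)]
    corrs = [abs(np.corrcoef(sel[:, j], y)[0, 1]) for j in range(sel.shape[1])]
    return float(np.mean(corrs)) - 0.01 * mask.sum()

def binary_pso(X, y, n_particles=10, iters=20):
    d = X.shape[1]
    pos = rng.integers(0, 2, (n_particles, d)).astype(float)
    vel = np.zeros((n_particles, d))
    pbest = pos.copy()
    pbest_f = np.array([fitness(p, X, y) for p in pos])
    gbest = pbest[pbest_f.argmax()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, d))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        prob = 1.0 / (1.0 + np.exp(-vel))                 # sigmoid transfer
        pos = (rng.random((n_particles, d)) < prob).astype(float)
        f = np.array([fitness(p, X, y) for p in pos])
        improved = f > pbest_f
        pbest[improved], pbest_f[improved] = pos[improved], f[improved]
        gbest = pbest[pbest_f.argmax()].copy()
    return gbest.astype(bool)

X = rng.standard_normal((100, 8))
y = X[:, 0] + 0.1 * rng.standard_normal(100)   # only feature 0 is informative
mask = binary_pso(X, y)                        # boolean mask over the 8 features
```

The sigmoid transfer function is a standard way to binarize PSO velocities; the size penalty encourages the small feature subsets the abstract describes.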
Affiliation(s)
- Yezi Ali Kadhim
- College of Engineering, University of Baghdad, Jadriyah, Baghdad 10071, Iraq
- Department of Modeling and Design of Engineering Systems (MODES), Atilim University, Ankara 06830, Turkey
- Department of Electrical and Electronics Engineering, Atilim University, Incek, Ankara 06830, Turkey
- Mehmet Serdar Guzel
- Department of Computer Engineering, Ankara University, Yenimahalle, Ankara 06100, Turkey
- Alok Mishra
- Faculty of Engineering, Norwegian University of Science and Technology, 7034 Trondheim, Norway
- Department of Software Engineering, Atilim University, Incek, Ankara 06830, Turkey
7
Ozdemir C, Dogan Y. Advancing brain tumor classification through MTAP model: an innovative approach in medical diagnostics. Med Biol Eng Comput 2024; 62:2165-2176. [PMID: 38483711; PMCID: PMC11190006; DOI: 10.1007/s11517-024-03064-5]
Abstract
The early diagnosis of brain tumors is critical in healthcare, owing to the potentially life-threatening repercussions that abnormal growths within the brain can pose. Accurate and early diagnosis of brain tumors enables prompt medical intervention. In this context, we have established a new model called MTAP to enable highly accurate diagnosis of brain tumors. The MTAP model addresses dataset class imbalance by utilizing the ADASYN method, employs a network pruning technique to reduce unnecessary weights and nodes in the neural network, and incorporates the Avg-TopK pooling method for enhanced feature extraction. The primary goal of our research is to enhance the accuracy of brain tumor type detection, a critical aspect of medical imaging and diagnostics. The MTAP model introduces a novel classification strategy for brain tumors, leveraging the strength of deep learning methods and novel model refinement techniques. Following comprehensive experimental studies and meticulous design, the MTAP model has achieved a state-of-the-art accuracy of 99.69%. Our findings indicate that the use of deep learning and innovative model refinement techniques shows promise in facilitating the early detection of brain tumors. Analysis of the model's heat map revealed a notable focus on regions encompassing the parietal and temporal lobes.
Affiliation(s)
- Cuneyt Ozdemir
- Computer Engineering, Engineering Faculty, Siirt University, Siirt, 56100, Turkey
- Yahya Dogan
- Computer Engineering, Engineering Faculty, Siirt University, Siirt, 56100, Turkey
8
Mathivanan SK, Sonaimuthu S, Murugesan S, Rajadurai H, Shivahare BD, Shah MA. Employing deep learning and transfer learning for accurate brain tumor detection. Sci Rep 2024; 14:7232. [PMID: 38538708; PMCID: PMC10973383; DOI: 10.1038/s41598-024-57970-7]
Abstract
Artificial intelligence-powered deep learning methods are being used to diagnose brain tumors with high accuracy, owing to their ability to process large amounts of data. Magnetic resonance imaging stands as the gold standard for brain tumor diagnosis using machine vision, surpassing computed tomography, ultrasound, and X-ray imaging in its effectiveness. Despite this, brain tumor diagnosis remains a challenging endeavour due to the intricate structure of the brain. This study delves into the potential of deep transfer learning architectures to elevate the accuracy of brain tumor diagnosis. Transfer learning is a machine learning technique that allows us to repurpose pre-trained models for new tasks, which is particularly useful for medical imaging, where labelled data are often scarce. Four distinct transfer learning architectures were assessed in this study: ResNet152, VGG19, DenseNet169, and MobileNetv3. The models were trained and validated on a benchmark Kaggle dataset, with five-fold cross-validation adopted for training and testing. To enhance the balance of the dataset and improve the performance of the models, image enhancement techniques were applied to the data for the four categories: pituitary, normal, meningioma, and glioma. MobileNetv3 achieved the highest accuracy of 99.75%, significantly outperforming other existing methods. This demonstrates the potential of deep transfer learning architectures to revolutionize the field of brain tumor diagnosis.
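The five-fold cross-validation protocol mentioned above can be sketched as follows; the split logic is generic, not the study's exact pipeline:

```python
import numpy as np

def five_fold_splits(n_samples, n_folds=5, seed=0):
    # Shuffle indices once, partition them into folds, and let each fold
    # serve exactly once as the held-out test set.
    idx = np.random.default_rng(seed).permutation(n_samples)
    folds = np.array_split(idx, n_folds)
    for i in range(n_folds):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(n_folds) if j != i])
        yield train, test

splits = list(five_fold_splits(100))   # 5 (train, test) index pairs
```

Every sample appears in exactly one test fold, so the five test-fold scores average into an estimate that uses all of the data.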
Affiliation(s)
- Sridevi Sonaimuthu
- Department of Computer Science and Engineering, Vel Tech Rangarajan Dr. Sagunthala R&D Institute of Science and Technology, Chennai, 600062, India
- Sankar Murugesan
- Department of Computer Science and Engineering, Vel Tech Rangarajan Dr. Sagunthala R&D Institute of Science and Technology, Chennai, 600062, India
- Hariharan Rajadurai
- School of Computing Science and Engineering, VIT Bhopal University, Bhopal-Indore Highway, Kothrikalan, Sehore, 466114, India
- Basu Dev Shivahare
- School of Computer Science and Engineering, Galgotias University, Greater Noida, 203201, India
- Mohd Asif Shah
- Kebri Dehar University, 250, Kebri Dehar, Somali, Ethiopia
- Centre of Research Impact and Outcome, Chitkara University Institute of Engineering and Technology, Chitkara University, Rajpura, Punjab, 140401, India
- Division of Research and Development, Lovely Professional University, Phagwara, Punjab, 144001, India
9
Ullah MS, Khan MA, Masood A, Mzoughi O, Saidani O, Alturki N. Brain tumor classification from MRI scans: a framework of hybrid deep learning model with Bayesian optimization and quantum theory-based marine predator algorithm. Front Oncol 2024; 14:1335740. [PMID: 38390266; PMCID: PMC10882068; DOI: 10.3389/fonc.2024.1335740]
Abstract
Brain tumor classification is one of the most difficult tasks for clinical diagnosis and treatment in medical image analysis. Any error that occurs during the brain tumor diagnosis process may result in a shorter human life span. Nevertheless, most currently used techniques ignore certain features that have particular significance and relevance to the classification problem in favor of extracting and choosing deeply significant features. One important area of research is the deep learning-based categorization of brain tumors using brain magnetic resonance imaging (MRI). This paper proposes an automated deep learning model and an optimal information fusion framework for classifying brain tumors from MRI images. The dataset used in this work was imbalanced, a key challenge for training the selected networks, because imbalance in the training dataset biases the classifier in favor of the majority class. We designed a sparse autoencoder network to generate new images that resolve the imbalance problem. After that, two pretrained neural networks were modified, and their hyperparameters were initialized using Bayesian optimization, which was later utilized for the training process. Deep features were then extracted from the global average pooling layer. Because the extracted features contain some irrelevant information, we proposed an improved Quantum Theory-based Marine Predator Optimization algorithm (QTbMPA). The proposed QTbMPA selects the best features of both networks, which are finally fused using a serial-based approach. The fused feature set is passed to neural network classifiers for the final classification. The proposed framework was tested on an augmented Figshare dataset, and an improved accuracy of 99.80%, a sensitivity of 99.83%, a false negative rate of 0.17%, and a precision of 99.83% were obtained. A comparison and an ablation study show the improvement in accuracy achieved by this work.
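The serial-based fusion step described above amounts to concatenating the selected feature vectors of the two networks along the feature axis. A minimal sketch with illustrative dimensions:

```python
import numpy as np

def serial_fuse(feats_a, feats_b):
    # feats_a: (n_samples, d_a), feats_b: (n_samples, d_b)
    # -> fused (n_samples, d_a + d_b) for a downstream classifier.
    return np.concatenate([feats_a, feats_b], axis=1)

a = np.ones((4, 128))    # e.g. selected features from network A
b = np.zeros((4, 64))    # e.g. selected features from network B
fused = serial_fuse(a, b)
```

Serial fusion preserves every selected feature from both networks, at the cost of a wider input to the final classifier.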
Affiliation(s)
- Anum Masood
- Department of Physics, Norwegian University of Science and Technology, Trondheim, Norway
- Olfa Mzoughi
- Department of Computer Science, College of Computer Engineering and Sciences, Prince Sattam bin Abdulaziz University, Al-Kharj, Saudi Arabia
- Oumaima Saidani
- Department of Information Systems, College of Computer and Information Sciences, Princess Nourah Bint Abdulrahman University, Riyadh, Saudi Arabia
- Nazik Alturki
- Department of Information Systems, College of Computer and Information Sciences, Princess Nourah Bint Abdulrahman University, Riyadh, Saudi Arabia
10
Kim H, Kim HG, Oh JH, Lee KM. Deep-learning model for diagnostic clue: detecting the dural tail sign for meningiomas on contrast-enhanced T1 weighted images. Quant Imaging Med Surg 2023; 13:8132-8143. [PMID: 38106283; PMCID: PMC10722041; DOI: 10.21037/qims-23-114]
Abstract
Background: Meningiomas are the most common primary central nervous system tumors, and magnetic resonance imaging (MRI), especially contrast-enhanced T1-weighted imaging (CE T1WI), is used as a fundamental imaging modality for the detection and analysis of these tumors. In this study, we propose an automated deep-learning model for meningioma detection using the dural tail sign. Methods: The dataset included 123 patients with 3,824 dural tail signs on sagittal CE T1WI. The dataset was divided into training and test datasets at a specific time point, comprising 78 and 45 patients, respectively. To compensate for the small sample size of the training dataset, 39 additional patients with 69 dural tail signs from an open dataset were appended to the training dataset. A You Only Look Once (YOLO) v4 network was trained with sagittal CE T1WI to detect dural tail signs. A normal group dataset, comprising 51 patients with no abnormal finding on MRI, was employed to evaluate the specificity of the trained model. Results: The sensitivity and false positive average were 82.22% and 29.73, respectively, in the test dataset. The specificity and false positive average were 17.65% and 3.16, respectively, in the normal dataset. Most of the false-positive cases in the test dataset were enhancing vessels misinterpreted as dural thickening. Conclusions: The proposed model demonstrates an automated detection system for the dural tail sign to identify meningiomas in general screening MRI. Our model can facilitate radiologists' reading process by flagging possible incidental dural masses based on dural tail sign detection.
Affiliation(s)
- Hyunmin Kim
- Department of Radiology, Kyung Hee University Hospital, Kyung Hee University College of Medicine, Seoul, Republic of Korea
- Hyug-Gi Kim
- Department of Radiology, Kyung Hee University Hospital, Kyung Hee University College of Medicine, Seoul, Republic of Korea
11
Hartmann D, Schmid V, Meyer P, Auer F, Soto-Rey I, Müller D, Kramer F. MISM: A Medical Image Segmentation Metric for Evaluation of Weak Labeled Data. Diagnostics (Basel) 2023; 13:2618. [PMID: 37627877; PMCID: PMC10453729; DOI: 10.3390/diagnostics13162618]
Abstract
Performance measures are an important tool for assessing and comparing medical image segmentation algorithms. Unfortunately, current measures have weaknesses when it comes to assessing certain edge cases, which arise when images with a very small region of interest, or without a region of interest at all, are assessed. As a solution to these limitations, we propose a new medical image segmentation metric: MISm. This metric is a composition of the Dice similarity coefficient and a weighted specificity. MISm was investigated with respect to definition gaps and an appropriate scoring gradient, and different weighting coefficients were examined in order to propose a constant value. Furthermore, an evaluation was performed by comparing popular medical image segmentation metrics with MISm on magnetic resonance tomography images from several fictitious prediction scenarios. Our analysis shows that MISm can be applied in a general way and thus also covers, in a reasonable way, the mentioned edge cases that are not covered by other metrics. To allow easy access to MISm, and therefore widespread application in the community as well as reproducibility of experimental results, we included MISm in the publicly available evaluation framework MISeval.
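The idea behind MISm, composing the Dice similarity coefficient with a weighted specificity so that empty-mask edge cases still score sensibly, can be sketched as below. The weighting coefficient and the exact composition rule are assumptions here; the authoritative definition is in the paper and the MISeval framework:

```python
import numpy as np

def dice(gt, pred, eps=1e-9):
    # Dice similarity coefficient on boolean masks.
    inter = np.logical_and(gt, pred).sum()
    return (2 * inter + eps) / (gt.sum() + pred.sum() + eps)

def weighted_specificity(gt, pred, w=0.1):
    # Specificity with false positives down-weighted by w (illustrative form).
    tn = np.logical_and(~gt, ~pred).sum()
    fp = np.logical_and(~gt, pred).sum()
    return tn / (tn + w * fp) if (tn + fp) else 1.0

def mism_sketch(gt, pred, w=0.1):
    if gt.any():                                   # normal case: Dice is defined
        return dice(gt, pred)
    return weighted_specificity(gt, pred, w)       # empty ground-truth edge case

gt = np.zeros((4, 4), dtype=bool)     # image without a region of interest
pred = np.zeros((4, 4), dtype=bool)   # correct empty prediction
score = mism_sketch(gt, pred)
```

With plain Dice, the empty-mask case is undefined (0/0); the specificity branch gives a correct empty prediction a perfect score instead.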
Affiliation(s)
- Dennis Hartmann
- IT-Infrastructure for Translational Medical Research, University of Augsburg, 86159 Augsburg, Germany
- Verena Schmid
- IT-Infrastructure for Translational Medical Research, University of Augsburg, 86159 Augsburg, Germany
- Medical Data Integration Center, Institute for Digital Medicine, University Hospital Augsburg, 86156 Augsburg, Germany
- Philip Meyer
- Medical Data Integration Center, Institute for Digital Medicine, University Hospital Augsburg, 86156 Augsburg, Germany
- Florian Auer
- IT-Infrastructure for Translational Medical Research, University of Augsburg, 86159 Augsburg, Germany
- Iñaki Soto-Rey
- Medical Data Integration Center, Institute for Digital Medicine, University Hospital Augsburg, 86156 Augsburg, Germany
- Dominik Müller
- IT-Infrastructure for Translational Medical Research, University of Augsburg, 86159 Augsburg, Germany
- Medical Data Integration Center, Institute for Digital Medicine, University Hospital Augsburg, 86156 Augsburg, Germany
- Frank Kramer
- IT-Infrastructure for Translational Medical Research, University of Augsburg, 86159 Augsburg, Germany
12
Zulfiqar F, Ijaz Bajwa U, Mehmood Y. Multi-class classification of brain tumor types from MR images using EfficientNets. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2023.104777]
13
Ali HS, Ismail AI, El-Rabaie EM, Abd El-Samie FE. Deep residual architectures and ensemble learning for efficient brain tumour classification. Expert Systems 2023; 40. [DOI: 10.1111/exsy.13226]
Abstract
The prompt and accurate detection of brain tumours is essential for disease management and saving lives. This paper introduces an efficient, robust, and fully automated system for classifying the three most prominent types of brain tumour. The aim is to contribute to enhanced classification accuracy with minimal pre-processing and low inference time. The power of deep networks is thoroughly investigated, with and without transfer learning. Fine-tuned deep Residual Networks (ResNets) with depths up to 101 layers are introduced to manage the complex nature of brain images and to capture their microstructural information. The proposed residual architectures, with their in-depth representations, are evaluated and compared to other fine-tuned networks (AlexNet, GoogLeNet, and VGG16). A novel Convolutional Network (ConvNet) built and trained from scratch is also proposed for tumour type classification. The proven models are integrated by combining their decisions using majority voting to obtain the final classification. Results show that the residual architectures can be optimized efficiently and yield a noticeable gain in accuracy. Although the ResNet models are deeper than VGG16, they show lower complexity. The results also indicate that building an ensemble of models is a successful strategy for enhancing system performance. Each model in the ensemble learns specific patterns with certain filters, and this stochastic nature boosts the classification accuracy. The accuracies obtained from ResNet18, ResNet101, and the proposed ConvNet are 98.91%, 97.39%, and 95.43%, respectively. The accuracy based on decision fusion of the three networks is 99.57%, which is better than those of all state-of-the-art techniques. The accuracy obtained with ResNet50 is 98.26%, and its fusion with ResNet18 and the designed network yields 99.35% accuracy, which is also better than those of previous methods, while meeting minimum detection-time requirements. Finally, a visual representation of the learned features is provided to illustrate what the models have learned.
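The decision-fusion step named in the abstract, combining the per-image predictions of several trained networks by majority voting, can be sketched as follows (an illustrative fragment with hypothetical label lists, not the authors' implementation; ties here simply go to the vote seen first):

```python
from collections import Counter

def majority_vote(per_model_labels):
    """Fuse predictions from several classifiers by majority voting.

    per_model_labels: one list of predicted labels per model, all of
    equal length (one label per test image). Ties go to the label
    encountered first among the tied votes.
    """
    n_samples = len(per_model_labels[0])
    fused = []
    for i in range(n_samples):
        votes = [labels[i] for labels in per_model_labels]
        # most_common breaks ties by first appearance in `votes`
        fused.append(Counter(votes).most_common(1)[0][0])
    return fused
```

With three models voting per image, the fused label is whichever class at least two of them agree on.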
Affiliation(s)
- Hanaa S. Ali: Electronics & Communication Department, Faculty of Engineering, Zagazig University, Zagazig, Egypt
- Asmaa I. Ismail: Department of Electronics and Electrical Communications, Faculty of Electronic Engineering, Menoufia University, Menouf, Egypt
- El‐Sayed M. El‐Rabaie: Department of Electronics and Electrical Communications, Faculty of Electronic Engineering, Menoufia University, Menouf, Egypt
- Fathi E. Abd El‐Samie: Department of Electronics and Electrical Communications, Faculty of Electronic Engineering, Menoufia University, Menouf, Egypt
14
Hammad M, ElAffendi M, Ateya AA, Abd El-Latif AA. Efficient Brain Tumor Detection with Lightweight End-to-End Deep Learning Model. Cancers (Basel) 2023; 15:2837. [PMID: 37345173] [DOI: 10.3390/cancers15102837]
Abstract
In the field of medical imaging, deep learning has made considerable strides, particularly in the diagnosis of brain tumors. The Internet of Medical Things (IoMT) has made it possible to integrate these deep learning models into advanced medical devices for more accurate and efficient diagnosis. Convolutional neural networks (CNNs) are a popular deep learning technique for brain tumor detection because they can be trained on vast medical imaging datasets to recognize cancers in new images. Despite its benefits, which include greater accuracy and efficiency, deep learning has disadvantages, such as high computing costs and the possibility of skewed findings due to inadequate training data. Further study is needed to fully understand the potential and limitations of deep learning for brain tumor detection in the IoMT and to overcome the obstacles to real-world implementation. In this study, we propose a new CNN-based deep learning model for brain tumor detection. The suggested model is an end-to-end model, which reduces the system's complexity in comparison to earlier deep learning models. In addition, our model is lightweight, as it is built from a small number of layers compared to previous models, making it suitable for real-time applications. The promising results, with accuracies of 99.48% for binary-class and 96.86% for multi-class detection, demonstrate that the new framework outperforms competing models. This study shows that the suggested deep model outperforms other CNNs for detecting brain tumors. Additionally, the study provides a framework for secure data transfer of medical lab results, with security recommendations to ensure safety in the IoMT.
Affiliation(s)
- Mohamed Hammad: EIAS Data Science Lab, College of Computer and Information Sciences, Prince Sultan University, Riyadh 11586, Saudi Arabia; Department of Information Technology, Faculty of Computers and Information, Menoufia University, Shibin El Kom 32511, Egypt
- Mohammed ElAffendi: EIAS Data Science Lab, College of Computer and Information Sciences, Prince Sultan University, Riyadh 11586, Saudi Arabia
- Abdelhamied A Ateya: EIAS Data Science Lab, College of Computer and Information Sciences, Prince Sultan University, Riyadh 11586, Saudi Arabia; Department of Electronics and Communications Engineering, Zagazig University, Zagazig 44519, Egypt
- Ahmed A Abd El-Latif: EIAS Data Science Lab, College of Computer and Information Sciences, Prince Sultan University, Riyadh 11586, Saudi Arabia; Department of Mathematics and Computer Science, Faculty of Science, Menoufia University, Shebin El Koom 32511, Egypt
15
Sailunaz K, Bestepe D, Alhajj S, Özyer T, Rokne J, Alhajj R. Brain tumor detection and segmentation: Interactive framework with a visual interface and feedback facility for dynamically improved accuracy and trust. PLoS One 2023; 18:e0284418. [PMID: 37068084] [PMCID: PMC10109523] [DOI: 10.1371/journal.pone.0284418]
Abstract
Brain cancers caused by malignant brain tumors are among the most fatal cancer types, with a low survival rate largely due to the difficulty of early detection. Medical professionals therefore use various invasive and non-invasive methods for detecting brain tumors at earlier stages, enabling early treatment. The main non-invasive methods for brain tumor diagnosis and assessment are brain imaging techniques such as computed tomography (CT), positron emission tomography (PET), and magnetic resonance imaging (MRI) scans. This paper focuses on the detection and segmentation of brain tumors from 2D and 3D brain MRIs. For this purpose, a fully automated system with a web-application user interface is described that detects and segments brain tumors with accuracy and Dice scores above 90%. Users can upload brain MRIs, or access brain images from hospital databases, to check for the presence or absence of a brain tumor and to extract the tumor region precisely from the brain MRI using deep neural networks such as CNN, U-Net, and U-Net++. The web application also lets healthcare professionals provide feedback on the detection and segmentation results, adding more precise information that can be used to retrain the model for better future predictions and segmentations.
Affiliation(s)
- Kashfia Sailunaz: Department of Computer Science, University of Calgary, Alberta, Canada
- Deniz Bestepe: Department of Computer Engineering, Istanbul Medipol University, Istanbul, Turkey
- Sleiman Alhajj: International School of Medicine, Istanbul Medipol University, Istanbul, Turkey
- Tansel Özyer: Department of Computer Engineering, Ankara Medipol University, Ankara, Turkey
- Jon Rokne: Department of Computer Science, University of Calgary, Alberta, Canada
- Reda Alhajj: Department of Computer Science, University of Calgary, Alberta, Canada; Department of Computer Engineering, Istanbul Medipol University, Istanbul, Turkey; Department of Health Informatics, University of Southern Denmark, Odense, Denmark
16
Multiclass convolutional neural network based classification for the diagnosis of brain MRI images. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2022.104542]
17
Özbay E, Altunbey Özbay F. Interpretable features fusion with precision MRI images deep hashing for brain tumor detection. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2023; 231:107387. [PMID: 36738605] [DOI: 10.1016/j.cmpb.2023.107387]
Abstract
BACKGROUND AND OBJECTIVE: Brain tumor is a deadly disease that can affect people of all ages. Radiologists play a critical role in the early diagnosis and treatment of the roughly 14,000 persons diagnosed with brain tumors each year. The best imaging method for tumor detection with computer-aided diagnosis systems (CADs) is Magnetic Resonance Imaging (MRI). However, manual evaluation using conventional approaches may produce inaccuracies due to the complicated tissue properties of a large number of images. Therefore, a precision medical-image hashing approach combining interpretability and feature fusion on MRI images of brain tumors is proposed to address the problem of medical image retrieval.
METHODS: A precision hashing method combining interpretability and feature fusion is proposed to overcome the problem of low image resolution in brain tumor detection on the Brain-Tumor-MRI (BT-MRI) dataset. First, the dataset is pre-trained with the DenseNet201 network using the Comparison-to-Learn method. Then, a global network is created that generates a saliency map to yield a mask crop with local-region discrimination. Finally, the local network's input features and the public features expressing the local discriminant regions are concatenated for the pooling layer. A hash layer is added between the fully connected layer and the classification layer of the backbone network to generate high-quality hash codes. The final result is obtained by comparing the hash codes under a similarity metric.
RESULTS: Experimental results on the BT-MRI dataset showed that the proposed method can effectively identify tumor regions and that more accurate hash codes can be generated by using the three loss functions in feature fusion. Compared with existing image-retrieval approaches, our method effectively increases the accuracy of medical image retrieval.
CONCLUSIONS: Our method demonstrates that the accuracy of medical image retrieval can be effectively increased, and it can potentially be applied in CADs.
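The retrieval step described in the abstract, comparing binary hash codes under a similarity metric, can be sketched as a Hamming-distance ranking (a common choice for binary codes; the 4-bit codes below are hypothetical stand-ins for the hash layer's output, not the authors' code):

```python
import numpy as np

def hamming_retrieve(query_code, db_codes, top_k=3):
    """Rank database images by Hamming distance between binary hash
    codes (smaller distance = more similar). Returns the indices of
    the top_k closest codes, most similar first.
    """
    query_code = np.asarray(query_code)
    db_codes = np.asarray(db_codes)
    # Hamming distance = number of differing bits
    dists = np.count_nonzero(db_codes != query_code, axis=1)
    return np.argsort(dists, kind="stable")[:top_k].tolist()
```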
Affiliation(s)
- Erdal Özbay: Firat University, Faculty of Engineering, Computer Engineering, 23119, Elazig, Turkey
- Feyza Altunbey Özbay: Firat University, Faculty of Engineering, Software Engineering, 23119, Elazig, Turkey
18
AlTahhan FE, Khouqeer GA, Saadi S, Elgarayhi A, Sallah M. Refined Automatic Brain Tumor Classification Using Hybrid Convolutional Neural Networks for MRI Scans. Diagnostics (Basel) 2023; 13:864. [PMID: 36900008] [PMCID: PMC10001035] [DOI: 10.3390/diagnostics13050864]
Abstract
Refined hybrid convolutional neural networks are proposed in this work for classifying brain tumor classes based on MRI scans. A dataset of 2880 T1-weighted contrast-enhanced MRI brain scans is used. The dataset contains three main classes of brain tumors: gliomas, meningiomas, and pituitary tumors, as well as a no-tumor class. Firstly, two pre-trained, fine-tuned convolutional neural networks, GoogleNet and AlexNet, were used for the classification process, with validation accuracies of 91.5% and 90.21%, respectively. Then, to improve the performance of the fine-tuned AlexNet, two hybrid networks (AlexNet-SVM and AlexNet-KNN) were applied. These hybrid networks achieved validation accuracies of 96.9% and 98.6%, respectively. Thus, the hybrid AlexNet-KNN network was shown to classify the present data with high accuracy. After exporting these networks, a selected dataset was employed for the testing process, yielding accuracies of 88%, 85%, 95%, and 97% for the fine-tuned GoogleNet, the fine-tuned AlexNet, AlexNet-SVM, and AlexNet-KNN, respectively. The proposed system would help with automatic detection and classification of brain tumors from MRI scans and save time in clinical diagnosis.
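A hybrid such as AlexNet-KNN replaces the network's softmax classifier with a k-nearest-neighbour vote over deep feature vectors. A minimal sketch of that classification head (the 2-D features below are hypothetical stand-ins for CNN activations; this is an illustration, not the paper's code):

```python
import numpy as np

def knn_predict(train_feats, train_labels, query_feat, k=3):
    """Label a deep-feature vector by majority vote among its k
    nearest training vectors (Euclidean distance)."""
    train_feats = np.asarray(train_feats, dtype=float)
    query_feat = np.asarray(query_feat, dtype=float)
    dists = np.linalg.norm(train_feats - query_feat, axis=1)
    nearest = np.argsort(dists, kind="stable")[:k]
    # count label occurrences among the k neighbours
    labels, counts = np.unique([train_labels[i] for i in nearest],
                               return_counts=True)
    return labels[np.argmax(counts)]
```

In the paper's setting, `train_feats` would hold penultimate-layer activations of the fine-tuned AlexNet for the training images.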
Affiliation(s)
- Fatma E. AlTahhan: Mathematics Department, Faculty of Science, Mansoura University, Mansoura 35516, Egypt
- Ghada A. Khouqeer: Physics Department, Faculty of Science, Imam Mohammad Ibn Saud Islamic University, Riyadh 11564, Saudi Arabia
- Sarmad Saadi: Applied Mathematical Physics Research Group, Physics Department, Faculty of Science, Mansoura University, Mansoura 35516, Egypt
- Ahmed Elgarayhi: Applied Mathematical Physics Research Group, Physics Department, Faculty of Science, Mansoura University, Mansoura 35516, Egypt
- Mohammed Sallah: Applied Mathematical Physics Research Group, Physics Department, Faculty of Science, Mansoura University, Mansoura 35516, Egypt; Higher Institute of Engineering and Technology, New Damietta 34517, Egypt
19
Jin Y, Lu H, Zhu W, Huo W. Deep learning based classification of multi-label chest X-ray images via dual-weighted metric loss. Comput Biol Med 2023; 157:106683. [PMID: 36905869] [DOI: 10.1016/j.compbiomed.2023.106683]
Abstract
Thoracic disease, like many other diseases, can lead to complications. Existing multi-label medical-image learning problems typically include rich pathological information, such as images, attributes, and labels, which are crucial for supplementary clinical diagnosis. However, the majority of contemporary efforts focus exclusively on regression from input to binary labels, ignoring the relationship between visual features and the semantic vectors of labels. In addition, there is an imbalance in the amount of data between diseases, which frequently causes intelligent diagnostic systems to make erroneous disease predictions. Therefore, we aim to improve the accuracy of multi-label classification of chest X-ray images. The ChestX-ray14 images were utilized as the multi-label dataset for the experiments in this study. By fine-tuning the ConvNeXt network, we obtained visual vectors, which we combined with semantic vectors encoded by BioBERT to map the two different forms of features into a common metric space, making the semantic vectors the prototype of each class in that space. The metric relationship between images and labels is then considered at the image level and the disease-category level, respectively, and a new dual-weighted metric loss function is proposed. Finally, the average AUC score achieved in the experiments reached 0.826, and our model outperformed the comparison models.
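The inference rule implied by making each label's semantic vector the class prototype in a shared metric space can be sketched with cosine similarity (the 2-D vectors below are hypothetical stand-ins for ConvNeXt image embeddings and BioBERT label embeddings; this is an illustration, not the authors' exact rule):

```python
import numpy as np

def prototype_classify(image_vec, label_prototypes):
    """Return the label whose prototype vector has the highest cosine
    similarity to the image's visual embedding."""
    v = np.asarray(image_vec, dtype=float)

    def cos(p):
        p = np.asarray(p, dtype=float)
        return float(v @ p / (np.linalg.norm(v) * np.linalg.norm(p)))

    return max(label_prototypes, key=lambda lbl: cos(label_prototypes[lbl]))
```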
Affiliation(s)
- Yufei Jin: College of Information Engineering, China Jiliang University, Hangzhou, China
- Huijuan Lu: College of Information Engineering, China Jiliang University, Hangzhou, China
- Wenjie Zhu: College of Information Engineering, China Jiliang University, Hangzhou, China
- Wanli Huo: College of Information Engineering, China Jiliang University, Hangzhou, China
20
Balaha HM, Hassan AES. A variate brain tumor segmentation, optimization, and recognition framework. Artif Intell Rev 2022. [DOI: 10.1007/s10462-022-10337-8]
21
Ali MU, Kallu KD, Masood H, Hussain SJ, Ullah S, Byun JH, Zafar A, Kim KS. A Robust Computer-Aided Automated Brain Tumor Diagnosis Approach Using PSO-ReliefF Optimized Gaussian and Non-Linear Feature Space. LIFE (BASEL, SWITZERLAND) 2022; 12:life12122036. [PMID: 36556401] [PMCID: PMC9782364] [DOI: 10.3390/life12122036]
Abstract
Brain tumors are among the deadliest diseases in the modern world. This study proposes an optimized machine-learning approach for detecting and identifying the type of brain tumor (glioma, meningioma, or pituitary tumor) in brain images recorded using magnetic resonance imaging (MRI). The Gaussian features of an image are extracted using speeded-up robust features (SURF), whereas its non-linear features are obtained using KAZE, owing to their high robustness against rotation, scaling, and noise. To retrieve local-level information, each brain MRI image is segmented into an 8 × 8-pixel grid. To enhance accuracy and reduce computational time, variance-based k-means clustering and the PSO-ReliefF algorithm are employed to eliminate redundant features of the brain MRI images. Finally, the performance of the proposed hybrid optimized feature vector is evaluated using various machine-learning classifiers. An accuracy of 96.30% is obtained with 169 features using a support vector machine (SVM). Furthermore, the computational time is reduced to one minute, compared to training the SVM on the non-optimized features. The findings are also compared with previous research, demonstrating that the suggested approach might assist physicians in the timely detection of brain tumors.
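The local-level grid step described above, splitting each MRI into 8 × 8-pixel blocks before feature extraction, can be sketched as follows (an illustrative fragment; trailing pixels that do not fill a whole block are simply discarded here):

```python
import numpy as np

def to_patches(image, patch=8):
    """Split a 2-D image into non-overlapping patch x patch blocks,
    returned as an array of shape (n_blocks, patch, patch)."""
    h, w = image.shape
    blocks = [
        image[r:r + patch, c:c + patch]
        for r in range(0, h - patch + 1, patch)
        for c in range(0, w - patch + 1, patch)
    ]
    return np.stack(blocks)
```

Each block would then be passed to the SURF/KAZE feature extractors independently, capturing small local details.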
Affiliation(s)
- Muhammad Umair Ali: Department of Unmanned Vehicle Engineering, Sejong University, Seoul 05006, Republic of Korea
- Karam Dad Kallu: Department of Robotics & Artificial Intelligence (R&AI), School of Mechanical and Manufacturing Engineering (SMME), National University of Sciences and Technology (NUST) H−12, Islamabad 44000, Pakistan
- Haris Masood: Electrical Engineering Department, Wah Engineering College, University of Wah, Wah Cantt 47040, Pakistan
- Shaik Javeed Hussain: Department of Electrical and Electronics, Global College of Engineering and Technology, Muscat 112, Oman
- Safee Ullah: Department of Electrical Engineering, HITEC University, Taxila 47080, Pakistan
- Jong Hyuk Byun: Department of Mathematics, College of Natural Sciences, Pusan National University, Busan 46241, Republic of Korea
- Amad Zafar (corresponding author): Department of Intelligent Mechatronics Engineering, Sejong University, Seoul 05006, Republic of Korea
- Kawang Su Kim (corresponding author): Department of Scientific Computing, Pukyong National University, Busan 48513, Republic of Korea; Interdisciplinary Biology Laboratory (iBLab), Division of Biological Science, Graduate School of Science, Nagoya University, Nagoya 464-8602, Japan
22
Kadhim YA, Khan MU, Mishra A. Deep Learning-Based Computer-Aided Diagnosis (CAD): Applications for Medical Image Datasets. SENSORS (BASEL, SWITZERLAND) 2022; 22:s22228999. [PMID: 36433595] [PMCID: PMC9692938] [DOI: 10.3390/s22228999]
Abstract
Computer-aided diagnosis (CAD) has proved to be an effective and accurate method for diagnostic prediction over the years. This article focuses on the development of an automated CAD system intended to perform diagnosis as accurately as possible. Deep learning methods have produced impressive results on medical image datasets. This study employs deep learning methods in conjunction with meta-heuristic algorithms and supervised machine-learning algorithms to perform an accurate diagnosis. Pre-trained convolutional neural networks (CNNs) or an auto-encoder are used for feature extraction, whereas feature selection is performed using an ant colony optimization (ACO) algorithm. Ant colony optimization helps to search for the best features while reducing the amount of data. Lastly, diagnosis prediction (classification) is achieved using learnable classifiers. The novel framework for the extraction and selection of features is based on deep learning, an auto-encoder, and ACO. The performance of the proposed approach is evaluated using two medical image datasets: chest X-ray (CXR) and magnetic resonance imaging (MRI), for predicting the presence of COVID-19 and brain tumors, respectively. Accuracy is used as the main measure for comparing the proposed approach with existing state-of-the-art methods. The proposed system achieves average accuracies of 99.61% and 99.18%, outperforming all other methods in diagnosing the presence of COVID-19 and brain tumors, respectively. Based on these results, physicians and radiologists could confidently utilize the proposed approach for diagnosing COVID-19 patients and patients with specific brain tumors.
Affiliation(s)
- Yezi Ali Kadhim: Department of Modeling and Design of Engineering Systems (MODES), Atilim University, Ankara 06830, Turkey; Department of Electrical and Electronics Engineering, Atilim University, Incek, Ankara 06830, Turkey
- Muhammad Umer Khan: Department of Mechatronics Engineering, Atilim University, Incek, Ankara 06830, Turkey
- Alok Mishra: Department of Software Engineering, Atilim University, Incek, Ankara 06830, Turkey; Informatics and Digitalization Group, Molde University College—Specialized University in Logistics, 6410 Molde, Norway
23
Deepak S, Ameer P. Brain tumor categorization from imbalanced MRI dataset using weighted loss and deep feature fusion. Neurocomputing 2022. [DOI: 10.1016/j.neucom.2022.11.039]
24
Multimodal brain tumor detection using multimodal deep transfer learning. Appl Soft Comput 2022. [DOI: 10.1016/j.asoc.2022.109631]
25
Tummala S, Kadry S, Bukhari SAC, Rauf HT. Classification of Brain Tumor from Magnetic Resonance Imaging Using Vision Transformers Ensembling. Curr Oncol 2022; 29:7498-7511. [PMID: 36290867] [PMCID: PMC9600395] [DOI: 10.3390/curroncol29100590]
Abstract
The automated classification of brain tumors plays an important role in supporting radiologists in decision making. Recently, vision transformer (ViT)-based deep neural network architectures have gained attention in the computer vision research domain owing to the tremendous success of transformer models in natural language processing. Hence, in this study, the ability of an ensemble of standard ViT models to diagnose brain tumors from T1-weighted (T1w) magnetic resonance imaging (MRI) is investigated. ViT models (B/16, B/32, L/16, and L/32) pretrained and fine-tuned on ImageNet were adopted for the classification task. A brain tumor dataset from figshare, consisting of 3064 T1w contrast-enhanced (CE) MRI slices with meningiomas, gliomas, and pituitary tumors, was used for cross-validation and for testing the ensemble ViT model's ability to perform a three-class classification task. The best individual model was L/32, with an overall test accuracy of 98.2% at 384 × 384 resolution. The ensemble of all four ViT models achieved an overall testing accuracy of 98.7% at the same resolution, outperforming the individual models at both resolutions as well as their ensemble at 224 × 224 resolution. In conclusion, an ensemble of ViT models could be deployed for the computer-aided diagnosis of brain tumors based on T1w CE MRI, relieving radiologist workload.
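One common way to realise such a model ensemble is to average the per-class probabilities emitted by each network and take the argmax; the sketch below illustrates that fusion rule as an assumption, not necessarily the exact scheme used in the paper:

```python
import numpy as np

def ensemble_predict(prob_list):
    """Average per-model class probabilities and return the argmax
    class index for each sample.

    prob_list: one (n_samples, n_classes) probability array per model.
    """
    mean_probs = np.mean(np.stack(prob_list), axis=0)
    return np.argmax(mean_probs, axis=1)
```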
Affiliation(s)
- Sudhakar Tummala: Department of Electronics and Communication Engineering, School of Engineering and Sciences, SRM University—AP, Amaravati 522503, India
- Seifedine Kadry: Department of Applied Data Science, Noroff University College, 4612 Kristiansand, Norway; Department of Electrical and Computer Engineering, Lebanese American University, Byblos P.O. Box 36, Lebanon; Artificial Intelligence Research Center (AIRC), College of Engineering and Information Technology, Ajman University, Ajman 346, United Arab Emirates
- Syed Ahmad Chan Bukhari: Division of Computer Science, Mathematics and Science, Collins College of Professional Studies, St. John’s University, New York, NY 11439, USA
- Hafiz Tayyab Rauf: Centre for Smart Systems, AI and Cybersecurity, Staffordshire University, Stoke-on-Trent ST4 2DE, UK
26
Hussain L, Malibari AA, Alzahrani JS, Alamgeer M, Obayya M, Al-Wesabi FN, Mohsen H, Hamza MA. Bayesian dynamic profiling and optimization of important ranked energy from gray level co-occurrence (GLCM) features for empirical analysis of brain MRI. Sci Rep 2022; 12:15389. [PMID: 36100621] [PMCID: PMC9470580] [DOI: 10.1038/s41598-022-19563-0]
Abstract
Accurate classification of brain tumor subtypes is important for prognosis and treatment. Researchers are developing tools based on static and dynamic feature extraction and applying machine learning and deep learning. However, static features require further analysis to compute the relevance, strength, and type of association. Recently, the Bayesian inference approach has gained attention for deeper analysis of static (hand-crafted) features, to unfold hidden dynamics and relationships among features. We computed gray-level co-occurrence matrix (GLCM) features from brain tumor meningioma and pituitary MRIs and then ranked them using entropy methods. The highly ranked Energy feature was chosen as the target variable for further empirical analysis of dynamic profiling and optimization, to unfold the nonlinear intrinsic dynamics of the GLCM features extracted from brain MRIs. The proposed method thus enables a detailed analysis of the computed GLCM features, supporting better understanding of their hidden dynamics for the proper diagnosis and prognosis of tumor types.
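The Energy feature ranked highest above is derived from the grey-level co-occurrence matrix: count how often pairs of grey levels co-occur at a given offset, normalise to probabilities, and sum the squared entries. A minimal sketch for the horizontal (0, 1) offset (an illustration, not the authors' pipeline):

```python
import numpy as np

def glcm_energy(image, levels=4):
    """GLCM Energy (angular second moment) for the (0, 1) offset.

    image: 2-D integer array with grey levels in [0, levels).
    """
    glcm = np.zeros((levels, levels))
    for row in image:
        for a, b in zip(row[:-1], row[1:]):  # horizontal neighbours
            glcm[a, b] += 1
    p = glcm / glcm.sum()          # normalise to joint probabilities
    return float(np.sum(p ** 2))   # Energy = sum of squared entries
```

A perfectly uniform image yields the maximum Energy of 1.0, since all co-occurrence mass falls in a single cell.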
27
Alarcão SM, Mendonça V, Maruta C, Fonseca MJ. ExpertosLF: dynamic late fusion of CBIR systems using online learning with relevance feedback. MULTIMEDIA TOOLS AND APPLICATIONS 2022; 82:11619-11661. [PMID: 36035324] [PMCID: PMC9391217] [DOI: 10.1007/s11042-022-13119-0]
Abstract
One of the main challenges in CBIR systems is choosing discriminative and compact features, among dozens, to represent the images under comparison. Over the years, a great effort has been made to combine multiple features, mainly using early, late, and hierarchical fusion techniques. Unveiling the perfect combination of features is highly domain-specific and dependent on the type of image. Thus, designing a CBIR system for a new dataset or domain involves a huge experimentation overhead, leading to multiple fine-tuned CBIR systems. It would be desirable to dynamically find the best combination of CBIR systems without such extensive experimentation and without requiring prior domain knowledge. In this paper, we propose ExpertosLF, a model-agnostic, interpretable late-fusion technique based on online learning with expert advice, which dynamically combines CBIR systems without knowing a priori which ones are best for a given domain. At each query, ExpertosLF takes advantage of the user's feedback to determine each CBIR system's contribution to the ensemble for the following queries. ExpertosLF produces an interpretable ensemble that is independent of the dataset and domain. Moreover, ExpertosLF is designed to be modular and scalable. Experiments on 13 benchmark datasets from the biomedical, real, and sketch domains revealed that: (i) ExpertosLF surpasses the performance of state-of-the-art late-fusion techniques; (ii) it successfully and quickly converges to the performance of the best CBIR sets across domains without any previous domain knowledge (in most cases, fewer than 25 queries need to receive human feedback).
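The "online learning with expert advice" mechanism can be sketched with the classic exponentially weighted forecaster: after each query, every CBIR system (an "expert") is penalised in proportion to the loss its ranking incurred under the user's relevance feedback. This is a generic sketch of that family of updates, not the exact ExpertosLF rule:

```python
import numpy as np

def update_expert_weights(weights, losses, eta=1.0):
    """One multiplicative-weights round: experts with higher loss
    (worse rankings per user feedback) lose weight; the result is
    renormalised so the weights remain a distribution."""
    w = np.asarray(weights, dtype=float) * np.exp(-eta * np.asarray(losses, dtype=float))
    return w / w.sum()
```

Repeating this round after every query shifts the ensemble's weight toward the CBIR systems that consistently satisfy the user, which is how such methods converge to the best expert without prior domain knowledge.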
Affiliation(s)
- Soraia M. Alarcão: LASIGE, Faculdade de Ciências, Universidade de Lisboa, Lisboa, Portugal
- Vânia Mendonça: INESC-ID, Instituto Superior Técnico, Universidade de Lisboa, Lisboa, Portugal
- Carolina Maruta: Laboratório de Estudos de Linguagem, Centro de Estudos Egas Moniz, Faculdade de Medicina, Universidade de Lisboa, Lisboa, Portugal
- Manuel J. Fonseca: LASIGE, Faculdade de Ciências, Universidade de Lisboa, Lisboa, Portugal
28
Almalki YE, Ali MU, Kallu KD, Masud M, Zafar A, Alduraibi SK, Irfan M, Basha MAA, Alshamrani HA, Alduraibi AK, Aboualkheir M. Isolated Convolutional-Neural-Network-Based Deep-Feature Extraction for Brain Tumor Classification Using Shallow Classifier. Diagnostics (Basel) 2022; 12:diagnostics12081793. [PMID: 35892504] [PMCID: PMC9331664] [DOI: 10.3390/diagnostics12081793]
Abstract
In today’s world, a brain tumor is one of the most serious diseases. If it is detected at an advanced stage, it can lead to a very limited survival rate. Therefore, brain tumor classification is crucial for appropriate therapeutic planning to improve patients' quality of life. This research investigates a deep-feature-trained brain tumor detection and differentiation model using classical/linear machine-learning classifiers (MLCs). In this study, transfer learning is used to obtain deep brain magnetic resonance imaging (MRI) scan features from a constructed convolutional neural network (CNN). First, isolated CNNs with multiple depths (19, 22, and 25 layers) are constructed and trained to evaluate performance. The developed CNN models are then utilized to train the MLCs on deep features extracted via transfer learning. Available brain MRI datasets are employed to validate the proposed approach. Deep features of pre-trained models are also extracted to evaluate and compare their performance with the proposed approach. The proposed CNN-deep-feature-trained support vector machine model yielded higher accuracy than other commonly used pre-trained deep-feature MLC training models. The presented approach detects and distinguishes brain tumors with 98% accuracy. It also achieves a good classification rate (97.2%) on an unknown dataset not used to train the model. Following extensive testing and analysis, the suggested technique might be helpful in assisting doctors in diagnosing brain tumors.
Affiliation(s)
- Yassir Edrees Almalki: Division of Radiology, Department of Internal Medicine, Medical College, Najran University, Najran 61441, Saudi Arabia
- Muhammad Umair Ali: Department of Unmanned Vehicle Engineering, Sejong University, Seoul 05006, Korea
- Karam Dad Kallu: Department of Robotics and Intelligent Machine Engineering (RIME), School of Mechanical and Manufacturing Engineering (SMME), National University of Sciences and Technology (NUST) H−12, Islamabad 44000, Pakistan
- Manzar Masud: Department of Mechanical Engineering, Capital University of Science and Technology (CUST), Islamabad 44000, Pakistan
- Amad Zafar (corresponding author): Department of Electrical Engineering, The Ibadat International University, Islamabad 54590, Pakistan
- Sharifa Khalid Alduraibi: Department of Radiology, College of Medicine, Qassim University, Buraidah 52571, Saudi Arabia
- Muhammad Irfan: Electrical Engineering Department, College of Engineering, Najran University, Najran 61441, Saudi Arabia
- Hassan A. Alshamrani: Radiological Sciences Department, College of Applied Medical Sciences, Najran University, Najran 61441, Saudi Arabia
- Alaa Khalid Alduraibi: Department of Radiology, College of Medicine, Qassim University, Buraidah 52571, Saudi Arabia
- Mervat Aboualkheir: Department of Radiology and Medical Imaging, College of Medicine, Taibah University, Madinah 42353, Saudi Arabia
29
Almalki YE, Ali MU, Ahmed W, Kallu KD, Zafar A, Alduraibi SK, Irfan M, Basha MAA, Alshamrani HA, Alduraibi AK. Robust Gaussian and Nonlinear Hybrid Invariant Clustered Features Aided Approach for Speeded Brain Tumor Diagnosis. LIFE (BASEL, SWITZERLAND) 2022; 12:life12071084. [PMID: 35888172 PMCID: PMC9315657 DOI: 10.3390/life12071084] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 06/20/2022] [Revised: 07/14/2022] [Accepted: 07/17/2022] [Indexed: 11/16/2022]
Abstract
Brain tumors reduce life expectancy because no cure exists, and their diagnosis involves complex, costly procedures such as magnetic resonance imaging (MRI) followed by lengthy, careful examination to determine severity. Timely diagnosis in the early stages, however, may save a patient's life. This work therefore combines MRI with a machine learning approach to diagnose brain tumor type (glioma, meningioma, no tumor, and pituitary) in a timely manner. Gaussian and nonlinear scale features are extracted from the MRI because of their robustness to rotation, scaling, and noise, issues common to image-processing features such as texture, local binary patterns, and histograms of oriented gradients. To capture fine detail, each MRI is broken down into many small 8 × 8-pixel sub-images. To counter memory issues, the strongest features are selected by variance, giving 400 Gaussian and 400 nonlinear scale features, which are hybridized for each MRI. Finally, classical machine learning classifiers are used to assess the proposed hybrid feature vector. An available online brain MRI image dataset is used to validate the proposed approach. The results show that the support vector machine-trained model has the highest classification accuracy, 95.33%, with low computational time. The results are also compared with the recent literature, which shows that the proposed model could help clinicians/doctors with the early diagnosis of brain tumors.
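The patching and variance-based selection step this abstract describes can be sketched roughly as follows. It is a minimal illustration, not the authors' implementation: the random `image` stands in for an MR slice, and keeping the 100 strongest patches is an assumed cutoff (the paper keeps 400 Gaussian plus 400 nonlinear scale features).

```python
import numpy as np

rng = np.random.default_rng(1)
image = rng.random((128, 128))  # stand-in for one MR slice

# Split into non-overlapping 8x8 patches -> one row per patch (256 x 64).
p = 8
patches = (image.reshape(128 // p, p, 128 // p, p)
                .swapaxes(1, 2)
                .reshape(-1, p * p))

# Variance-based selection: rank patches by variance, keep the strongest.
variances = patches.var(axis=1)
keep = np.argsort(variances)[::-1][:100]  # assumed cutoff of 100 patches
selected = patches[keep]
```

The `reshape`/`swapaxes` pair is the standard numpy idiom for non-overlapping blockification; the selected rows would then be hybridized with the second feature family before classification.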
Affiliation(s)
- Yassir Edrees Almalki
- Division of Radiology, Department of Internal Medicine, Medical College, Najran University, Najran 61441, Saudi Arabia
- Muhammad Umair Ali
- Department of Unmanned Vehicle Engineering, Sejong University, Seoul 05006, Korea
- Waqas Ahmed
- Secret Minds, Entrepreneurial Organization, Islamabad 44000, Pakistan
- Karam Dad Kallu
- Department of Robotics and Intelligent Machine Engineering (RIME), School of Mechanical and Manufacturing Engineering (SMME), National University of Sciences and Technology (NUST), H-12, Islamabad 44000, Pakistan
- Amad Zafar
- Department of Electrical Engineering, The Ibadat International University, Islamabad 54590, Pakistan
- Correspondence:
- Sharifa Khalid Alduraibi
- Department of Radiology, College of Medicine, Qassim University, Buraidah 52571, Saudi Arabia
- Muhammad Irfan
- Electrical Engineering Department, College of Engineering, Najran University, Najran 61441, Saudi Arabia
- Hassan A. Alshamrani
- Radiological Sciences Department, College of Applied Medical Sciences, Najran University, Najran 61441, Saudi Arabia
- Alaa Khalid Alduraibi
- Department of Radiology, College of Medicine, Qassim University, Buraidah 52571, Saudi Arabia
30
Müller D, Soto-Rey I, Kramer F. Towards a guideline for evaluation metrics in medical image segmentation. BMC Res Notes 2022; 15:210. [PMID: 35725483 PMCID: PMC9208116 DOI: 10.1186/s13104-022-06096-y] [Citation(s) in RCA: 55] [Impact Index Per Article: 27.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/12/2022] [Accepted: 06/07/2022] [Indexed: 11/10/2022] Open
Abstract
In the last decade, research on artificial intelligence has grown rapidly with deep learning models, especially in the field of medical image segmentation. Various studies have demonstrated that these models have powerful prediction capabilities and achieve results comparable to those of clinicians. However, recent studies revealed that evaluation in image segmentation studies often lacks reliable model performance assessment and shows statistical bias caused by incorrect metric implementation or usage. This work therefore provides an overview and interpretation guide for the following metrics for medical image segmentation evaluation, in binary as well as multi-class problems: Dice similarity coefficient, Jaccard index, sensitivity, specificity, Rand index, ROC curves, Cohen's kappa, and Hausdorff distance. Furthermore, common issues such as class imbalance and statistical as well as interpretation biases in evaluation are discussed. In summary, we propose a guideline for standardized medical image segmentation evaluation to improve evaluation quality, reproducibility, and comparability in the research field.
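Two of the overlap metrics covered by this guideline, the Dice similarity coefficient and the Jaccard index, can be written compactly for binary masks; the toy masks below are illustrative.

```python
import numpy as np

def dice(pred, truth):
    """Dice similarity coefficient: 2|A∩B| / (|A| + |B|)."""
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum())

def jaccard(pred, truth):
    """Jaccard index (IoU): |A∩B| / |A∪B|."""
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return inter / union

pred  = np.array([[1, 1, 0, 0]], dtype=bool)  # predicted segmentation mask
truth = np.array([[1, 0, 1, 0]], dtype=bool)  # ground-truth mask
```

For binary masks the two are related by J = D / (2 − D), which is one reason reporting both adds little independent information.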
Affiliation(s)
- Dominik Müller
- IT-Infrastructure for Translational Medical Research, University of Augsburg, Augsburg, Germany
- Medical Data Integration Center, Institute for Digital Medicine, University Hospital Augsburg, Augsburg, Germany
- Iñaki Soto-Rey
- Medical Data Integration Center, Institute for Digital Medicine, University Hospital Augsburg, Augsburg, Germany
- Frank Kramer
- IT-Infrastructure for Translational Medical Research, University of Augsburg, Augsburg, Germany
31
Brain Tumor Classification Based on Attention Guided Deep Learning Model. INT J COMPUT INT SYS 2022. [DOI: 10.1007/s44196-022-00090-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022] Open
Abstract
Cancer is the second leading cause of death worldwide, and brain tumors account for one out of every four cancer deaths. An accurate and timely diagnosis enables timely treatment. In recent years, the rapid development of image classification has facilitated computer-aided diagnosis. The convolutional neural network (CNN) is one of the most widely used neural network models for classifying images, but its effectiveness is limited because it cannot accurately identify the focal point of the lesion. This paper proposes a novel brain tumor classification model that integrates an attention mechanism and a multipath network to address these issues. The attention mechanism selects the critical information belonging to the target region while ignoring irrelevant details; the multipath network distributes the data across multiple channels, transforms each channel, and merges the results of all branches. The multipath network is equivalent to grouped convolution, which reduces complexity. Experimental evaluations of this model on a dataset of 3,064 MR images achieved an overall accuracy of 98.61%, outperforming previous studies on this dataset.
32
Reena MR, Ameer PM. A content-based image retrieval system for the diagnosis of lymphoma using blood micrographs: An incorporation of deep learning with a traditional learning approach. Comput Biol Med 2022; 145:105463. [PMID: 35421794 DOI: 10.1016/j.compbiomed.2022.105463] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/25/2021] [Revised: 03/24/2022] [Accepted: 03/25/2022] [Indexed: 12/01/2022]
Abstract
Lymphomas, or cancers of the lymphatic system, account for around half of all blood cancers diagnosed each year. Lymphoma is a condition that is difficult to diagnose, and accurate diagnosis is critical for effective treatment. Manual microscopic analysis of blood cells requires the involvement of medical experts, whose precision depends on their skill, and it takes time. This paper describes a content-based image retrieval system that uses deep learning-based feature extraction and a traditional learning method for feature reduction to retrieve similar images from a database, aiding early/initial lymphoma diagnosis. The proposed algorithm employs a pre-trained network, ResNet-101, to extract the image features required to distinguish four cell types: lymphoma cells, blasts, lymphocytes, and other cells. Class imbalance is resolved by over-sampling the training data followed by data augmentation. Deep learning features are extracted from the activations of the feature layer in the pre-trained net, then dimensionality reduction techniques select discriminant features for the image retrieval system. Euclidean distance is used as the similarity measure to retrieve similar images from the database. The experiments use a microscopic blood image dataset with 1,673 leukocytes of the categories blasts, lymphoma, lymphocytes, and other cells. The proposed algorithm achieves 98.74% precision in lymphoma cell classification and 99.22% precision@10 for lymphoma cell image retrieval. Experimental findings confirm the practicability and effectiveness of the approach, and extended studies support using the proposed system in real medical applications, helping doctors diagnose lymphoma while substantially reducing human resource requirements.
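The retrieval step this abstract describes, ranking database images by Euclidean distance to a query and scoring precision@10, can be sketched as follows. The synthetic features and labels are stand-ins for the reduced deep features in the paper; cluster sizes and dimensions are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-in feature database: two well-separated classes, 16-D features.
db = np.vstack([rng.normal(0.0, 0.3, (50, 16)),   # class 0
                rng.normal(3.0, 0.3, (50, 16))])  # class 1
db_labels = np.repeat([0, 1], 50)

query, query_label = rng.normal(3.0, 0.3, 16), 1  # query from class 1

# Euclidean distance as the similarity measure; smaller = more similar.
dists = np.linalg.norm(db - query, axis=1)
top10 = np.argsort(dists)[:10]

# precision@10: fraction of the 10 retrieved items with the query's label.
precision_at_10 = (db_labels[top10] == query_label).mean()
```

With real features the database rows would be the dimensionality-reduced CNN activations; only the distance ranking and the precision@k bookkeeping are shown here.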
Affiliation(s)
- M Roy Reena
- Department of Electronics and Communication Engineering, National Institute of Technology, Calicut, India
- P M Ameer
- Department of Electronics and Communication Engineering, National Institute of Technology, Calicut, India
33
Bhattacharjee S, Prakash D, Kim CH, Kim HC, Choi HK. Texture, Morphology, and Statistical Analysis to Differentiate Primary Brain Tumors on Two-Dimensional Magnetic Resonance Imaging Scans Using Artificial Intelligence Techniques. Healthc Inform Res 2022; 28:46-57. [PMID: 35172090 PMCID: PMC8850171 DOI: 10.4258/hir.2022.28.1.46] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/15/2020] [Accepted: 01/05/2022] [Indexed: 11/23/2022] Open
Abstract
Objectives: A primary brain tumor starts to grow from brain cells and occurs as a result of errors in the DNA of normal cells. This study was therefore carried out to analyze the two-dimensional (2D) texture, morphology, and statistical features of brain tumors and to perform classification using artificial intelligence (AI) techniques.
Methods: AI techniques can help radiologists diagnose primary brain tumors without any invasive measurement. We focused on deep learning (DL) and machine learning (ML) techniques for the texture, morphological, and statistical feature classification of three tumor types (glioma, meningioma, and pituitary). T1-weighted 2D magnetic resonance imaging (MRI) scans were used for analysis and classification (multiclass and binary). A total of 102 features were calculated for each tumor, and the 20 most significant were selected using a three-step feature selection method comprising duplicate-feature removal, Pearson correlation, and recursive feature elimination.
Results: Among the multiclass and binary predictions, a long short-term memory binary classifier (glioma vs. meningioma) showed the best performance, with an average accuracy, recall, precision, F1-score, and kappa coefficient of 97.7%, 97.2%, 97.5%, 97.0%, and 94.7%, respectively.
Conclusions: Early diagnosis of primary brain tumors is very important because it can be the key to effective treatment. This research therefore presents a method for early diagnosis by effectively classifying three types of primary brain tumors.
Affiliation(s)
- Deekshitha Prakash
- Department of Computer Engineering, u-AHRC, Inje University, Gimhae, Korea
- Cho-Hee Kim
- Department of Digital Anti-Aging Healthcare, Inje University, Gimhae, Korea
- Hee-Cheol Kim
- Department of Digital Anti-Aging Healthcare, Inje University, Gimhae, Korea
- Heung-Kook Choi
- Department of Computer Engineering, u-AHRC, Inje University, Gimhae, Korea
- AI R&D Center, JLK Inc., Seoul, Korea
34
Zhou G, Lu B, Hu X, Ni T. Sparse Representation-Based Discriminative Metric Learning for Brain MRI Image Retrieval. Front Neurosci 2022; 15:829040. [PMID: 35095411 PMCID: PMC8795867 DOI: 10.3389/fnins.2021.829040] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/04/2021] [Accepted: 12/27/2021] [Indexed: 12/20/2022] Open
Abstract
Magnetic resonance imaging (MRI) provides good diagnostic capability for important organs and regions of the body and has become a common and important disease detection technology. At the same time, medical imaging data is growing at an explosive rate, so retrieving similar medical images from a huge database is of great significance for doctors' auxiliary diagnosis and treatment. In this paper, combining the advantages of sparse representation and metric learning, a sparse representation-based discriminative metric learning (SRDML) approach is proposed for medical image retrieval of brain MRI. The SRDML approach uses a sparse representation framework to learn robust feature representations of brain MRI, and uses metric learning to project new features into a metric space with matching discrimination. In such a metric space, the optimal similarity measure is obtained by using local constraints on atoms and pairwise constraints on coding coefficients, so that the distance between similar images is less than a given threshold and the distance between dissimilar images is greater than another given threshold. Experiments are designed and run on the brain MRI dataset created by Chang. Experimental results show that the SRDML approach obtains satisfactory retrieval performance and achieves accurate brain MRI image retrieval.
Affiliation(s)
- Guohua Zhou
- School of Information Engineering, Changzhou Institute of Industry Technology, Changzhou, China
- School of Computer Science and Artificial Intelligence, Changzhou University, Changzhou, China
- College of Information Engineering, Yangzhou University, Yangzhou, China
- Bing Lu
- School of Information Engineering, Changzhou Institute of Industry Technology, Changzhou, China
- Xuelong Hu
- College of Information Engineering, Yangzhou University, Yangzhou, China
- Tongguang Ni
- School of Computer Science and Artificial Intelligence, Changzhou University, Changzhou, China
- Correspondence: Tongguang Ni
35
Alanazi MF, Ali MU, Hussain SJ, Zafar A, Mohatram M, Irfan M, AlRuwaili R, Alruwaili M, Ali NH, Albarrak AM. Brain Tumor/Mass Classification Framework Using Magnetic-Resonance-Imaging-Based Isolated and Developed Transfer Deep-Learning Model. SENSORS (BASEL, SWITZERLAND) 2022; 22:372. [PMID: 35009911 PMCID: PMC8749789 DOI: 10.3390/s22010372] [Citation(s) in RCA: 48] [Impact Index Per Article: 24.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/17/2021] [Revised: 12/20/2021] [Accepted: 12/31/2021] [Indexed: 05/13/2023]
Abstract
With the advancement in technology, machine learning can be applied to diagnose the mass/tumor in the brain using magnetic resonance imaging (MRI). This work proposes a novel developed transfer deep-learning model for the early diagnosis of brain tumors into their subclasses, such as pituitary, meningioma, and glioma. First, various layers of isolated convolutional-neural-network (CNN) models are built from scratch to check their performances for brain MRI images. Then, the 22-layer, binary-classification (tumor or no tumor) isolated-CNN model is re-utilized to re-adjust the neurons' weights for classifying brain MRI images into tumor subclasses using the transfer-learning concept. As a result, the developed transfer-learned model has a high accuracy of 95.75% for the MRI images of the same MRI machine. Furthermore, the developed transfer-learned model has also been tested using the brain MRI images of another machine to validate its adaptability, general capability, and reliability for real-time application in the future. The results showed that the proposed model has a high accuracy of 96.89% for an unseen brain MRI dataset. Thus, the proposed deep-learning framework can help doctors and radiologists diagnose brain tumors early.
Affiliation(s)
- Muhannad Faleh Alanazi
- Radiology, Department of Internal Medicine, College of Medicine, Jouf University, Sakaka 72388, Saudi Arabia
- Muhammad Umair Ali
- Department of Unmanned Vehicle Engineering, Sejong University, Seoul 05006, Korea
- Shaik Javeed Hussain
- Department of Electrical and Electronics, Global College of Engineering and Technology, Muscat 112, Oman
- Amad Zafar
- Department of Electrical Engineering, The Ibadat International University, Islamabad 54590, Pakistan
- Mohammed Mohatram
- Department of Electrical and Electronics, Global College of Engineering and Technology, Muscat 112, Oman
- Muhammad Irfan
- Electrical Engineering Department, College of Engineering, Najran University, Najran 61441, Saudi Arabia
- Raed AlRuwaili
- Radiology, Department of Internal Medicine, College of Medicine, Jouf University, Sakaka 72388, Saudi Arabia
- Mubarak Alruwaili
- Radiology, Department of Internal Medicine, College of Medicine, Jouf University, Sakaka 72388, Saudi Arabia
- Naif H. Ali
- Department of Internal Medicine, Medical College, Najran University, Najran 61441, Saudi Arabia
- Anas Mohammad Albarrak
- Department of Internal Medicine, College of Medicine, Prince Sattam Bin Abdulaziz University, Alkharj 16278, Saudi Arabia
36
Sailunaz K, Bestepe D, Alhajj S, Özyer T, Rokne J, Alhajj R. Convex Hull in Brain Tumor Segmentation. Brain Inform 2022. [DOI: 10.1007/978-3-031-15037-1_18] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/30/2022] Open
37
Fayaz M, Torokeldiev N, Turdumamatov S, Qureshi MS, Qureshi MB, Gwak J. An Efficient Methodology for Brain MRI Classification Based on DWT and Convolutional Neural Network. SENSORS 2021; 21:s21227480. [PMID: 34833556 PMCID: PMC8619601 DOI: 10.3390/s21227480] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 10/17/2021] [Revised: 11/01/2021] [Accepted: 11/08/2021] [Indexed: 12/21/2022]
Abstract
In this paper, a model based on the discrete wavelet transform (DWT) and a convolutional neural network (CNN) for brain MR image classification is proposed. The proposed model comprises three main stages: preprocessing, feature extraction, and classification. In preprocessing, a median filter removes salt-and-pepper noise from the brain MRI images. For the DWT, the discrete Haar wavelet is used: a 3-level Haar decomposition is applied to the images to remove low-level detail and reduce image size. Next, a CNN, a prevalent classification method widely used across domains, classifies the brain MR images as normal or abnormal. The proposed methodology is applied to a standard dataset, and performance is evaluated with several measures. The results indicate that the proposed method performs well, with 99% accuracy, and a comparison with several state-of-the-art algorithms shows that it outperforms the counterpart algorithms. The proposed model has been developed for practical applications.
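The 3-level decomposition step can be sketched as follows. This minimal numpy version keeps only the approximation (LL) band at each level, using the 2 × 2 block average as the low-pass filter (an assumed normalization); a full DWT, e.g. via PyWavelets, would also return the detail bands.

```python
import numpy as np

def haar_ll(x):
    """One 2D Haar level: the LL (approximation) band, taken here as the
    2x2 block average, halving each spatial dimension."""
    return 0.25 * (x[0::2, 0::2] + x[0::2, 1::2] +
                   x[1::2, 0::2] + x[1::2, 1::2])

image = np.arange(64 * 64, dtype=float).reshape(64, 64)  # stand-in MR slice

ll = image
for _ in range(3):   # 3-level decomposition: 64x64 -> 32x32 -> 16x16 -> 8x8
    ll = haar_ll(ll)
```

Each level quarters the pixel count, which is exactly the size reduction the paper exploits before feeding the CNN.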
Affiliation(s)
- Muhammad Fayaz
- Department of Computer Science, University of Central Asia, 310 Lenin Street, Naryn 722918, Kyrgyzstan
- Nurlan Torokeldiev
- Department of Mathematics and Natural Sciences, University of Central Asia, Khorog 736, Tajikistan
- Samat Turdumamatov
- Department of Mathematics and Natural Sciences, University of Central Asia, 310 Lenin Street, Naryn 722918, Kyrgyzstan
- Muhammad Shuaib Qureshi
- Department of Computer Science, University of Central Asia, 310 Lenin Street, Naryn 722918, Kyrgyzstan
- Muhammad Bilal Qureshi
- Department of Computer Science and IT, University of Lakki Marwat, Lakki Marwat 28420, KPK, Pakistan
- Jeonghwan Gwak
- Department of Software, Korea National University of Transportation, Chungju 27469, Korea
- Department of Biomedical Engineering, Korea National University of Transportation, Chungju 27469, Korea
- Department of AI Robotics Engineering, Korea National University of Transportation, Chungju 27469, Korea
- Department of IT & Energy Convergence (BK21 FOUR), Korea National University of Transportation, Chungju 27469, Korea
- Correspondence: ; Tel.: +82-43-841-5852
38
Sekhar A, Biswas S, Hazra R, Sunaniya AK, Mukherjee A, Yang L. Brain tumor classification using fine-tuned GoogLeNet features and machine learning algorithms: IoMT enabled CAD system. IEEE J Biomed Health Inform 2021; 26:983-991. [PMID: 34324425 DOI: 10.1109/jbhi.2021.3100758] [Citation(s) in RCA: 23] [Impact Index Per Article: 7.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/05/2022]
Abstract
In the healthcare research community, the Internet of Medical Things (IoMT) is transforming the healthcare system into the future internet. In an IoMT-enabled computer-aided diagnosis (CAD) system, health-related information is stored via the internet and supportive data is provided to patients. Various smart devices interconnected via the internet help patients communicate with medical experts through IoMT-based remote healthcare systems for life-threatening diseases such as brain tumors. The brain tumor is one of the most dreadful diseases known to human beings; tumors are often precursors to cancers, and survival rates are very low, so early detection and classification of tumors can save many lives. An IoMT-enabled CAD system plays a vital role in solving these problems. Deep learning, a newer branch of machine learning, has attracted much attention in the last few years, and convolutional neural networks (CNNs) have been widely used in this field. In this paper, we classify brain tumors into three classes, namely glioma, meningioma, and pituitary, using a transfer learning model. The features of the brain MRI images are extracted using a pre-trained CNN, GoogLeNet, and then classified using softmax, support vector machine (SVM), and k-nearest neighbor (k-NN) classifiers. The proposed model is trained and tested on the CE-MRI Figshare dataset. Images from the Harvard medical repository are also used to classify four tumor types, and the results are compared with other state-of-the-art models. Performance measures such as accuracy, precision, recall, specificity, and F1-score are examined to evaluate the proposed model.
39
Hashemzehi R, Seyyed Mahdavi SJ, Kheirabadi M, Kamel SR. Y-net: a reducing gaussian noise convolutional neural network for MRI brain tumor classification with NADE concatenation. Biomed Phys Eng Express 2021; 7. [PMID: 34198284 DOI: 10.1088/2057-1976/ac107b] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/25/2021] [Accepted: 07/01/2021] [Indexed: 11/11/2022]
Abstract
Brain tumors are among the most serious cancers that can have a negative impact on a person's quality of life. The magnetic resonance imaging (MRI) analysis detects abnormal cell growth in the skull. Recently, machine learning models such as artificial neural networks have been used to detect brain tumors more quickly. To classify brain tumors, this research introduces the Y-net, a new convolutional neural network (CNN) based on the convolutional U-net architecture. We apply a NADE concatenation method in pre-processing the MR images for enhanced Y-net performance. We put our approach to the test using two MRI datasets of brain tumors. The first dataset contains three different types of brain tumors, while the second dataset includes a separate category for healthy brains. We show that our model is resistant to white noise and can obtain excellent classification accuracy with a limited number of medical images.
Affiliation(s)
- Raheleh Hashemzehi
- Department of Computer Science, Neyshabur Branch, Islamic Azad University, Neyshabur, Iran
- Maryam Kheirabadi
- Department of Computer Science, Neyshabur Branch, Islamic Azad University, Neyshabur, Iran
- Seyed Reza Kamel
- Department of Software Engineering, Mashhad Branch, Islamic Azad University, Mashhad, Iran
40
Gu Y, Li K. A Transfer Model Based on Supervised Multi-Layer Dictionary Learning for Brain Tumor MRI Image Recognition. Front Neurosci 2021; 15:687496. [PMID: 34122003 PMCID: PMC8193061 DOI: 10.3389/fnins.2021.687496] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/29/2021] [Accepted: 04/19/2021] [Indexed: 11/30/2022] Open
Abstract
Artificial intelligence (AI) is an effective technology for automatic brain tumor MRI image recognition. Training an AI model requires a large amount of labeled data, but medical data must be labeled by professional clinicians, which makes data collection complex and expensive. Moreover, a traditional AI model requires that the training and test data be independent and identically distributed. To solve this problem, we propose a transfer model based on supervised multi-layer dictionary learning (TSMDL) for brain tumor MRI image recognition. With the help of knowledge learned from related domains, the goal of this model is to solve transfer learning tasks in which the target domain has only a small number of labeled samples. Within a multi-layer dictionary learning framework, the proposed model learns a dictionary shared by the source and target domains at each layer to explore the intrinsic connections and shared information between domains. At the same time, making full use of sample label information, a Laplacian regularization term is introduced so that the dictionary codings of similar samples are as close as possible and those of different classes are as different as possible. Recognition experiments on the brain MRI image datasets REMBRANDT and Figshare show that the model performs better than competitive state-of-the-art methods.
Affiliation(s)
- Yi Gu
- School of Artificial Intelligence and Computer Science, Jiangnan University, Wuxi, China
- Kang Li
- School of Artificial Intelligence and Computer Science, Jiangnan University, Wuxi, China
41
Gu X, Shen Z, Xue J, Fan Y, Ni T. Brain Tumor MR Image Classification Using Convolutional Dictionary Learning With Local Constraint. Front Neurosci 2021; 15:679847. [PMID: 34122001 PMCID: PMC8193950 DOI: 10.3389/fnins.2021.679847] [Citation(s) in RCA: 17] [Impact Index Per Article: 5.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/12/2021] [Accepted: 04/09/2021] [Indexed: 11/30/2022] Open
Abstract
Brain tumor image classification is an important part of medical image processing, assisting doctors in making accurate diagnoses and treatment plans. Magnetic resonance (MR) imaging is one of the main imaging tools for studying brain tissue. In this article, we propose a brain tumor MR image classification method using convolutional dictionary learning with local constraint (CDLLC). Our method integrates multi-layer dictionary learning into a convolutional neural network (CNN) structure to exploit discriminative information. Encoding a vector on a dictionary can be considered as multiple projections into new spaces, and the resulting coding vector is sparse. Meanwhile, to preserve the geometric structure of the data and utilize the supervised information, we construct a local constraint on the atoms through a supervised k-nearest-neighbor graph, so that the discriminative power of the obtained dictionary is strong. An efficient iterative optimization scheme is designed to solve the resulting problem. In the experiments, two clinically relevant multi-class classification tasks on the Cheng and REMBRANDT datasets are designed. The evaluation results demonstrate that our method is effective for brain tumor MR image classification and outperforms the compared methods.
Affiliation(s)
- Xiaoqing Gu
- School of Computer Science and Artificial Intelligence, Changzhou University, Changzhou, China
- Zongxuan Shen
- School of Computer Science and Artificial Intelligence, Changzhou University, Changzhou, China
- Jing Xue
- Department of Nephrology, Affiliated Wuxi People's Hospital of Nanjing Medical University, Wuxi, China
- Yiqing Fan
- Viterbi School of Engineering, University of Southern California, Los Angeles, CA, United States
- Tongguang Ni
- School of Computer Science and Artificial Intelligence, Changzhou University, Changzhou, China
42
Bansal S, Mehan V. Image retrieval of MRI brain tumour images based on SVM and FCM approaches. BIO-ALGORITHMS AND MED-SYSTEMS 2021. [DOI: 10.1515/bams-2021-0011] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Abstract
Objectives
The key challenge in Content-Based Medical Image Retrieval (CBMIR) frameworks for MRI (Magnetic Resonance Imaging) is the semantic gap between the low-level visual features captured by the MRI machine and the high-level semantics perceived by the human evaluator.
Methods
Conventional feature extraction strategies focus only on low-level or high-level features and rely on handcrafted features to narrow this gap. It is necessary to design a feature extraction framework that reduces the gap without handcrafted features by encoding and combining low-level and high-level features. Here, fuzzy clustering, a soft clustering technique, is applied for shape description, and an SVM (Support Vector Machine) is applied for classification. Because predefining the number of clusters and the membership matrix remains an open problem, a new predefinition step is proposed, and accordingly a new CBMIR procedure is suggested and validated.
Results
SVM and FCM (Fuzzy C-Means) are applied to the extracted feature structures. Consequently, the feature vector captures the salient content of the image. Retrieval of an image depends on the distance between the query and database images, called the similarity measure.
Conclusions
Experiments are performed on a 200-image database. Finally, the experimental results are evaluated in terms of recall and precision.
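As a rough illustration of the clustering half of this pipeline, the standard fuzzy C-means updates fit in a few lines of numpy. This is the textbook algorithm, not the paper's modified predefinition procedure; the cluster count, fuzzifier, and data below are placeholder assumptions.

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, iters=100, seed=0):
    """Fuzzy C-Means: returns cluster centers (c x d) and the soft
    membership matrix U (n x c), whose rows sum to 1."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)
    p = 2.0 / (m - 1.0)
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]          # fuzzy-weighted means
        d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-9
        U = 1.0 / (d ** p * (d ** -p).sum(axis=1, keepdims=True))  # membership update
    return centers, U
```

The returned membership matrix, rather than a hard partition, is what makes FCM attractive for describing ambiguous tumor boundaries.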
Affiliation(s)
- Sonia Bansal: ECE Department, Maharaja Agrasen University, Solan, Himachal Pradesh, India
- Vineet Mehan: CSE Department, Maharaja Agrasen University, Solan, Himachal Pradesh, India

43
Tetik B, Ucuzal H, Yaşar Ş, Çolak C. Automated classification of brain tumors by deep learning-based models on magnetic resonance images using a developed web-based interface. KONURALP TIP DERGISI 2021. [DOI: 10.18521/ktd.889777] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
44
Kokkalla S, Kakarla J, Venkateswarlu IB, Singh M. Three-class brain tumor classification using deep dense inception residual network. Soft comput 2021; 25:8721-8729. [PMID: 33897297 PMCID: PMC8051839 DOI: 10.1007/s00500-021-05748-8] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 03/11/2021] [Indexed: 11/17/2022]
Abstract
Three-class brain tumor classification has become a prominent research task due to the distinct characteristics of tumors. Existing proposals employ deep neural networks for the three-class classification; however, achieving high accuracy remains an open challenge in brain image classification. We have proposed a deep dense inception residual network for three-class brain tumor classification, customizing the output layer of Inception ResNet v2 with a deep dense network and a softmax layer. The deep dense network improves the classification accuracy of the proposed model. The model has been evaluated using key performance metrics on a publicly available brain tumor image dataset of 3064 images. Our proposed model outperforms existing models with a mean accuracy of 99.69%. Further, similar performance has been obtained on noisy data.
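The head customization described here, a deep dense network followed by a softmax layer on top of backbone features, reduces to a stack of affine-plus-ReLU layers ending in a softmax. The numpy forward pass below is a hedged sketch; layer sizes and weights are illustrative, and the real model trains this head jointly with the Inception ResNet v2 backbone.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def dense_head(features, layers):
    """Forward pass of a dense classification head on backbone features.
    layers: list of (W, b) pairs; hidden layers use ReLU, the last uses softmax."""
    h = features
    for W, b in layers[:-1]:
        h = np.maximum(h @ W + b, 0.0)
    W, b = layers[-1]
    return softmax(h @ W + b)
```

Each row of the output is a probability distribution over the three tumor classes.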
45
Anjum S, Hussain L, Ali M, Abbasi AA, Duong TQ. Automated multi-class brain tumor types detection by extracting RICA based features and employing machine learning techniques. MATHEMATICAL BIOSCIENCES AND ENGINEERING : MBE 2021; 18:2882-2908. [PMID: 33892576 DOI: 10.3934/mbe.2021146] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
Among cancer types, brain tumors are one of the leading causes of cancer deaths across the globe. If a tumor is properly identified at an early stage, the chances of survival can be increased. Categorizing a brain tumor involves several factors, including the texture, type, and location of the tumor. We proposed a novel reconstruction independent component analysis (RICA) feature extraction method to detect multi-class brain tumor types (pituitary, meningioma, and glioma). We then employed robust machine learning techniques, namely support vector machine (SVM) with quadratic and linear kernels and linear discriminant analysis (LDA). A 10-fold cross-validation was employed for training and testing. For the multi-class classification, the sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), accuracy, and AUC were, respectively, 97.78%, 100%, 100%, 99.07%, 99.34%, and 0.9892 for detecting pituitary tumors using cubic SVM, followed by meningioma with accuracy of 96.96% and AUC of 0.9348, and glioma with accuracy of 95.88% and AUC of 0.9635. The findings indicate that the proposed RICA feature-based methodology has strong potential to detect multi-class brain tumor types, improving diagnostic efficiency, and can further improve prediction accuracy to achieve better clinical outcomes.
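Reconstruction ICA replaces the hard orthonormality constraint of classical ICA with a soft reconstruction penalty, so features can be learned by unconstrained optimization. The sketch below states that objective and minimizes it with a deliberately simple finite-difference gradient descent; real RICA implementations use the analytic gradient with L-BFGS, and all sizes here are toy assumptions.

```python
import numpy as np

def rica_cost(W, X, lam=0.1):
    """Reconstruction ICA objective: a soft reconstruction term
    ||W^T W X - X||_F^2 plus a smooth L1 sparsity penalty on the codes."""
    Z = W @ X                                # codes, one column per sample
    recon = ((W.T @ Z - X) ** 2).sum()
    sparsity = np.sqrt(Z ** 2 + 1e-8).sum()  # smooth approximation of |z|
    return 0.5 * recon + lam * sparsity

def rica_fit(X, k, lr=1e-3, steps=20, eps=1e-5, seed=0):
    """Tiny finite-difference gradient descent (illustrative only)."""
    rng = np.random.default_rng(seed)
    W = 0.1 * rng.standard_normal((k, X.shape[0]))
    for _ in range(steps):
        G = np.zeros_like(W)
        for i in range(W.shape[0]):
            for j in range(W.shape[1]):
                Wp = W.copy(); Wp[i, j] += eps
                Wm = W.copy(); Wm[i, j] -= eps
                G[i, j] = (rica_cost(Wp, X) - rica_cost(Wm, X)) / (2 * eps)
        W -= lr * G
    return W
```

The learned rows of `W` act as filters; the codes `W @ X` are the RICA features fed to the downstream SVM or LDA classifier.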
Affiliation(s)
- Sadia Anjum: Department of IT, Hazara University, Mansehra 21120, KPK, Pakistan
- Lal Hussain: Department of Computer Science & IT, University of Azad Jammu and Kashmir, King Abdullah Campus, Muzaffarabad 13100, Pakistan; Department of Computer Science & IT, University of Azad Jammu and Kashmir, Neelum Campus, Athmuqam 13230, Pakistan; Department of Radiology, Albert Einstein College of Medicine and Montefiore Medical Center, 111 East 210th Street, Bronx, NY 10467, USA
- Mushtaq Ali: Department of IT, Hazara University, Mansehra 21120, KPK, Pakistan
- Adeel Ahmed Abbasi: Department of Computer Science & IT, University of Azad Jammu and Kashmir, King Abdullah Campus, Muzaffarabad 13100, Pakistan; School of Computer Science and Engineering, Central South University, 932 Lushan S Rd, Yuelu District, Changsha, Hunan, China
- Tim Q. Duong: Department of Radiology, Albert Einstein College of Medicine and Montefiore Medical Center, 111 East 210th Street, Bronx, NY 10467, USA

46
Kang J, Ullah Z, Gwak J. MRI-Based Brain Tumor Classification Using Ensemble of Deep Features and Machine Learning Classifiers. SENSORS 2021; 21:s21062222. [PMID: 33810176 PMCID: PMC8004778 DOI: 10.3390/s21062222] [Citation(s) in RCA: 99] [Impact Index Per Article: 33.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 02/10/2021] [Revised: 03/13/2021] [Accepted: 03/17/2021] [Indexed: 11/21/2022]
Abstract
Brain tumor classification plays an important role in clinical diagnosis and effective treatment. In this work, we propose a method for brain tumor classification using an ensemble of deep features and machine learning classifiers. In our proposed framework, we adopt the concept of transfer learning and use several pre-trained deep convolutional neural networks to extract deep features from brain magnetic resonance (MR) images. The extracted deep features are then evaluated by several machine learning classifiers. The top three deep features that perform well across several machine learning classifiers are selected and concatenated as an ensemble of deep features, which is then fed into several machine learning classifiers to predict the final output. To evaluate the different pre-trained models as deep feature extractors, the machine learning classifiers, and the effectiveness of an ensemble of deep features for brain tumor classification, we use three different brain magnetic resonance imaging (MRI) datasets that are openly accessible on the web. Experimental results demonstrate that an ensemble of deep features can improve performance significantly, and in most cases a support vector machine (SVM) with radial basis function (RBF) kernel outperforms the other machine learning classifiers, especially for large datasets.
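The core of this pipeline, concatenating the top-performing per-backbone feature vectors and classifying them with an RBF-kernel SVM, can be sketched as follows. The L2 normalization before concatenation and the gamma value are assumptions of this sketch, not details from the paper; a real run would pass the kernel matrix (or the raw features) to an SVM solver such as scikit-learn's SVC.

```python
import numpy as np

def ensemble_features(feature_sets):
    """Concatenate per-backbone deep features (each row L2-normalized)
    into one ensemble descriptor per image."""
    normed = [F / (np.linalg.norm(F, axis=1, keepdims=True) + 1e-12)
              for F in feature_sets]
    return np.concatenate(normed, axis=1)

def rbf_kernel(A, B, gamma=0.5):
    """K[i, j] = exp(-gamma * ||A_i - B_j||^2), the SVM kernel the study
    found strongest, especially on large datasets."""
    sq = (A ** 2).sum(1)[:, None] + (B ** 2).sum(1)[None, :] - 2 * A @ B.T
    return np.exp(-gamma * np.clip(sq, 0.0, None))
```

Normalizing each backbone's features before concatenation keeps one extractor from dominating the ensemble purely by feature scale.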
Affiliation(s)
- Jaeyong Kang: Department of Software, Korea National University of Transportation, Chungju 27469, Korea
- Zahid Ullah: Department of Software, Korea National University of Transportation, Chungju 27469, Korea
- Jeonghwan Gwak: Department of Software, Korea National University of Transportation, Chungju 27469, Korea; Department of Biomedical Engineering, Korea National University of Transportation, Chungju 27469, Korea; Department of AI Robotics Engineering, Korea National University of Transportation, Chungju 27469, Korea; Department of IT Convergence (Brain Korea PLUS 21), Korea National University of Transportation, Chungju 27469, Korea; Correspondence: ; Tel.: +82-43-841-5852

47
Bodapati JD, Shareef SN, Naralasetti V, Mundukur NB. MSENet: Multi-Modal Squeeze-and-Excitation Network for Brain Tumor Severity Prediction. INT J PATTERN RECOGN 2021. [DOI: 10.1142/s0218001421570056] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022]
Abstract
Classification of brain tumors is one of the most daunting tasks in medical imaging, and incorrect decisions during the diagnosis process may increase human fatality. The latest advances in artificial intelligence and deep learning have opened the path for success in numerous medical image analysis tasks, including the recognition of brain tumors. In this paper, we propose a simple deep learning architecture that generalizes strongly without needing much preprocessing. The proposed Multi-modal Squeeze-and-Excitation network (MSENet) receives multiple representations of a given tumor image, learns end-to-end, and effectively predicts the severity level of the tumor. Convolutional feature descriptors from multiple deep pre-trained models are used to describe the tumor images and are supplied as input to MSENet. The squeeze-and-excitation blocks allow the model to prioritize tumor regions while giving less emphasis to the rest of the image, serving as an attention mechanism. The proposed model is evaluated on the benchmark brain tumor dataset publicly accessible from the Figshare repository. Experimental studies reveal that, in terms of model parameters, the proposed approach is simple and competitive with existing complex models. Despite its low complexity, the model generalizes well and achieves a state-of-the-art accuracy of 96.05% on the Figshare dataset. Unlike existing models, the proposed model uses neither segmentation nor augmentation techniques and achieves competitive performance with little preprocessing.
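A squeeze-and-excitation block of the kind MSENet uses can be sketched in numpy: a global-average "squeeze" per channel, a two-layer bottleneck "excitation" with a sigmoid gate, then channel-wise rescaling. The shapes and reduction ratio below are illustrative assumptions, and the real blocks are trained end-to-end inside the network.

```python
import numpy as np

def se_block(feat, W1, b1, W2, b2):
    """Squeeze-and-excitation on a (C, H, W) feature map.
    W1: (C/r, C) bottleneck, W2: (C, C/r) expansion, r = reduction ratio."""
    s = feat.mean(axis=(1, 2))                       # squeeze: per-channel statistic, (C,)
    h = np.maximum(W1 @ s + b1, 0.0)                 # excitation, hidden layer (ReLU)
    gate = 1.0 / (1.0 + np.exp(-(W2 @ h + b2)))      # channel weights in (0, 1)
    return feat * gate[:, None, None]                # rescale each channel
```

Because the gate lies in (0, 1), the block can only attenuate channels, which is what lets it de-emphasize non-tumor regions.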
Affiliation(s)
- Jyostna Devi Bodapati: Department of Computer Science and Engineering, Vignan’s Foundation for Science Technology and Research, Guntur, Andhra Pradesh 522213, India
- Shaik Nagur Shareef: Department of Computer Science and Engineering, Vignan’s Foundation for Science Technology and Research, Guntur, Andhra Pradesh 522213, India
- Veeranjaneyulu Naralasetti: Department of Computer Science and Engineering, Vignan’s Foundation for Science Technology and Research, Guntur, Andhra Pradesh 522213, India
- Nirupama Bhat Mundukur: Department of Computer Science and Engineering, Vignan’s Foundation for Science Technology and Research, Guntur, Andhra Pradesh 522213, India

48
Zhong A, Li X, Wu D, Ren H, Kim K, Kim Y, Buch V, Neumark N, Bizzo B, Tak WY, Park SY, Lee YR, Kang MK, Park JG, Kim BS, Chung WJ, Guo N, Dayan I, Kalra MK, Li Q. Deep metric learning-based image retrieval system for chest radiograph and its clinical applications in COVID-19. Med Image Anal 2021; 70:101993. [PMID: 33711739 PMCID: PMC8032481 DOI: 10.1016/j.media.2021.101993] [Citation(s) in RCA: 26] [Impact Index Per Article: 8.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/09/2020] [Revised: 01/19/2021] [Accepted: 02/01/2021] [Indexed: 12/13/2022]
Abstract
In recent years, deep learning-based image analysis methods have been widely applied in computer-aided detection, diagnosis, and prognosis, and have shown their value during the public health crisis of the novel coronavirus disease 2019 (COVID-19) pandemic. The chest radiograph (CXR) has played a crucial role in COVID-19 patient triage, diagnosis, and monitoring, particularly in the United States. Considering the mixed and unspecific signals in CXR, an image retrieval model that provides both similar images and associated clinical information can be more clinically meaningful than a direct image diagnostic model. In this work, we develop a novel CXR image retrieval model based on deep metric learning. Unlike traditional diagnostic models, which aim to learn a direct mapping from images to labels, the proposed model learns an optimized embedding space of images, where images with the same labels and similar contents are pulled together. It uses a multi-similarity loss with a hard-mining sampling strategy and an attention mechanism to learn this embedding space, and provides similar images, visualizations of disease-related attention maps, and useful clinical information to assist clinical decisions. The model is trained and validated on an international multi-site COVID-19 dataset collected from three different sources. Experimental results on COVID-19 image retrieval and diagnosis tasks show that the proposed model can serve as a robust solution for CXR analysis and patient management in COVID-19. The model is also tested for transferability on a different clinical decision support task for COVID-19, where the pre-trained model is applied to extract image features from a new dataset without any further training. The extracted features are then combined with COVID-19 patients' vitals, lab tests, and medical histories to predict the possibility of airway intubation within 72 hours, which is strongly associated with patient prognosis and is crucial for patient care and hospital resource planning. These results demonstrate that our deep metric learning-based image retrieval model is highly efficient in CXR retrieval, diagnosis, and prognosis, and thus has great clinical value for the treatment and management of COVID-19 patients.
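Once the embedding space is learned, retrieval itself is simple: rank gallery images by cosine similarity to the query embedding. The sketch below assumes embeddings are already computed; it is not the paper's training code (the multi-similarity loss and hard mining happen upstream).

```python
import numpy as np

def retrieve(query, gallery, k=3):
    """Rank gallery embeddings by cosine similarity to the query embedding
    and return the top-k indices with their similarities."""
    q = query / np.linalg.norm(query)
    G = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    sims = G @ q                      # cosine similarity to every gallery item
    top = np.argsort(-sims)[:k]       # indices of the k most similar images
    return top, sims[top]
```

In the clinical setting, the returned indices would be mapped back to similar CXRs and their associated patient records.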
Affiliation(s)
- Aoxiao Zhong: Department of Radiology, Massachusetts General Hospital, Boston, MA, United States; School of Engineering and Applied Sciences, Harvard University, Boston, MA, United States
- Xiang Li: Department of Radiology, Massachusetts General Hospital, Boston, MA, United States
- Dufan Wu: Department of Radiology, Massachusetts General Hospital, Boston, MA, United States
- Hui Ren: Department of Radiology, Massachusetts General Hospital, Boston, MA, United States
- Kyungsang Kim: Department of Radiology, Massachusetts General Hospital, Boston, MA, United States
- Younggon Kim: Department of Radiology, Massachusetts General Hospital, Boston, MA, United States
- Varun Buch: MGH & BWH Center for Clinical Data Science, Boston, MA, United States
- Nir Neumark: MGH & BWH Center for Clinical Data Science, Boston, MA, United States
- Bernardo Bizzo: MGH & BWH Center for Clinical Data Science, Boston, MA, United States
- Won Young Tak: Department of Internal Medicine, School of Medicine, Kyungpook National University, Daegu, South Korea
- Soo Young Park: Department of Internal Medicine, School of Medicine, Kyungpook National University, Daegu, South Korea
- Yu Rim Lee: Department of Internal Medicine, School of Medicine, Kyungpook National University, Daegu, South Korea
- Min Kyu Kang: Department of Internal Medicine, Yeungnam University College of Medicine, Daegu, South Korea
- Jung Gil Park: Department of Internal Medicine, Yeungnam University College of Medicine, Daegu, South Korea
- Byung Seok Kim: Department of Internal Medicine, Catholic University of Daegu School of Medicine, Daegu, South Korea
- Woo Jin Chung: Department of Internal Medicine, Keimyung University School of Medicine, Daegu, South Korea
- Ning Guo: Department of Radiology, Massachusetts General Hospital, Boston, MA, United States
- Ittai Dayan: School of Engineering and Applied Sciences, Harvard University, Boston, MA, United States
- Mannudeep K Kalra: Department of Radiology, Massachusetts General Hospital, Boston, MA, United States
- Quanzheng Li: Department of Radiology, Massachusetts General Hospital, Boston, MA, United States; MGH & BWH Center for Clinical Data Science, Boston, MA, United States

49
A Deep Learning Approach for Brain Tumor Classification and Segmentation Using a Multiscale Convolutional Neural Network. Healthcare (Basel) 2021; 9:healthcare9020153. [PMID: 33540873 PMCID: PMC7912940 DOI: 10.3390/healthcare9020153] [Citation(s) in RCA: 83] [Impact Index Per Article: 27.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/31/2020] [Revised: 01/29/2021] [Accepted: 01/31/2021] [Indexed: 12/22/2022] Open
Abstract
In this paper, we present a fully automatic brain tumor segmentation and classification model using a deep convolutional neural network that includes a multiscale approach. A difference of our proposal with respect to previous works is that input images are processed at three spatial scales along different processing pathways, a mechanism inspired by the inherent operation of the human visual system. The proposed neural model can analyze MRI images containing three types of tumors (meningioma, glioma, and pituitary tumor) over sagittal, coronal, and axial views, and does not need preprocessing of input images to remove the skull or vertebral column in advance. The performance of our method on a publicly available MRI image dataset of 3064 slices from 233 patients is compared with previously published classical machine learning and deep learning methods. In this comparison, our method obtained a remarkable tumor classification accuracy of 0.973, higher than the other approaches using the same database.
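The multiscale input mechanism, presenting the same slice at three spatial scales to separate pathways, can be illustrated with non-overlapping average pooling. The pooling factors below are assumptions for illustration; the paper's exact scales and pathway architectures may differ.

```python
import numpy as np

def avg_pool(img, f):
    """Downsample a 2-D image by factor f with non-overlapping average pooling."""
    h = (img.shape[0] // f) * f
    w = (img.shape[1] // f) * f
    return img[:h, :w].reshape(h // f, f, w // f, f).mean(axis=(1, 3))

def multiscale_inputs(img, factors=(1, 2, 4)):
    """Build the spatial scales that would feed separate processing pathways."""
    return [img if f == 1 else avg_pool(img, f) for f in factors]
```

Each pathway then sees the same anatomy at a different receptive-field-to-structure ratio, which is the point of the multiscale design.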
50
Muhammad K, Khan S, Ser JD, Albuquerque VHCD. Deep Learning for Multigrade Brain Tumor Classification in Smart Healthcare Systems: A Prospective Survey. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 2021; 32:507-522. [PMID: 32603291 DOI: 10.1109/tnnls.2020.2995800] [Citation(s) in RCA: 81] [Impact Index Per Article: 27.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/20/2023]
Abstract
Brain tumors are among the most dangerous cancers in people of all ages, and grade recognition is a challenging problem for radiologists in health monitoring and automated diagnosis. Recently, numerous deep learning-based methods have been presented in the literature for brain tumor classification (BTC) to assist radiologists in better diagnostic analysis. In this overview, we present an in-depth review of the surveys published so far and of recent deep learning-based methods for BTC. Our survey covers the main steps of deep learning-based BTC methods, including preprocessing, feature extraction, and classification, along with their achievements and limitations. We also investigate state-of-the-art convolutional neural network models for BTC by performing extensive experiments using transfer learning with and without data augmentation. Furthermore, this overview describes available benchmark datasets used for the evaluation of BTC. Finally, this survey not only reviews the past literature on the topic but also builds on it to delve into the future of the area, enumerating research directions that should be followed, especially for personalized and smart healthcare.
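The data augmentation compared in the survey's experiments typically means simple label-preserving transforms of each MRI slice. A minimal sketch of such deterministic views follows; the specific transform set is an assumption of this sketch, not taken from the survey.

```python
import numpy as np

def augment_views(img):
    """Common label-preserving augmentations for a 2-D MRI slice:
    horizontal flip, vertical flip, and a 90-degree rotation."""
    return [img, np.fliplr(img), np.flipud(img), np.rot90(img)]
```

Training on all views effectively multiplies the dataset size, which is why augmentation often narrows the gap for small medical datasets.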