1. Rethemiotaki I. Brain tumour detection from magnetic resonance imaging using convolutional neural networks. Contemp Oncol (Pozn) 2024; 27:230-241. [PMID: 38405206] [PMCID: PMC10883197] [DOI: 10.5114/wo.2023.135320]
Abstract
Introduction: The aim of this work is to detect and classify brain tumours using computational intelligence techniques on magnetic resonance imaging (MRI) images. Material and methods: A dataset of 3264 MRI brain images covering 4 categories (unspecified glioma, meningioma, pituitary tumour, and healthy brain) was used in this study. Twelve convolutional neural networks (GoogleNet, MobileNetV2, Xception, DenseNet-BC, ResNet-50, SqueezeNet, ShuffleNet, VGG-16, AlexNet, ENet, EfficientNet-B0, and MobileNetV2 with meta pseudo-labels) were used to classify gliomas, meningiomas, pituitary tumours, and healthy brains to find the most appropriate model. The experiments included image preprocessing and hyperparameter tuning. The performance of each network was evaluated based on accuracy, precision, recall, and F-measure for each type of brain tumour. Results: The MobileNetV2 convolutional neural network (CNN) model diagnosed brain tumours with 99% accuracy, 98% recall, and a 99% F1 score. On the validation data, however, GoogleNet had the highest accuracy (97%) among the CNNs and appears to be the best choice for brain tumour classification. Conclusions: These results highlight the importance of artificial intelligence and machine learning for brain tumour prediction. Furthermore, this study achieved the highest accuracy in brain tumour classification to date and is the only study to compare the performance of so many neural networks simultaneously.
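The per-class accuracy, precision, recall, and F-measure reported in this abstract all derive from confusion counts. As a reader's aid, a minimal sketch of that computation (illustrative only, not the author's code):

```python
def precision_recall_f1(tp, fp, fn):
    """Per-class metrics from confusion counts:
    true positives, false positives, false negatives."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Hypothetical counts: 98 correct glioma detections, 2 false alarms, 2 misses.
p, r, f = precision_recall_f1(98, 2, 2)
```

F1 is the harmonic mean of precision and recall, so a model must score well on both to score well on F1.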
Affiliation(s)
- Irene Rethemiotaki
- School of Electrical and Computer Engineering, Technical University of Crete, Chania, Crete, Greece
2. Aluri S, Imambi SS. Brain tumour classification using MRI images based on LeNet with golden teacher learning optimization. Network (Bristol, England) 2024; 35:27-54. [PMID: 37947040] [DOI: 10.1080/0954898x.2023.2275720]
Abstract
Brain tumour (BT) is a dangerous neurological disorder caused by abnormal cell growth within the skull. The death rate among people with BT continues to rise, and finding tumours at an early stage is crucial for timely treatment and improves patient survival. Hence, BT classification (BTC) is performed in this research using magnetic resonance imaging (MRI) images. The input MRI image is pre-processed using a non-local means (NLM) filter that denoises it. To attain an effective classification result, the tumour area is segmented from the MRI image by the SegNet model. The BTC itself is accomplished by a LeNet model whose weights are optimized by the Golden Teacher Learning Optimization Algorithm (GTLO), so that the classified output is gliomas, meningiomas, or pituitary tumours. Experimental results show that GTLO-LeNet achieved an accuracy of 0.896, negative predictive value (NPV) of 0.907, positive predictive value (PPV) of 0.821, true negative rate (TNR) of 0.880, and true positive rate (TPR) of 0.888.
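The PPV, NPV, TPR, and TNR figures quoted here are the four standard rates of a binary confusion matrix; a minimal sketch of how they are derived (illustrative, not tied to this paper's evaluation code):

```python
def confusion_rates(tp, tn, fp, fn):
    """PPV, NPV, TPR (sensitivity), TNR (specificity)
    from binary confusion counts."""
    ppv = tp / (tp + fp)  # positive predictive value
    npv = tn / (tn + fn)  # negative predictive value
    tpr = tp / (tp + fn)  # true positive rate
    tnr = tn / (tn + fp)  # true negative rate
    return ppv, npv, tpr, tnr

# Hypothetical counts for illustration.
ppv, npv, tpr, tnr = confusion_rates(8, 9, 2, 1)
```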
Affiliation(s)
- Srilakshmi Aluri
- Research Scholar, Computer Science & Engineering, K L Educational Foundation (Deemed to be University), Vaddeswaram, India
- Sagar S Imambi
- Professor, Computer Science and Engineering, K L Educational Foundation (Deemed to be University), Vaddeswaram, India
3. Pitarch C, Ungan G, Julià-Sapé M, Vellido A. Advances in the Use of Deep Learning for the Analysis of Magnetic Resonance Image in Neuro-Oncology. Cancers (Basel) 2024; 16:300. [PMID: 38254790] [PMCID: PMC10814384] [DOI: 10.3390/cancers16020300]
Abstract
Machine Learning is entering a phase of maturity, but its medical applications still lag behind in terms of practical use. The field of oncological radiology (and neuro-oncology in particular) is at the forefront of these developments, now boosted by the success of Deep-Learning methods for the analysis of medical images. This paper reviews in detail some of the most recent advances in the use of Deep Learning in this field, from the broader topic of the development of Machine-Learning-based analytical pipelines to specific instantiations of the use of Deep Learning in neuro-oncology; the latter including its use in the groundbreaking field of ultra-low field magnetic resonance imaging.
Affiliation(s)
- Carla Pitarch
- Department of Computer Science, Universitat Politècnica de Catalunya (UPC BarcelonaTech) and Intelligent Data Science and Artificial Intelligence (IDEAI-UPC) Research Center, 08034 Barcelona, Spain
- Eurecat, Digital Health Unit, Technology Centre of Catalonia, 08005 Barcelona, Spain
- Gulnur Ungan
- Departament de Bioquímica i Biologia Molecular and Institut de Biotecnologia i Biomedicina (IBB), Universitat Autònoma de Barcelona (UAB), 08193 Barcelona, Spain
- Centro de Investigación Biomédica en Red (CIBER), 28029 Madrid, Spain
- Margarida Julià-Sapé
- Departament de Bioquímica i Biologia Molecular and Institut de Biotecnologia i Biomedicina (IBB), Universitat Autònoma de Barcelona (UAB), 08193 Barcelona, Spain
- Centro de Investigación Biomédica en Red (CIBER), 28029 Madrid, Spain
- Alfredo Vellido
- Department of Computer Science, Universitat Politècnica de Catalunya (UPC BarcelonaTech) and Intelligent Data Science and Artificial Intelligence (IDEAI-UPC) Research Center, 08034 Barcelona, Spain
- Centro de Investigación Biomédica en Red (CIBER), 28029 Madrid, Spain
4. Jyothi P, Dhanasekaran S. An attention 3DUNET and visual geometry group-19 based deep neural network for brain tumor segmentation and classification from MRI. J Biomol Struct Dyn 2023:1-12. [PMID: 37979152] [DOI: 10.1080/07391102.2023.2283164]
Abstract
Brain tumor (BT) related medical cases have increased abruptly during the past ten years; BT is the tenth most common type of tumor and affects millions of people. The cure rate can rise, however, if it is found early. MRI is a crucial tool when evaluating BT diagnosis and treatment options, but segmenting tumors from magnetic resonance (MR) images is complex. The advancement of deep learning (DL) has led to the development of numerous automatic segmentation and classification approaches, yet most need improvement since they are limited to 2D images. So, this article proposes a novel and optimal DL system for segmenting and classifying BTs from 3D brain MR images. Preprocessing, segmentation, feature extraction, feature selection, and tumor classification are the main phases of the proposed work. Preprocessing, such as noise removal, is performed on the collected brain MR images using bilateral filtering. Tumor segmentation uses a spatial and channel attention-based three-dimensional u-shaped network (SC3DUNet) to segment the tumor lesions from the preprocessed data. Feature extraction is then done with dilated convolution-based visual geometry group-19 (DCVGG-19), making the classification task more manageable. The optimal features are selected from the extracted feature sets using a diagonal linear uniform and tangent flight included butterfly optimization algorithm. Finally, the proposed system applies a deep neural network with optimal hyperparameters to classify the tumor classes. Experiments conducted on the BraTS2020 dataset show that the suggested method can segment tumors and categorize them more accurately than existing state-of-the-art mechanisms.
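The bilateral filter used here for noise removal weights each neighbouring pixel by both its spatial distance and its intensity difference, so smoothing stops at edges. A small NumPy sketch of the idea (illustrative only; parameter names and values are assumptions, not the authors' code):

```python
import numpy as np

def bilateral_filter(img, radius=2, sigma_s=2.0, sigma_r=0.1):
    """Edge-preserving smoothing of a 2-D float image.
    sigma_s controls the spatial falloff, sigma_r the intensity falloff."""
    h, w = img.shape
    out = np.zeros((h, w), dtype=float)
    for i in range(h):
        for j in range(w):
            i0, i1 = max(0, i - radius), min(h, i + radius + 1)
            j0, j1 = max(0, j - radius), min(w, j + radius + 1)
            patch = img[i0:i1, j0:j1].astype(float)
            yy, xx = np.mgrid[i0:i1, j0:j1]
            spatial = np.exp(-((yy - i) ** 2 + (xx - j) ** 2) / (2 * sigma_s ** 2))
            intensity = np.exp(-((patch - img[i, j]) ** 2) / (2 * sigma_r ** 2))
            weights = spatial * intensity
            out[i, j] = (weights * patch).sum() / weights.sum()
    return out
```

A flat region passes through unchanged (all intensity weights are 1), while a sharp edge contributes little weight across it, which is exactly the edge-preserving behaviour wanted before segmentation.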
Affiliation(s)
- Parvathy Jyothi
- Department of Computer Science and Engineering, Kalasalingam Academy of Research and Education, Krishnankoil, India
- S Dhanasekaran
- Department of Information Technology, Kalasalingam Academy of Research and Education, Krishnankoil, India
5. Albahli S, Nazir T. A Circular Box-Based Deep Learning Model for the Identification of Signet Ring Cells from Histopathological Images. Bioengineering (Basel) 2023; 10:1147. [PMID: 37892876] [PMCID: PMC10604551] [DOI: 10.3390/bioengineering10101147]
Abstract
Signet ring cell (SRC) carcinoma is a particularly serious type of cancer and a leading cause of death worldwide. It has a more deceptive onset than other carcinomas and is mostly encountered in its later stages, so recognizing SRCs at their initial stages is challenging because of varying shapes and sizes and illumination changes. Early recognition is also costly because it requires medical experts, yet a timely diagnosis is important because the level of the disease determines the severity, cure, and survival rate of victims. To tackle these challenges, a deep learning (DL)-based methodology is proposed in this paper: a custom CircleNet with ResNet-34 for SRC recognition and classification, chosen because of the circular shape of SRCs. We utilized a challenging dataset for experimentation and performed augmentation to increase the number of samples. The experiments were conducted using 35,000 images and attained 96.40% accuracy. A comparative analysis confirmed that our method outperforms the other methods.
Affiliation(s)
- Saleh Albahli
- Department of Information Technology, College of Computer, Qassim University, Buraydah 51452, Saudi Arabia
- Tahira Nazir
- Faculty of Computing, Riphah International University, Islamabad 44600, Pakistan
6. Shen L, Gao C, Hu S, Kang D, Zhang Z, Xia D, Xu Y, Xiang S, Zhu Q, Xu G, Tang F, Yue H, Yu W, Zhang Z. Using Artificial Intelligence to Diagnose Osteoporotic Vertebral Fractures on Plain Radiographs. J Bone Miner Res 2023; 38:1278-1287. [PMID: 37449775] [DOI: 10.1002/jbmr.4879]
Abstract
Osteoporotic vertebral fracture (OVF) is a risk factor for morbidity and mortality in the elderly population, and accurate diagnosis is important for improving treatment outcomes. OVF diagnosis suffers from high misdiagnosis and underdiagnosis rates, as well as a high workload. Deep learning methods applied to plain radiographs, a simple, fast, and inexpensive examination, might solve this problem. We developed and validated a deep-learning-based vertebral fracture diagnostic system using area loss ratio, which assisted a multitasking network to perform skeletal position detection and segmentation and to identify and grade vertebral fractures. As the training set and internal validation set, we used 11,397 plain radiographs from six community centers in Shanghai. For the external validation set, 1276 participants were recruited from the outpatient clinic of the Shanghai Sixth People's Hospital (1276 plain radiographs). Radiologists read all X-ray images and used the Genant semiquantitative tool for fracture diagnosis and grading as the ground truth data. Accuracy, sensitivity, specificity, positive predictive value, and negative predictive value were used to evaluate diagnostic performance. The AI_OVF_SH system demonstrated high accuracy and computational speed in skeletal position detection and segmentation. In the internal validation set, the accuracy, sensitivity, and specificity of the AI_OVF_SH model were 97.41%, 84.08%, and 97.25%, respectively, for all fractures; sensitivity and specificity were 88.55% and 99.74% for moderate fractures and 92.30% and 99.92% for severe fractures. In the external validation set, the accuracy, sensitivity, and specificity for all fractures were 96.85%, 83.35%, and 94.70%, respectively; sensitivity and specificity were 85.61% and 99.85% for moderate fractures and 93.46% and 99.92% for severe fractures. The AI_OVF_SH system is therefore an efficient tool to assist radiologists and clinicians in improving the diagnosis of vertebral fractures.
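The Genant semiquantitative tool used as ground truth here grades a vertebral fracture by the fraction of vertebral height lost (this paper's system works from an analogous area loss ratio). A sketch using the commonly cited height-loss thresholds, for orientation only; the exact cut-offs used by the authors are not stated in this abstract:

```python
def genant_grade(height_loss):
    """Map fractional vertebral height loss to a Genant SQ grade.
    Thresholds are the commonly cited ones (20%, 25%, 40%), shown
    for illustration; not taken from this paper."""
    if height_loss < 0.20:
        return 0  # normal
    if height_loss < 0.25:
        return 1  # mild fracture
    if height_loss < 0.40:
        return 2  # moderate fracture
    return 3      # severe fracture
```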
Affiliation(s)
- Li Shen
- Department of Osteoporosis and Bone Disease, Shanghai Clinical Research Center of Bone Disease, Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Clinical Research Center, Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Chao Gao
- Department of Osteoporosis and Bone Disease, Shanghai Clinical Research Center of Bone Disease, Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Shundong Hu
- Department of Radiology, Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Dan Kang
- Shanghai Jiyinghui Intelligent Technology Co, Shanghai, China
- Zhaogang Zhang
- Shanghai Jiyinghui Intelligent Technology Co, Shanghai, China
- Dongdong Xia
- Department of Orthopaedics, Ning Bo First Hospital, Zhejiang, China
- Yiren Xu
- Department of Radiology, Ning Bo First Hospital, Zhejiang, China
- Shoukui Xiang
- Department of Endocrinology and Metabolism, The First People's Hospital of Changzhou, Changzhou, China
- Qiong Zhu
- Kangjian Community Health Service Center, Shanghai, China
- GeWen Xu
- Kangjian Community Health Service Center, Shanghai, China
- Feng Tang
- Jinhui Community Health Service Center, Shanghai, China
- Hua Yue
- Department of Osteoporosis and Bone Disease, Shanghai Clinical Research Center of Bone Disease, Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Wei Yu
- Department of Radiology, Peking Union Medical College Hospital, Beijing, China
- Zhenlin Zhang
- Department of Osteoporosis and Bone Disease, Shanghai Clinical Research Center of Bone Disease, Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Clinical Research Center, Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, China
7. Abdusalomov AB, Mukhiddinov M, Whangbo TK. Brain Tumor Detection Based on Deep Learning Approaches and Magnetic Resonance Imaging. Cancers (Basel) 2023; 15:4172. [PMID: 37627200] [PMCID: PMC10453020] [DOI: 10.3390/cancers15164172]
Abstract
The rapid development of abnormal brain cells that characterizes a brain tumor is a major health risk for adults, since it can cause severe impairment of organ function and even death. These tumors come in a wide variety of sizes, textures, and locations. Magnetic resonance imaging (MRI) is a crucial tool for locating cancerous tumors, but detecting brain tumors manually is a difficult and time-consuming activity that can lead to inaccuracies. To solve this, we provide a refined You Only Look Once version 7 (YOLOv7) model for the accurate detection of meningioma, glioma, and pituitary gland tumors within an improved brain tumor detection system. The visual representation of the MRI scans is enhanced by image enhancement methods that apply different filters to the original pictures. To further improve training of the proposed model, we apply data augmentation techniques to the openly accessible brain tumor dataset. The curated data include a wide variety of cases: 2548 glioma images, 2658 pituitary images, 2582 meningioma images, and 2500 non-tumor images. We included the Convolutional Block Attention Module (CBAM) in YOLOv7 to further enhance its feature extraction capabilities, allowing better emphasis on salient regions linked with brain malignancies, and added a Spatial Pyramid Pooling Fast+ (SPPF+) layer to the network's core infrastructure to improve the model's sensitivity. YOLOv7 now includes decoupled heads, which allow it to efficiently glean useful insights from a wide variety of data, and a Bi-directional Feature Pyramid Network (BiFPN) is used to speed up multi-scale feature fusion and better collect tumor-associated features. The outcomes verify the efficiency of our suggested method, which achieves higher overall tumor-detection accuracy than previous state-of-the-art models. As a result, this framework has strong potential as a decision-support tool for experts diagnosing brain tumors.
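Detection models such as YOLOv7 are conventionally scored by the intersection-over-union (IoU) between predicted and ground-truth boxes; a prediction usually counts as correct when IoU exceeds a threshold such as 0.5. A minimal sketch (illustrative, not tied to this paper's evaluation code):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)
```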
Affiliation(s)
- Taeg Keun Whangbo
- Department of Computer Engineering, Gachon University, Seongnam-si 13120, Republic of Korea
8. Ito S, Nakashima H, Segi N, Ouchida J, Oda M, Yamauchi I, Oishi R, Miyairi Y, Mori K, Imagama S. Automated Detection and Diagnosis of Spinal Schwannomas and Meningiomas Using Deep Learning and Magnetic Resonance Imaging. J Clin Med 2023; 12:5075. [PMID: 37568477] [PMCID: PMC10419638] [DOI: 10.3390/jcm12155075]
Abstract
Spinal cord tumors are infrequently identified spinal diseases that are often difficult to diagnose even with magnetic resonance imaging (MRI) findings. To minimize the probability of overlooking these tumors and improve diagnostic accuracy, an automatic diagnostic system is needed. We aimed to develop an automated system for detecting and diagnosing spinal schwannomas and meningiomas based on deep learning using You Only Look Once (YOLO) version 4 and MRI. In this retrospective diagnostic accuracy study, the data of 50 patients with spinal schwannomas, 45 patients with meningiomas, and 100 control cases were reviewed. Sagittal T1-weighted (T1W) and T2-weighted (T2W) images were used for object detection, classification, training, and validation. The object detection and diagnosis system was developed using YOLO version 4. The accuracies of the proposed object detection based on T1W, T2W, and T1W + T2W images were 84.8%, 90.3%, and 93.8%, respectively; the corresponding accuracies for two spine surgeons were 88.9% and 90.1%. The accuracies of the proposed diagnosis based on T1W, T2W, and T1W + T2W images were 76.4%, 83.3%, and 84.1%, respectively, versus 77.4% and 76.1% for the two spine surgeons. We demonstrated accurate, automated detection and diagnosis of spinal schwannomas and meningiomas using the developed deep-learning-based method with MRI. This system could be valuable in supporting radiological diagnosis of spinal schwannomas and meningiomas, with the potential of reducing radiologists' overall workload.
Affiliation(s)
- Sadayuki Ito
- Department of Orthopedic Surgery, Nagoya University Graduate School of Medicine, Nagoya 466-8560, Japan
- Hiroaki Nakashima
- Department of Orthopedic Surgery, Nagoya University Graduate School of Medicine, Nagoya 466-8560, Japan
- Naoki Segi
- Department of Orthopedic Surgery, Nagoya University Graduate School of Medicine, Nagoya 466-8560, Japan
- Jun Ouchida
- Department of Orthopedic Surgery, Nagoya University Graduate School of Medicine, Nagoya 466-8560, Japan
- Masahiro Oda
- Information Strategy Office, Information and Communications, Nagoya University, Nagoya 464-8601, Japan
- Ippei Yamauchi
- Department of Orthopedic Surgery, Nagoya University Graduate School of Medicine, Nagoya 466-8560, Japan
- Ryotaro Oishi
- Department of Orthopedic Surgery, Nagoya University Graduate School of Medicine, Nagoya 466-8560, Japan
- Yuichi Miyairi
- Department of Orthopedic Surgery, Nagoya University Graduate School of Medicine, Nagoya 466-8560, Japan
- Kensaku Mori
- Information Strategy Office, Information and Communications, Nagoya University, Nagoya 464-8601, Japan
- Department of Intelligent Systems, Nagoya University Graduate School of Informatics, Nagoya 464-8601, Japan
- Research Center for Medical Bigdata, National Institute of Informatics, Tokyo 101-8430, Japan
- Shiro Imagama
- Department of Orthopedic Surgery, Nagoya University Graduate School of Medicine, Nagoya 466-8560, Japan
9. Sobhaninia Z, Karimi N, Khadivi P, Samavi S. Brain tumor segmentation by cascaded multiscale multitask learning framework based on feature aggregation. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2023.104834]
10. Zulfiqar F, Ijaz Bajwa U, Mehmood Y. Multi-class classification of brain tumor types from MR images using EfficientNets. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2023.104777]
11. Ali HS, Ismail AI, El-Rabaie EM, Abd El-Samie FE. Deep residual architectures and ensemble learning for efficient brain tumour classification. Expert Systems 2023; 40. [DOI: 10.1111/exsy.13226]
Abstract
The prompt and accurate detection of brain tumours is essential for disease management and saving lives. This paper introduces an efficient, robust, and completely automated system for classifying the three prominent types of brain tumour. The aim is to contribute enhanced classification accuracy with minimum pre-processing and less inference time. The power of deep networks is thoroughly investigated, with and without transfer learning. Fine-tuned deep Residual Networks (ResNets) with depth up to 101 layers are introduced to manage the complex nature of brain images and to capture their microstructural information. The proposed residual architectures with their in-depth representations are evaluated and compared to other fine-tuned networks (AlexNet, GoogLeNet, and VGG16). A novel Convolutional Network (ConvNet) built and trained from scratch is also proposed for tumour type classification. Proven models are integrated by combining their decisions using majority voting to obtain the final classification accuracy. Results show that the residual architectures can be optimized efficiently and yield a noticeable accuracy gain. Although ResNet models are deeper than VGG16, they show lower complexity. Results also indicate that building an ensemble of models is a successful strategy for enhancing system performance: each model in the ensemble learns specific patterns with certain filters, and this diversity boosts the classification accuracy. The accuracies obtained from ResNet18, ResNet101, and the proposed ConvNet are 98.91%, 97.39%, and 95.43%, respectively. The accuracy based on decision fusion of the three networks is 99.57%, which is better than those of all state-of-the-art techniques. The accuracy obtained with ResNet50 is 98.26%, and its fusion with ResNet18 and the designed network yields 99.35% accuracy, also better than previous methods, while meeting minimum detection time requirements. Finally, a visual representation of the learned features is provided to show what the models have learned.
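The decision-fusion step described above reduces to a per-sample majority vote across the models' predicted labels; a minimal sketch (illustrative, not the authors' implementation):

```python
from collections import Counter

def majority_vote(predictions):
    """Fuse classifier decisions by per-sample majority vote.
    `predictions` is a list of per-model label lists, one label per sample."""
    fused = []
    for sample_preds in zip(*predictions):
        fused.append(Counter(sample_preds).most_common(1)[0][0])
    return fused
```

With an odd number of models, ties cannot occur in a two-class vote; for multi-class ties, a weighted vote (e.g. by each model's validation accuracy) is a common refinement.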
Affiliation(s)
- Hanaa S. Ali
- Electronics & Communication Department, Faculty of Engineering, Zagazig University, Zagazig, Egypt
- Asmaa I. Ismail
- Department of Electronics and Electrical Communications, Faculty of Electronic Engineering, Menoufia University, Menouf, Egypt
- El-Sayed M. El-Rabaie
- Department of Electronics and Electrical Communications, Faculty of Electronic Engineering, Menoufia University, Menouf, Egypt
- Fathi E. Abd El-Samie
- Department of Electronics and Electrical Communications, Faculty of Electronic Engineering, Menoufia University, Menouf, Egypt
12. Kibriya H, Amin R, Kim J, Nawaz M, Gantassi R. A Novel Approach for Brain Tumor Classification Using an Ensemble of Deep and Hand-Crafted Features. Sensors (Basel) 2023; 23:4693. [PMID: 37430604] [PMCID: PMC10221077] [DOI: 10.3390/s23104693]
Abstract
Brain tumors, caused by the uncontrollable proliferation of brain cells inside the skull, are one of the most severe types of cancer; hence, a fast and accurate tumor detection method is critical for the patient's health. Many automated artificial intelligence (AI) methods have recently been developed to diagnose tumors, but these approaches often perform poorly, so an efficient technique for precise diagnosis is needed. This paper suggests a novel approach for brain tumor detection via an ensemble of deep and hand-crafted feature vectors (FV). The novel FV combines hand-crafted features based on the GLCM (gray-level co-occurrence matrix) with deep features based on VGG16, and contains more robust features than either vector alone, which improves the method's discriminating capability. The proposed FV is then classified using a support vector machine (SVM) or a k-nearest neighbor (KNN) classifier. The framework achieved its highest accuracy of 99% on the ensemble FV. The results indicate the reliability and efficacy of the proposed methodology; hence, radiologists can use it to detect brain tumors through magnetic resonance imaging (MRI), and the method is robust enough to be deployed in a real environment to detect brain tumors from MRI images accurately. In addition, the performance of our model was validated via cross-tabulated data.
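A gray-level co-occurrence matrix counts how often pairs of quantized intensities occur at a fixed pixel offset; hand-crafted texture features such as contrast are then read off it. A minimal NumPy sketch (illustrative; the paper's exact GLCM configuration, offsets, and feature set are not specified in this abstract):

```python
import numpy as np

def glcm(img, levels=8, dx=1, dy=0):
    """Co-occurrence counts of quantized gray levels at offset (dy, dx)."""
    q = np.round(img / (img.max() + 1e-9) * (levels - 1)).astype(int)
    h, w = q.shape
    mat = np.zeros((levels, levels), dtype=int)
    for i in range(h - dy):
        for j in range(w - dx):
            mat[q[i, j], q[i + dy, j + dx]] += 1
    return mat

def glcm_contrast(mat):
    """Contrast feature: sum over (i - j)^2 * p(i, j)."""
    p = mat / mat.sum()
    i, j = np.indices(mat.shape)
    return float(((i - j) ** 2 * p).sum())
```

Concatenating a handful of such statistics (contrast, energy, homogeneity, correlation) with a deep feature vector is the general pattern behind the ensemble FV described above.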
Affiliation(s)
- Hareem Kibriya
- Department of Computer Sciences, University of Engineering and Technology, Taxila 47050, Pakistan
- Rashid Amin
- Department of Computer Sciences, University of Chakwal, Chakwal 48800, Pakistan
- Jinsul Kim
- School of Electronics and Computer Engineering, Chonnam National University, 300 Yongbong-dong, Buk-gu, Gwangju 500757, Republic of Korea
- Marriam Nawaz
- Department of Software Engineering, University of Engineering and Technology, Taxila 47050, Pakistan
- Rahma Gantassi
- Department of Electrical Engineering, Chonnam National University, Gwangju 61186, Republic of Korea
13. Wali A, Ahmad M, Naseer A, Tamoor M, Gilani S. StynMedGAN: Medical images augmentation using a new GAN model for improved diagnosis of diseases. Journal of Intelligent & Fuzzy Systems 2023. [DOI: 10.3233/jifs-223996]
Abstract
Deep networks require a considerable amount of training data; otherwise, they generalize poorly. Data augmentation techniques help a network generalize better by providing more variety in the training data. Standard augmentation techniques such as flipping and scaling produce new data that are modified versions of the original data, while Generative Adversarial Networks (GANs) can generate entirely new data. In this paper, we propose a new GAN model, named StynMedGAN, for synthetically generating medical images to improve the performance of classification models. StynMedGAN builds upon the state-of-the-art styleGANv2, which has produced remarkable results generating all kinds of natural images. We introduce a regularization term, a normalized loss factor added to the existing discriminator loss of styleGANv2, which forces the generator to produce normalized images and penalizes it if it fails. Medical imaging modalities such as X-rays, CT scans, and MRIs differ in nature; we show that the proposed GAN extends the capacity of styleGANv2 to handle medical images better. StynMedGAN is applied to three types of medical imaging (X-rays, CT scans, and MRI) to produce more data for the classification tasks. To validate its effectiveness, three classifiers (CNN, DenseNet121, and VGG-16) are used. Classifiers trained with StynMedGAN-augmented data outperform those trained only on the original data, achieving 100%, 99.6%, and 100% for chest X-ray, chest CT scans, and brain MRI, respectively. The results are promising and point to a potentially important resource that practitioners and radiologists can use to diagnose different diseases.
Affiliation(s)
- Aamir Wali
- Department of Computer Science, National University of Computer and Emerging Science, Faisal Town, Lahore, Pakistan
- Muzammil Ahmad
- Department of Computer Science, National University of Computer and Emerging Science, Faisal Town, Lahore, Pakistan
- Asma Naseer
- Department of Computer Science, National University of Computer and Emerging Science, Faisal Town, Lahore, Pakistan
- Maria Tamoor
- Department of Computer Science, Forman Christian College University, Zahoor Ilahi Road, Lahore, Pakistan
- S.A.M. Gilani
- Department of Computer Science, National University of Computer and Emerging Science, Faisal Town, Lahore, Pakistan
| |
Collapse
14
Classification of Tumor in Brain MR Images Using Deep Convolutional Neural Network and Global Average Pooling. Processes (Basel) 2023. [DOI: 10.3390/pr11030679] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/27/2023] Open
Abstract
Brain tumors can cause serious health complications and lead to death if not detected accurately. Therefore, early-stage detection of brain tumors and accurate classification of their types play a major role in diagnosis. Recently, deep convolutional neural network (DCNN) based approaches using brain magnetic resonance imaging (MRI) images have shown excellent performance in detection and classification tasks. However, the accuracy of DCNN architectures depends on the training data samples, since more precise data yield better output. Thus, we propose a transfer learning-based DCNN framework to classify brain tumors such as meningioma, glioma, and pituitary tumors. We use VGGNet, a DCNN architecture pre-trained on large datasets, and transfer its learned parameters to the target dataset. We also employ transfer learning strategies such as fine-tuning the convolutional network and freezing its layers for better performance. Further, the proposed approach uses a Global Average Pooling (GAP) layer at the output to avoid overfitting and vanishing gradient problems. The proposed architecture is assessed and compared with competing deep learning based brain tumor classification approaches on the Figshare dataset. Our proposed approach produces 98.93% testing accuracy and outperforms the contemporary learning-based approaches.
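Global average pooling replaces the dense layers at the end of a CNN by averaging each feature map into a single value, which sharply cuts the parameter count and hence the overfitting risk the abstract mentions. A minimal, framework-free illustration of the operation (shapes and the function name are ours, not the paper's):

```python
def global_average_pool(feature_maps):
    """Collapse each (H x W) feature map to one scalar by averaging.

    feature_maps: list of 2-D lists, one per channel.
    Returns a flat feature vector with one value per channel, which can
    feed a softmax classifier directly instead of fully connected layers.
    """
    pooled = []
    for fmap in feature_maps:
        values = [v for row in fmap for v in row]
        pooled.append(sum(values) / len(values))
    return pooled
```

Because the pooled vector's length equals the channel count regardless of the spatial size, GAP also makes the classifier head independent of the input image resolution.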
15
Advanced Deep Learning Approaches for Accurate Brain Tumor Classification in Medical Imaging. Symmetry (Basel) 2023. [DOI: 10.3390/sym15030571] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/24/2023] Open
Abstract
A brain tumor can affect the symmetry of a person’s face or head, depending on its location and size. If a tumor is located in an area that affects the muscles responsible for facial symmetry, it can cause asymmetry; however, not all brain tumors do. Some tumors lie in areas that do not affect facial symmetry or head shape, and the asymmetry a tumor causes may be subtle and not easily noticeable, especially in the early stages of the condition. Brain tumor classification using deep learning involves using artificial neural networks to analyze medical images of the brain and classify them as either benign (not cancerous) or malignant (cancerous). In medical imaging, Convolutional Neural Networks (CNNs) have been used for tasks such as brain tumor classification, and the resulting models can assist in the diagnosis of new cases. Brain tissues can be analyzed using magnetic resonance imaging (MRI). Misdiagnosing forms of brain tumors significantly lowers patients’ chances of survival. Checking a patient’s MRI scans is a common way to detect existing brain tumors, but this approach takes a long time and is prone to human error when dealing with large amounts of data and various kinds of brain tumors. In our proposed research, CNN models were trained to detect the three most prevalent forms of brain tumors, i.e., glioma, meningioma, and pituitary, and were optimized using the Aquila Optimizer (AQO), which was used for initial population generation and modification. The selected dataset was divided into 80% for the training set and 20% for the testing set. We used the VGG-16, VGG-19, and Inception-V3 architectures with the AQO optimizer for training and validation on the brain tumor dataset, obtaining a best accuracy of 98.95% with the VGG-19 model.
16
Chaki J, Woźniak M. Deep learning for neurodegenerative disorder (2016 to 2022): A systematic review. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2022.104223] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/27/2022]
17
Hybrid Techniques of Analyzing MRI Images for Early Diagnosis of Brain Tumours Based on Hybrid Features. Processes (Basel) 2023. [DOI: 10.3390/pr11010212] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/11/2023] Open
Abstract
Brain tumours are considered among the deadliest tumours in humans and have a low survival rate due to their heterogeneous nature. Several types of benign and malignant brain tumours need to be diagnosed early to administer appropriate treatment. Magnetic resonance (MR) images provide details of the brain’s internal structure, which allow radiologists and doctors to diagnose brain tumours. However, MR images contain complex details that require highly qualified experts and a long time to analyse. Artificial intelligence techniques address these challenges. This paper presents four proposed systems, each combining more than one technology; the techniques span machine, deep, and hybrid learning. The first system comprises artificial neural network (ANN) and feedforward neural network (FFNN) algorithms based on hybrid features combining the local binary pattern (LBP), grey-level co-occurrence matrix (GLCM), and discrete wavelet transform (DWT) algorithms. The second system comprises pre-trained GoogLeNet and ResNet-50 models for dataset classification; the two models achieved superior results in distinguishing between the types of brain tumours. The third system is a hybrid technique combining a convolutional neural network and a support vector machine; this system also achieved superior results in distinguishing brain tumours. The fourth proposed system hybridizes the features of GoogLeNet and ResNet-50 with the LBP, GLCM, and DWT (handcrafted) features to obtain representative features and classifies them using the ANN and FFNN. This method achieved superior results in distinguishing between brain tumours and performed better than the other methods. With the hybrid features of GoogLeNet and the handcrafted features, the FFNN achieved an accuracy of 99.9%, a precision of 99.84%, a sensitivity of 99.95%, a specificity of 99.85% and an AUC of 99.9%.
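Of the handcrafted descriptors named above, the local binary pattern (LBP) is the simplest to illustrate. The sketch below computes the basic 8-neighbour LBP code for a single 3x3 patch; it is a generic textbook variant, not necessarily the exact configuration (radius, neighbour count, uniformity mapping) the paper used.

```python
def lbp_code(patch):
    """Local binary pattern code for one 3x3 patch.

    Each of the 8 neighbours is compared with the centre pixel; a
    neighbour >= centre contributes a 1-bit, read clockwise from the
    top-left corner, producing an 8-bit texture code in [0, 255].
    A histogram of these codes over an image forms the LBP feature.
    """
    centre = patch[1][1]
    # Clockwise neighbour order starting at the top-left corner.
    neighbours = [patch[0][0], patch[0][1], patch[0][2],
                  patch[1][2], patch[2][2], patch[2][1],
                  patch[2][0], patch[1][0]]
    code = 0
    for bit, value in enumerate(neighbours):
        if value >= centre:
            code |= 1 << (7 - bit)
    return code
```

Because the code depends only on sign comparisons against the centre pixel, LBP is invariant to monotonic brightness changes, which is what makes it useful as a texture descriptor for MR images.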
18
Copy move forgery detection and segmentation using improved mask region-based convolution network (RCNN). Appl Soft Comput 2022. [DOI: 10.1016/j.asoc.2022.109778] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
19
Albahli S, Masood M. Efficient attention-based CNN network (EANet) for multi-class maize crop disease classification. FRONTIERS IN PLANT SCIENCE 2022; 13:1003152. [PMID: 36311068 PMCID: PMC9597248 DOI: 10.3389/fpls.2022.1003152] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 07/29/2022] [Accepted: 09/26/2022] [Indexed: 06/16/2023]
Abstract
Maize leaf disease significantly reduces the quality and overall crop yield. Therefore, it is crucial to monitor and diagnose illnesses during the growth season to take necessary actions. However, accurate identification is challenging to achieve, as existing automated methods are computationally complex or perform well only on images with a simple background, whereas realistic field conditions include a lot of background noise that makes this task difficult. In this study, we present an end-to-end learning CNN architecture, the Efficient Attention Network (EANet), based on the EfficientNetv2 model, to identify multi-class maize crop diseases. To further enhance the capacity of the feature representation, we introduce a spatial-channel attention mechanism to focus on affected locations and help the detection network accurately recognize multiple diseases. We trained the EANet model using focal loss to overcome class-imbalance issues and transfer learning to enhance network generalization. We evaluated the presented approach on publicly available datasets with samples captured under various challenging environmental conditions, such as varying backgrounds, non-uniform light, and chrominance variances. Our approach showed an overall accuracy of 99.89% for the categorization of various maize crop diseases. The experimental and visual findings reveal that our model outperforms conventional CNNs, and the attention mechanism properly accentuates the disease-relevant information by ignoring the background noise.
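Focal loss, used above to handle class-imbalanced data, down-weights easy, well-classified examples so training concentrates on the hard minority-class samples. A minimal binary-case sketch follows; the gamma and alpha values are the commonly cited defaults, an assumption rather than the paper's settings.

```python
import math

def focal_loss(p, target, gamma=2.0, alpha=0.25):
    """Binary focal loss for one example.

    p: predicted probability of the positive class; target: 0 or 1.
    The (1 - p_t)^gamma factor shrinks the loss of confident, correct
    predictions toward zero, while alpha rebalances the two classes.
    With gamma = 0 and alpha = 0.5 this reduces to (half the) ordinary
    cross-entropy.
    """
    p_t = p if target == 1 else 1.0 - p
    alpha_t = alpha if target == 1 else 1.0 - alpha
    return -alpha_t * (1.0 - p_t) ** gamma * math.log(p_t)
```

The effect is visible numerically: an easy positive (p = 0.9) contributes orders of magnitude less loss than a hard one (p = 0.1), so gradients are dominated by the misclassified samples.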
Affiliation(s)
- Saleh Albahli
- Department of Information Technology, College of Computer, Qassim University, Buraydah, Saudi Arabia
- Momina Masood
- Department of Computer Science, University of Engineering and Technology, Taxila, Pakistan
20
Albahli S, Nawaz M. DCNet: DenseNet-77-based CornerNet model for the tomato plant leaf disease detection and classification. FRONTIERS IN PLANT SCIENCE 2022; 13:957961. [PMID: 36160977 PMCID: PMC9499263 DOI: 10.3389/fpls.2022.957961] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 05/31/2022] [Accepted: 08/12/2022] [Indexed: 06/16/2023]
Abstract
Early recognition of tomato plant leaf diseases is mandatory to improve food yield and save agriculturalists from costly spray procedures. The correct and timely identification of several tomato plant leaf diseases is a complicated task, as the healthy and affected areas of plant leaves are highly similar. Moreover, light variation, color and brightness changes, and the occurrence of blurring and noise in the images further increase the complexity of the detection process. In this article, we present a robust deep learning approach for tackling the existing issues of tomato plant leaf disease detection and classification. We propose a novel approach, namely a DenseNet-77-based CornerNet model, for the localization and classification of tomato plant leaf abnormalities. Specifically, we use DenseNet-77 as the backbone network of the CornerNet. This assists in computing a more representative set of image features from the suspected samples, which are later categorized into 10 classes by the one-stage detector of the CornerNet model. We evaluated the proposed solution on a standard dataset, named PlantVillage, which is challenging in nature as it contains samples with immense brightness alterations, color variations, and leaf images of different dimensions and shapes. We attained an average accuracy of 99.98% on the employed dataset. We conducted several experiments to confirm the effectiveness of our approach for the timely recognition of tomato plant leaf diseases, which can assist agriculturalists in replacing manual systems.
Affiliation(s)
- Saleh Albahli
- Department of Information Technology, College of Computer, Qassim University, Buraydah, Saudi Arabia
- Marriam Nawaz
- Department of Computer Science, University of Engineering and Technology–Taxila, Taxila, Pakistan
- Department of Software Engineering, University of Engineering and Technology–Taxila, Taxila, Pakistan
21
An attention-guided convolutional neural network for automated classification of brain tumor from MRI. Neural Comput Appl 2022. [DOI: 10.1007/s00521-022-07742-z] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/14/2022]
22
Accurate Brain Tumor Detection Using Deep Convolutional Neural Network. Comput Struct Biotechnol J 2022; 20:4733-4745. [PMID: 36147663 PMCID: PMC9468505 DOI: 10.1016/j.csbj.2022.08.039] [Citation(s) in RCA: 11] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/17/2022] [Revised: 08/09/2022] [Accepted: 08/16/2022] [Indexed: 11/28/2022] Open
Abstract
Detection and classification of a brain tumor is an important step toward better understanding its mechanism. Magnetic Resonance Imaging (MRI) is a medical imaging technique that helps the radiologist find the tumor region. However, examining MRI images manually is a time-consuming process that requires expertise. Nowadays, advances in Computer-assisted Diagnosis (CAD), machine learning, and deep learning in particular allow the radiologist to identify brain tumors more reliably. Traditional machine learning methods used to tackle this problem require handcrafted features for classification, whereas deep learning methods can be designed to require no handcrafted feature extraction while achieving accurate classification results. This paper proposes two deep learning models to identify both binary (normal and abnormal) and multiclass (meningioma, glioma, and pituitary) brain tumors. We use two publicly available datasets that include 3064 and 152 MRI images, respectively. To build our models, we first apply a 23-layer convolutional neural network (CNN) to the first dataset, since it contains a large number of MRI images for training. However, when dealing with limited volumes of data, as in the second dataset, our proposed “23-layer CNN” architecture faces an overfitting problem. To address this issue, we use transfer learning and combine the VGG16 architecture with a reflection of our proposed “23-layer CNN” architecture. Finally, we compare our proposed models with those reported in the literature. Our experimental results indicate that our models achieve up to 97.8% and 100% classification accuracy on our employed datasets, respectively, exceeding all other state-of-the-art models. Our proposed models, employed datasets, and all the source codes are publicly available at: (https://github.com/saikat15010/Brain-Tumor-Detection).
23
Efficient 3D AlexNet Architecture for Object Recognition Using Syntactic Patterns from Medical Images. COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE 2022; 2022:7882924. [PMID: 35634047 PMCID: PMC9142332 DOI: 10.1155/2022/7882924] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 03/16/2022] [Revised: 04/05/2022] [Accepted: 04/09/2022] [Indexed: 12/04/2022]
Abstract
In computer vision and medical image processing, object recognition is the primary concern today. Humans require only a few milliseconds for object recognition and visual stimulation. This led to the development, in this study, of a computer-specific pattern recognition method for identifying objects in medical images such as brain tumors. Initially, an adaptive median filter is used to remove the noise from MRI images. Thereafter, a contrast image enhancement technique is used to improve the quality of the image. To evaluate the wireframe model, a cellular logic array processing (CLAP)-based algorithm is then applied to the images. The basic patterns of three-dimensional (3D) images are then identified by scanning the whole input image, and the frequency of these patterns is also used for object classification. A deep neural network is then utilized for the classification of brain tumors. In the proposed model, the syntactic pattern recognition technique is used to find the feature vector, and a 3D AlexNet is used for brain tumor classification. To evaluate the performance of the proposed work, four benchmark brain tumor datasets are used, i.e., the Figshare, Brain MRI Kaggle, and Medical MRI datasets, and the BraTS 2019 dataset. The comparative analyses reveal that the proposed brain tumor classification model achieves significantly better performance than the existing models.
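The adaptive median filter named as the denoising step above grows its window per pixel until the local median is not itself an extreme value, then replaces the pixel only if it looks like impulse (salt-and-pepper) noise. The sketch below is a simplified pure-Python version of that classical scheme, not the authors' exact implementation.

```python
import statistics

def adaptive_median_filter(image, max_window=7):
    """Adaptive median filtering of a 2-D list of pixel values.

    Stage A: enlarge the window until the window median lies strictly
    between the window min and max (i.e. the median is not impulse noise).
    Stage B: keep the original pixel unless it is itself an extreme value,
    in which case replace it with the median.
    """
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]
    for y in range(h):
        for x in range(w):
            size = 3
            while True:
                r = size // 2
                window = [image[j][i]
                          for j in range(max(0, y - r), min(h, y + r + 1))
                          for i in range(max(0, x - r), min(w, x + r + 1))]
                med = statistics.median(window)
                lo, hi = min(window), max(window)
                if lo < med < hi:
                    # Stage B: replace only pixels that look like impulses.
                    out[y][x] = image[y][x] if lo < image[y][x] < hi else med
                    break
                size += 2
                if size > max_window:
                    out[y][x] = med  # window limit reached: fall back to median
                    break
    return out
```

Unlike a plain median filter, this variant preserves non-noisy detail (Stage B leaves ordinary pixels untouched) while still removing isolated spikes such as a single saturated pixel.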
24
Intelligent Ultra-Light Deep Learning Model for Multi-Class Brain Tumor Detection. APPLIED SCIENCES-BASEL 2022. [DOI: 10.3390/app12083715] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/12/2022]
Abstract
The diagnosis and surgical resection of brain tumors using Magnetic Resonance (MR) images is a challenging task when trying to minimize neurological defects after surgery, owing to the non-linear nature of tumor size, shape, and textural variation. Radiologists, clinical experts, and brain surgeons examine brain MRI scans using the available methods, which are tedious, error-prone, and time-consuming, and still exhibit positional errors of up to 2–3 mm, which is very high in the case of brain cells. In this context, we propose an automated Ultra-Light Brain Tumor Detection (UL-BTD) system based on a novel Ultra-Light Deep Learning Architecture (UL-DLA) for deep features, integrated with highly distinctive textural features extracted by the Gray Level Co-occurrence Matrix (GLCM). Together these form a Hybrid Feature Space (HFS), which is used for tumor detection with a Support Vector Machine (SVM), culminating in high prediction accuracy and optimal false negatives with a network small enough to fit within the average GPU resources of a modern PC. The objective of this study is to categorize multi-class, publicly available MRI brain tumor datasets in minimum time, so that real-time tumor detection can be carried out without compromising accuracy. Our proposed framework includes a sensitivity analysis of image size and of One-versus-All and One-versus-One coding schemes, with stringent efforts to assess the complexity and reliability of the proposed system using K-fold cross-validation as part of the evaluation protocol. The best generalization achieved using SVM has an average detection rate of 99.23% (99.18%, 98.86%, and 99.67%) and F-measures of 0.99 (0.99, 0.98, and 0.99) for glioma, meningioma, and pituitary tumors, respectively. Our results improve on the state of the art (97.30%) by 2%, indicating that the system is a candidate for translation into modern hospitals for real-time surgical brain applications. The method needs 11.69 ms per test image with an accuracy of 99.23%, compared to the 15 ms achieved by the earlier state of the art, to detect tumors without any dedicated hardware, providing a route to a desktop application for brain surgery.
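The GLCM features feeding the hybrid feature space above are built from a co-occurrence matrix: a normalized count of how often pairs of grey levels appear at a fixed spatial offset. A small pure-Python sketch, with illustrative offset and level-count choices rather than the paper's parameters:

```python
def glcm(image, dx=1, dy=0, levels=4):
    """Grey-level co-occurrence matrix for one offset (dx, dy).

    image: 2-D list of integer grey levels in [0, levels).
    Counts how often grey level j occurs at offset (dx, dy) from grey
    level i, then normalizes the counts to joint probabilities.
    """
    counts = [[0] * levels for _ in range(levels)]
    h, w = len(image), len(image[0])
    total = 0
    for y in range(h):
        for x in range(w):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                counts[image[y][x]][image[ny][nx]] += 1
                total += 1
    return [[c / total for c in row] for row in counts]

def glcm_contrast(p):
    """Contrast texture feature: large when co-occurring levels differ."""
    return sum(p[i][j] * (i - j) ** 2
               for i in range(len(p)) for j in range(len(p)))
```

Scalar features such as contrast (and, analogously, energy, homogeneity, or correlation) computed from several offsets are what get concatenated with the deep features to form a hybrid descriptor for the SVM.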
25
Nawaz M, Nazir T, Masood M, Mehmood A, Mahum R, Khan MA, Kadry S, Thinnukool O. Analysis of Brain MRI Images Using Improved CornerNet Approach. Diagnostics (Basel) 2021; 11:diagnostics11101856. [PMID: 34679554 PMCID: PMC8535141 DOI: 10.3390/diagnostics11101856] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/10/2021] [Revised: 09/24/2021] [Accepted: 09/27/2021] [Indexed: 01/18/2023] Open
Abstract
A brain tumor is a deadly disease caused by the abnormal growth of brain cells, which affects human blood cells and nerves. Timely and precise detection of brain tumors is important to avoid complex and painful treatment procedures, as it can assist doctors in surgical planning. Manual brain tumor detection is a time-consuming activity and highly dependent on the availability of area experts. Therefore, there is a pressing need to design accurate automated systems for the detection and classification of various types of brain tumors. However, exact localization and categorization of brain tumors is a challenging job due to extensive variations in their size, position, and structure. To deal with these challenges, we have presented a novel approach, namely a DenseNet-41-based CornerNet framework. The proposed solution comprises three steps. Initially, we develop annotations to locate the exact region of interest. In the second step, a custom CornerNet with DenseNet-41 as its base network is introduced to extract deep features from the suspected samples. In the last step, the one-stage CornerNet detector is employed to locate and classify several brain tumors. To evaluate the proposed method, we have utilized two databases, namely the Figshare and Brain MRI datasets, and attained average accuracies of 98.8% and 98.5%, respectively. Both qualitative and quantitative analyses show that our approach is more proficient and consistent at detecting and classifying various types of brain tumors than other recent techniques.
Affiliation(s)
- Marriam Nawaz
- Department of Computer Science, University of Engineering and Technology, Taxila 47050, Pakistan; (M.N.); (T.N.); (M.M.); (A.M.); (R.M.)
- Tahira Nazir
- Department of Computer Science, University of Engineering and Technology, Taxila 47050, Pakistan; (M.N.); (T.N.); (M.M.); (A.M.); (R.M.)
- Momina Masood
- Department of Computer Science, University of Engineering and Technology, Taxila 47050, Pakistan; (M.N.); (T.N.); (M.M.); (A.M.); (R.M.)
- Awais Mehmood
- Department of Computer Science, University of Engineering and Technology, Taxila 47050, Pakistan; (M.N.); (T.N.); (M.M.); (A.M.); (R.M.)
- Rabbia Mahum
- Department of Computer Science, University of Engineering and Technology, Taxila 47050, Pakistan; (M.N.); (T.N.); (M.M.); (A.M.); (R.M.)
- Seifedine Kadry
- Department of Applied Data Science, Noroff University College, 4612 Kristiansand, Norway;
- Orawit Thinnukool
- Research Group of Embedded Systems and Mobile Application in Health Science, College of Arts, Media and Technology, Chiang Mai University, Chiang Mai 50200, Thailand
- Correspondence: