1
Esfandiari A, Nasiri N. Gene selection and cancer classification using interaction-based feature clustering and improved-binary Bat algorithm. Comput Biol Med 2024; 181:109071. [PMID: 39205342] [DOI: 10.1016/j.compbiomed.2024.109071]
Abstract
In high-dimensional gene expression data, selecting an optimal subset of genes is crucial for achieving high classification accuracy and reliable disease diagnosis. This paper proposes a two-stage hybrid model for gene selection, based on clustering and a swarm intelligence algorithm, to identify the most informative genes with high accuracy. First, a clustering-based multivariate filter explores the interactions between features and eliminates redundant or irrelevant ones. Then, with the problem of premature convergence in the binary Bat algorithm controlled, the optimal gene subset is determined using different classifiers under the Monte Carlo cross-validation data-partitioning model. The effectiveness of the proposed framework is evaluated on eight gene expression datasets, in comparison with other recently published algorithms. Experiments confirm that on seven of the eight datasets, the proposed method achieves superior results in terms of classification accuracy and gene subset size. In particular, it achieves 100% classification accuracy on the Lymphoma and Ovarian datasets and above 97.4% on the rest, with a minimum number of genes. The results demonstrate that the proposed algorithm has the potential to solve the feature selection problem in other applications with high-dimensional datasets.
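Monte Carlo cross-validation, the data-partitioning model named in this abstract, repeatedly draws independent random train/test splits instead of rotating fixed folds. A minimal stdlib sketch of that partitioning scheme (function name and parameters are illustrative, not from the paper):

```python
import random

def monte_carlo_splits(n_samples, n_repeats=5, test_frac=0.3, seed=0):
    """Repeated random train/test partitions (Monte Carlo cross-validation)."""
    rng = random.Random(seed)
    n_test = int(round(n_samples * test_frac))
    splits = []
    for _ in range(n_repeats):
        idx = list(range(n_samples))
        rng.shuffle(idx)
        # first n_test shuffled indices form the test set, the rest train
        splits.append((idx[n_test:], idx[:n_test]))
    return splits

splits = monte_carlo_splits(10, n_repeats=3, test_frac=0.3)
```

Unlike k-fold, the same sample may land in the test set of several repeats, which is what makes the number of repeats a free parameter.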
Affiliation(s)
- Ahmad Esfandiari
- Department of Computer Engineering, Sari Branch, Islamic Azad University, Sari, Iran.
- Niki Nasiri
- Pediatric Infectious Diseases Research Center, Communicable Diseases Institute, Mazandaran University of Medical Sciences, Sari, Iran
2
Chen A, Lin D, Gao Q. Enhancing brain tumor detection in MRI images using YOLO-NeuroBoost model. Front Neurol 2024; 15:1445882. [PMID: 39239397] [PMCID: PMC11374633] [DOI: 10.3389/fneur.2024.1445882]
Abstract
Brain tumors are diseases characterized by abnormal cell growth within or around brain tissue, and include both benign and malignant types. Early detection and precise localization of brain tumors in MRI images are currently lacking, posing challenges to diagnosis and treatment. In this context, accurate target detection of brain tumors in MRI images is particularly important, as it can improve the timeliness of diagnosis and the effectiveness of treatment. To address this challenge, we propose a novel approach, the YOLO-NeuroBoost model. The model combines an improved YOLOv8 algorithm with several techniques: KernelWarehouse dynamic convolution, the Convolutional Block Attention Module (CBAM), and the Inner-GIoU loss function. Our experimental results show that the method achieves mAP scores of 99.48 and 97.71 on the Br35H dataset and the open-source Roboflow dataset, respectively, indicating its high accuracy and efficiency in detecting brain tumors in MRI images. This research holds significant importance for improving the early diagnosis and treatment of brain tumors and opens new possibilities for medical image analysis.
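The Inner-GIoU loss mentioned above builds on plain intersection-over-union between predicted and ground-truth boxes. As a point of reference, standard IoU for axis-aligned boxes can be computed as below (a generic sketch, not the authors' implementation; GIoU adds an enclosing-box penalty and Inner-IoU an auxiliary scaled inner box on top of this):

```python
def iou(box_a, box_b):
    """Intersection-over-Union for (x1, y1, x2, y2) axis-aligned boxes."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # overlap width/height are clamped at zero for disjoint boxes
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0
```

A detector's localization loss is then typically `1 - iou(pred, gt)` plus whichever penalty term the chosen variant defines.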
Affiliation(s)
- Aruna Chen
- College of Mathematics Science, Inner Mongolia Normal University, Hohhot, China
- Center for Applied Mathematical Science, Inner Mongolia, Hohhot, China
- Laboratory of Infinite-Dimensional Hamiltonian System and Its Algorithm Application, Ministry of Education (IMNU), Hohhot, China
- Da Lin
- School of Mathematical Sciences, Inner Mongolia University, Hohhot, China
- Qiqi Gao
- College of Mathematics Science, Inner Mongolia Normal University, Hohhot, China
3
Saqib SM, Mazhar T, Iqbal M, Shahazad T, Almogren A, Ouahada K, Hamam H. Deep learning-based electricity theft prediction in non-smart grid environments. Heliyon 2024; 10:e35167. [PMID: 39166039] [PMCID: PMC11334629] [DOI: 10.1016/j.heliyon.2024.e35167]
Abstract
In developing countries, smart grids are nonexistent, and electricity theft significantly hampers power supply. This research introduces a lightweight deep-learning model that uses monthly customer readings as input. It employs careful direct and indirect feature engineering techniques, including Principal Component Analysis (PCA), t-distributed Stochastic Neighbor Embedding (t-SNE), and Uniform Manifold Approximation and Projection (UMAP), together with resampling methods such as the Random Under-Sampler (RUS), the Synthetic Minority Over-sampling Technique (SMOTE), and the Random Over-Sampler (ROS). Previous studies report high precision, recall, and F1 scores for the non-theft (0) class but poor performance, sometimes as low as 0%, for the theft (1) class. Through parameter tuning and random over-sampling, significant improvements are achieved for the theft (1) class: 89% precision, 94% recall, and a 91% F1 score. The results demonstrate that the proposed model outperforms existing methods, showcasing its efficacy in detecting electricity theft in non-smart grid environments.
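Of the resampling methods listed, the Random Over-Sampler is the one credited with the improvement; it simply duplicates minority-class rows until the classes are balanced. A stdlib sketch of that idea (names and the two-class setup are illustrative):

```python
import random

def random_over_sample(X, y, seed=0):
    """Duplicate minority-class samples until every class matches the largest."""
    rng = random.Random(seed)
    by_class = {}
    for xi, yi in zip(X, y):
        by_class.setdefault(yi, []).append(xi)
    target = max(len(rows) for rows in by_class.values())
    X_out, y_out = [], []
    for label, rows in by_class.items():
        # draw extra samples (with replacement) from the existing rows
        extra = [rng.choice(rows) for _ in range(target - len(rows))]
        for xi in rows + extra:
            X_out.append(xi)
            y_out.append(label)
    return X_out, y_out

X_res, y_res = random_over_sample([[1], [2], [3], [4], [5]], [0, 0, 0, 0, 1])
```

Because ROS only copies real rows (unlike SMOTE, which interpolates synthetic ones), it must be applied after the train/test split to avoid leaking duplicated test samples into training.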
Affiliation(s)
- Sheikh Muhammad Saqib
- Department of Computing and Information Technology, Gomal University, Dera Ismail Khan, Pakistan
- Tehseen Mazhar
- Department of Computer Science, Virtual University of Pakistan, Lahore, 51000, Pakistan
- Muhammad Iqbal
- Department of Computing and Information Technology, Gomal University, Dera Ismail Khan, Pakistan
- Tariq Shahazad
- School of Electrical Engineering, Dept. of Electrical and Electronic Eng. Science, University of Johannesburg, Johannesburg, 2006, South Africa
- Ahmad Almogren
- Department of Computer Science, College of Computer and Information Sciences, King Saud University, Riyadh, 11633, Saudi Arabia
- Khmaies Ouahada
- School of Electrical Engineering, Dept. of Electrical and Electronic Eng. Science, University of Johannesburg, Johannesburg, 2006, South Africa
- Habib Hamam
- School of Electrical Engineering, Dept. of Electrical and Electronic Eng. Science, University of Johannesburg, Johannesburg, 2006, South Africa
- Faculty of Engineering, Université de Moncton, Moncton, NB, E1A3E9, Canada
- Hodmas University College, Taleh Area, Mogadishu, Banadir, 521376, Somalia
- Bridges for Academic Excellence, Tunis, Centre-Ville, 1002, Tunisia
4
Salam A, Ullah F, Amin F, Ahmad Khan I, Garcia Villena E, Kuc Castilla A, de la Torre I. Efficient prediction of anticancer peptides through deep learning. PeerJ Comput Sci 2024; 10:e2171. [PMID: 39145253] [PMCID: PMC11323142] [DOI: 10.7717/peerj-cs.2171]
Abstract
Background: Cancer remains one of the leading causes of mortality globally, and conventional chemotherapy often results in severe side effects and limited effectiveness. Recent advances in bioinformatics and machine learning, particularly deep learning, offer promising new avenues for cancer treatment through the prediction and identification of anticancer peptides. Objective: This study aimed to develop and evaluate a deep learning model based on a two-dimensional convolutional neural network (2D CNN) to improve the prediction accuracy of anticancer peptides, addressing the complexities and limitations of current prediction methods. Methods: A diverse dataset of peptide sequences with annotated anticancer activity labels was compiled from public databases and experimental studies. The sequences were preprocessed and encoded using one-hot encoding and additional physicochemical properties. The 2D CNN model was trained and optimized on this dataset, with performance evaluated using accuracy, precision, recall, F1-score, and area under the receiver operating characteristic curve (AUC-ROC). Results: The proposed 2D CNN achieved superior performance compared to existing methods, with an accuracy of 0.87, precision of 0.85, recall of 0.89, F1-score of 0.87, and an AUC-ROC of 0.91, indicating its effectiveness in accurately predicting anticancer peptides and capturing intricate spatial patterns within peptide sequences. Conclusion: The findings demonstrate the potential of deep learning, specifically 2D CNNs, in advancing the prediction of anticancer peptides, offering a valuable tool for identifying effective peptide candidates for cancer treatment. Future work: Further research should focus on expanding the dataset, exploring alternative deep learning architectures, validating the model's predictions experimentally, optimizing computational efficiency, and translating these predictions into clinical applications.
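One-hot encoding, the first of the two input representations named in the Methods, turns a peptide into a fixed-size binary matrix that a 2D CNN can consume. A stdlib sketch under the usual 20-residue alphabet (the padding length and function name are illustrative, not from the paper):

```python
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # the 20 standard residues

def one_hot_peptide(seq, max_len=10):
    """Encode a peptide as a (max_len x 20) one-hot matrix, zero-padded."""
    matrix = []
    for pos in range(max_len):
        row = [0] * len(AMINO_ACIDS)
        if pos < len(seq):
            # exactly one 1 per real residue position
            row[AMINO_ACIDS.index(seq[pos])] = 1
        matrix.append(row)
    return matrix

m = one_hot_peptide("ACD", max_len=5)
```

Physicochemical properties would be appended as extra per-residue columns, giving the "image" the 2D CNN convolves over.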
Affiliation(s)
- Abdu Salam
- Department of Computer Science, Abdul Wali Khan University, Mardan, Pakistan
- Faizan Ullah
- Department of Computer Science, Bacha Khan University, Charsadda, Pakistan
- Farhan Amin
- School of Computer Science and Engineering, Yeungnam University, Gyeongsan, Republic of Korea
- Izaz Ahmad Khan
- Department of Computer Science, Bacha Khan University, Charsadda, Pakistan
5
Albalawi E, Thakur A, Dorai DR, Bhatia Khan S, Mahesh TR, Almusharraf A, Aurangzeb K, Anwar MS. Enhancing brain tumor classification in MRI scans with a multi-layer customized convolutional neural network approach. Front Comput Neurosci 2024; 18:1418546. [PMID: 38933391] [PMCID: PMC11199693] [DOI: 10.3389/fncom.2024.1418546]
Abstract
Background: Prompt and accurate brain tumor diagnosis is essential for optimizing treatment strategies and patient prognoses. Traditional reliance on Magnetic Resonance Imaging (MRI) analysis, contingent upon expert interpretation, faces challenges such as time-intensive processes and susceptibility to human error. Objective: This research presents a novel Convolutional Neural Network (CNN) architecture designed to enhance the accuracy and efficiency of brain tumor detection in MRI scans. Methods: The dataset comprises 7,023 brain MRI images from figshare, SARTAJ, and Br35H, categorized into glioma, meningioma, no tumor, and pituitary classes. A single CNN-based multi-task classification model is employed for tumor detection, classification based on grade and type, and tumor location identification. Results: The proposed CNN model incorporates advanced feature extraction capabilities and deep learning optimization techniques. With a tumor classification accuracy of 99%, the method surpasses current methodologies, demonstrating the potential of deep learning in medical applications. Conclusion: This study represents a significant advancement in the early detection and treatment planning of brain tumors, offering a more efficient and accurate alternative to traditional MRI analysis methods.
Affiliation(s)
- Eid Albalawi
- Department of Computer Science, King Faisal University, Al-Ahsa, Saudi Arabia
- Arastu Thakur
- Department of Computer Science and Engineering, Faculty of Engineering and Technology, JAIN (Deemed-to-be University), Bangalore, India
- D. Ramya Dorai
- Department of Information Science and Engineering, Faculty of Engineering and Technology, JAIN (Deemed-to-be University), Bangalore, India
- Surbhi Bhatia Khan
- School of Science, Engineering and Environment, University of Salford, Manchester, United Kingdom
- Department of Electrical and Computer Engineering, Lebanese American University, Byblos, Lebanon
- T. R. Mahesh
- Department of Computer Science and Engineering, Faculty of Engineering and Technology, JAIN (Deemed-to-be University), Bangalore, India
- Ahlam Almusharraf
- Department of Management, College of Business Administration, Princess Nourah Bint Abdulrahman University, Riyadh, Saudi Arabia
- Khursheed Aurangzeb
- Department of Computer Engineering, College of Computer and Information Sciences, King Saud University, Riyadh, Saudi Arabia
6
Albalawi E, T R M, Thakur A, Kumar VV, Gupta M, Khan SB, Almusharraf A. Integrated approach of federated learning with transfer learning for classification and diagnosis of brain tumor. BMC Med Imaging 2024; 24:110. [PMID: 38750436] [PMCID: PMC11097560] [DOI: 10.1186/s12880-024-01261-0]
Abstract
Brain tumor classification using MRI images is a crucial yet challenging task in medical imaging. Accurate diagnosis is vital for effective treatment planning but is often hindered by the complex nature of tumor morphology and variations in imaging. Traditional methodologies rely primarily on manual interpretation of MRI images, supplemented by conventional machine learning techniques, and often lack the robustness and scalability needed for precise, automated tumor classification. Their major limitations include a high degree of manual intervention, potential for human error, limited ability to handle large datasets, and poor generalizability across tumor types and imaging conditions. To address these challenges, we propose a federated learning-based deep learning model that leverages Convolutional Neural Networks (CNNs) for automated and accurate brain tumor classification. The approach uses a modified VGG16 architecture optimized for brain MRI images and highlights the significance of federated learning and transfer learning in the medical imaging domain. Federated learning enables decentralized model training across multiple clients without compromising data privacy, addressing the critical need for confidentiality in medical data handling. Transfer learning from a pre-trained CNN significantly enhances the model's ability to classify brain tumors accurately by leveraging knowledge gained from large and diverse datasets. The model is trained on a diverse dataset combining the figshare, SARTAJ, and Br35H collections, using a federated learning approach for decentralized, privacy-preserving training; transfer learning further bolsters its performance on the intricate variations in MRI images associated with different tumor types. The model demonstrates high precision (0.99 for glioma, 0.95 for meningioma, 1.00 for no tumor, and 0.98 for pituitary), recall, and F1-scores, outperforming existing methods, with an overall accuracy of 98%. These results highlight the transformative potential of federated learning and transfer learning for brain tumor classification from MRI images.
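The decentralized training described above is usually realized with federated averaging: each client trains locally, and a server merges the resulting parameters weighted by each client's dataset size. The abstract does not name the exact aggregation rule, so this is a generic FedAvg sketch over flat parameter lists:

```python
def fed_avg(client_weights, client_sizes):
    """Federated averaging: merge client parameters weighted by data size."""
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    merged = [0.0] * n_params
    for weights, size in zip(client_weights, client_sizes):
        for i, w in enumerate(weights):
            # each client contributes proportionally to its dataset size
            merged[i] += w * (size / total)
    return merged

# two clients: the second holds 3x as much data, so it dominates the average
merged = fed_avg([[1.0, 2.0], [3.0, 4.0]], [1, 3])
```

Raw MRI scans never leave the clients; only these parameter vectors are exchanged, which is what preserves privacy.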
Affiliation(s)
- Eid Albalawi
- Department of Computer Science, College of Computer Science and Information Technology, King Faisal University, 31982, Hofuf, Saudi Arabia
- Mahesh T R
- Department of Computer Science and Engineering, Faculty of Engineering and Technology, JAIN (Deemed-to-be University), 562112, Bangalore, India
- Arastu Thakur
- Department of Computer Science and Engineering, Faculty of Engineering and Technology, JAIN (Deemed-to-be University), 562112, Bangalore, India
- V Vinoth Kumar
- School of Computer Science Engineering and Information Systems, Vellore Institute of Technology, 632014, Vellore, India
- Muskan Gupta
- Department of Computer Science and Engineering, Faculty of Engineering and Technology, JAIN (Deemed-to-be University), 562112, Bangalore, India
- Surbhi Bhatia Khan
- School of Science, Engineering and Environment, University of Salford, M5 4WT, Manchester, UK
- Department of Electrical and Computer Engineering, Lebanese American University, Byblos, Lebanon
- Ahlam Almusharraf
- Department of Business Administration, College of Business and Administration, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh, 11671, Saudi Arabia
7
Krishnapriya S, Karuna Y. A deep learning model for the localization and extraction of brain tumors from MR images using YOLOv7 and grab cut algorithm. Front Oncol 2024; 14:1347363. [PMID: 38680854] [PMCID: PMC11045991] [DOI: 10.3389/fonc.2024.1347363]
Abstract
Introduction: Brain tumors are a common disease affecting millions of people worldwide. Given the severity of brain tumors (BT), it is important to diagnose the disease in its early stages. With advances in the diagnostic process, Magnetic Resonance Imaging (MRI) has been used extensively for disease detection. However, accurate identification of BT is a complex task, and conventional techniques are not sufficiently robust to localize and extract tumors in MRI images. Therefore, in this study, we combine a deep learning model with a segmentation algorithm to localize and extract tumors from MR images. Method: This paper presents a deep learning-based You Only Look Once (YOLOv7) model in combination with the GrabCut algorithm, which extracts the foreground of the tumor image to enhance detection. YOLOv7 localizes the tumor region, and GrabCut extracts the tumor from the localized region. Results: The performance of the YOLOv7 model with and without GrabCut is evaluated. The results show that the proposed approach outperforms other techniques, such as hybrid CNN-SVM, YOLOv5, and YOLOv6, in terms of accuracy, precision, recall, specificity, and F1 score. Discussion: The proposed technique achieves a high Dice score between tumor-extracted images and ground-truth images, and the inclusion of the GrabCut algorithm improves the performance of the YOLOv7 model compared to the model without it.
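The pipeline above is localize-then-extract: the detector returns a bounding box, and segmentation runs only inside it. As a toy stand-in for that hand-off (plain lists instead of image arrays; in practice GrabCut would further separate foreground from background *within* the box rather than keep it whole):

```python
def extract_region(image, box):
    """Keep pixels inside the detected (x1, y1, x2, y2) box; zero the rest.

    A crude placeholder for the extraction stage: the detector supplies
    the box, and the segmenter only ever looks inside it.
    """
    x1, y1, x2, y2 = box
    return [
        [px if (x1 <= x < x2 and y1 <= y < y2) else 0 for x, px in enumerate(row)]
        for y, row in enumerate(image)
    ]

img = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
out = extract_region(img, (1, 1, 3, 3))
```

Restricting segmentation to the detected box is what makes the combination cheaper and more accurate than segmenting the full image.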
Affiliation(s)
- Yepuganti Karuna
- School of Electronics Engineering, Vellore Institute of Technology, Vellore, India
8
Çetin-Kaya Y, Kaya M. A Novel Ensemble Framework for Multi-Classification of Brain Tumors Using Magnetic Resonance Imaging. Diagnostics (Basel) 2024; 14:383. [PMID: 38396422] [PMCID: PMC10888105] [DOI: 10.3390/diagnostics14040383]
Abstract
Brain tumors can have fatal consequences and affect many body functions. It is therefore essential to detect brain tumor types accurately and at an early stage so that the appropriate treatment can begin. Although convolutional neural networks (CNNs) are widely used for disease detection from medical images, they are prone to overfitting when trained on limited, insufficiently diverse labeled datasets. Existing studies use transfer learning and ensemble models to overcome these problems, but suitable model combinations and weighting ratios for the ensemble technique are still lacking. In the framework proposed in this study, several CNN models with different architectures are trained with transfer learning and fine-tuning on three brain tumor datasets. A particle swarm optimization-based algorithm then determines the optimal weights for combining the five most successful CNN models in an ensemble. The results across the three datasets are as follows: Dataset 1, 99.35% accuracy and 99.20% F1-score; Dataset 2, 98.77% accuracy and 98.92% F1-score; and Dataset 3, 99.92% accuracy and 99.92% F1-score. These consistent performances show that the proposed framework is reliable for classification. As a result, it outperforms existing studies, offering clinicians enhanced decision-making support through its high-accuracy classification performance.
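Once the per-model weights are found (here by particle swarm optimization), combining the ensemble is just a weighted average of each model's class-probability vector. A stdlib sketch of that combination step only, with illustrative numbers (the PSO search itself is not shown):

```python
def weighted_ensemble(prob_lists, weights):
    """Combine per-model class-probability vectors with given weights."""
    total = sum(weights)
    n_classes = len(prob_lists[0])
    combined = [0.0] * n_classes
    for probs, w in zip(prob_lists, weights):
        for i, p in enumerate(probs):
            # weights are normalized so the result is still a distribution
            combined[i] += p * (w / total)
    return combined

# two models disagree on a 2-class problem; the weighting favours the first
probs = weighted_ensemble([[0.9, 0.1], [0.4, 0.6]], [3.0, 1.0])
```

The PSO objective would evaluate candidate weight vectors by the accuracy of exactly this combined output on a validation set.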
Affiliation(s)
- Yasemin Çetin-Kaya
- Department of Computer Engineering, Faculty of Engineering and Architecture, Tokat Gaziosmanpaşa University, Tokat 60250, Turkey
- Mahir Kaya
- Department of Computer Engineering, Faculty of Engineering and Architecture, Tokat Gaziosmanpaşa University, Tokat 60250, Turkey
9
Ullah MS, Khan MA, Masood A, Mzoughi O, Saidani O, Alturki N. Brain tumor classification from MRI scans: a framework of hybrid deep learning model with Bayesian optimization and quantum theory-based marine predator algorithm. Front Oncol 2024; 14:1335740. [PMID: 38390266] [PMCID: PMC10882068] [DOI: 10.3389/fonc.2024.1335740]
Abstract
Brain tumor classification is one of the most difficult tasks in clinical diagnosis and treatment within medical image analysis, and errors during the diagnosis process can shorten a patient's life. Nevertheless, most current techniques focus on extracting and selecting deep features while ignoring certain features of particular significance and relevance to the classification problem. Deep learning-based categorization of brain tumors from brain magnetic resonance imaging (MRI) is therefore an important area of research. This paper proposes an automated deep learning model and an optimal information fusion framework for classifying brain tumors from MRI images. The dataset used in this work was imbalanced, a key challenge for training the selected networks, because class imbalance biases classifier performance in favor of the majority class. We designed a sparse autoencoder network to generate new images and resolve the imbalance. Two pretrained neural networks were then modified, with hyperparameters initialized using Bayesian optimization, and used for training. Deep features were subsequently extracted from the global average pooling layer. Because the extracted features contained some irrelevant information, we propose an improved Quantum Theory-based Marine Predator Optimization algorithm (QTbMPA), which selects the best features of both networks and fuses them using a serial-based approach. The fused feature set is passed to neural network classifiers for the final classification. Tested on an augmented figshare dataset, the proposed framework obtains an improved accuracy of 99.80%, a sensitivity of 99.83%, a false negative rate of 0.17%, and a precision of 99.83%. A comparison and ablation study confirm the improvement in accuracy achieved by this work.
Affiliation(s)
- Anum Masood
- Department of Physics, Norwegian University of Science and Technology, Trondheim, Norway
- Olfa Mzoughi
- Department of Computer Science, College of Computer Engineering and Sciences, Prince Sattam bin Abdulaziz University, Al-Kharj, Saudi Arabia
- Oumaima Saidani
- Department of Information Systems, College of Computer and Information Sciences, Princess Nourah Bint Abdulrahman University, Riyadh, Saudi Arabia
- Nazik Alturki
- Department of Information Systems, College of Computer and Information Sciences, Princess Nourah Bint Abdulrahman University, Riyadh, Saudi Arabia
10
Pitarch C, Ungan G, Julià-Sapé M, Vellido A. Advances in the Use of Deep Learning for the Analysis of Magnetic Resonance Image in Neuro-Oncology. Cancers (Basel) 2024; 16:300. [PMID: 38254790] [PMCID: PMC10814384] [DOI: 10.3390/cancers16020300]
Abstract
Machine Learning is entering a phase of maturity, but its medical applications still lag behind in terms of practical use. The field of oncological radiology (and neuro-oncology in particular) is at the forefront of these developments, now boosted by the success of Deep-Learning methods for the analysis of medical images. This paper reviews in detail some of the most recent advances in the use of Deep Learning in this field, from the broader topic of the development of Machine-Learning-based analytical pipelines to specific instantiations of the use of Deep Learning in neuro-oncology; the latter including its use in the groundbreaking field of ultra-low field magnetic resonance imaging.
Collapse
Affiliation(s)
- Carla Pitarch
- Department of Computer Science, Universitat Politècnica de Catalunya (UPC BarcelonaTech) and Intelligent Data Science and Artificial Intelligence (IDEAI-UPC) Research Center, 08034 Barcelona, Spain
- Eurecat, Digital Health Unit, Technology Centre of Catalonia, 08005 Barcelona, Spain
- Gulnur Ungan
- Departament de Bioquímica i Biologia Molecular and Institut de Biotecnologia i Biomedicina (IBB), Universitat Autònoma de Barcelona (UAB), 08193 Barcelona, Spain
- Centro de Investigación Biomédica en Red (CIBER), 28029 Madrid, Spain
- Margarida Julià-Sapé
- Departament de Bioquímica i Biologia Molecular and Institut de Biotecnologia i Biomedicina (IBB), Universitat Autònoma de Barcelona (UAB), 08193 Barcelona, Spain
- Centro de Investigación Biomédica en Red (CIBER), 28029 Madrid, Spain
- Alfredo Vellido
- Department of Computer Science, Universitat Politècnica de Catalunya (UPC BarcelonaTech) and Intelligent Data Science and Artificial Intelligence (IDEAI-UPC) Research Center, 08034 Barcelona, Spain
- Centro de Investigación Biomédica en Red (CIBER), 28029 Madrid, Spain
11
Rasa SM, Islam MM, Talukder MA, Uddin MA, Khalid M, Kazi M, Kazi MZ. Brain tumor classification using fine-tuned transfer learning models on magnetic resonance imaging (MRI) images. Digit Health 2024; 10:20552076241286140. [PMID: 39381813] [PMCID: PMC11459499] [DOI: 10.1177/20552076241286140]
Abstract
Objective: Brain tumors are a leading global cause of mortality, often reducing life expectancy and complicating recovery; early detection significantly improves survival rates. This paper introduces an efficient deep learning model to expedite brain tumor detection through timely and accurate identification in magnetic resonance imaging images. Methods: Our approach leverages deep transfer learning with six transfer learning algorithms: VGG16, ResNet50, MobileNetV2, DenseNet201, EfficientNetB3, and InceptionV3. We optimize data preprocessing, upsample the data through augmentation, and train the models with two optimizers, Adam and AdaMax. We perform three experiments with binary and multi-class datasets, fine-tuning parameters to reduce overfitting, and analyze model effectiveness using various performance scores with and without cross-validation. Results: With smaller datasets, the models achieve 100% accuracy in both training and testing without cross-validation. After applying cross-validation, the framework records an outstanding accuracy of 99.96% with a receiver operating characteristic of 100% on average across five tests. For larger datasets, accuracy ranges from 96.34% to 98.20% across the different models. The methodology also has a small computation time, contributing to its reliability and speed. Conclusion: The study establishes a new standard for brain tumor classification, surpassing existing methods in accuracy and efficiency. Our deep learning approach, incorporating advanced transfer learning algorithms and optimized data processing, provides a robust and rapid solution for brain tumor detection.
Affiliation(s)
- Sadia Maduri Rasa
- Department of Computer Science and Engineering, Jagannath University, Dhaka, Bangladesh
- Mohammed Alamin Talukder
- Department of Computer Science and Engineering, International University of Business Agriculture and Technology, Dhaka, Bangladesh
- Majdi Khalid
- Department of Computer Science and Artificial Intelligence, College of Computing, Umm Al-Qura University, Makkah, Saudi Arabia
- Mohsin Kazi
- Department of Pharmaceutics, College of Pharmacy, King Saud University, Riyadh, Saudi Arabia
- Mohammed Zobayer Kazi
- Department of Computer Science and Engineering, Jagannath University, Dhaka, Bangladesh
12
B. A, Kaur M, Singh D, Roy S, Amoon M. Efficient Skip Connections-Based Residual Network (ESRNet) for Brain Tumor Classification. Diagnostics (Basel) 2023; 13:3234. [PMID: 37892055] [PMCID: PMC10606037] [DOI: 10.3390/diagnostics13203234]
Abstract
Brain tumors pose a complex and urgent challenge in medical diagnostics, requiring precise and timely classification due to their diverse characteristics and potentially life-threatening consequences. While existing deep learning (DL)-based brain tumor classification (BTC) models have shown significant progress, they encounter limitations like restricted depth, vanishing gradient issues, and difficulties in capturing intricate features. To address these challenges, this paper proposes an efficient skip connections-based residual network (ESRNet), leveraging the residual network (ResNet) with skip connections. ESRNet ensures smooth gradient flow during training, mitigating the vanishing gradient problem. Additionally, the ESRNet architecture includes multiple stages with increasing numbers of residual blocks for improved feature learning and pattern recognition. ESRNet utilizes residual blocks from the ResNet architecture, featuring skip connections that enable identity mapping. Through direct addition of the input tensor to the convolutional layer output within each block, skip connections preserve the gradient flow. This mechanism prevents vanishing gradients, ensuring effective information propagation across network layers during training. Furthermore, ESRNet integrates efficient downsampling techniques and stabilizing batch normalization layers, which collectively contribute to its robust and reliable performance. Extensive experimental results reveal that ESRNet significantly outperforms other approaches in terms of accuracy, sensitivity, specificity, F-score, and Kappa statistics, with median values of 99.62%, 99.68%, 99.89%, 99.47%, and 99.42%, respectively. Moreover, the achieved minimum performance metrics, including accuracy (99.34%), sensitivity (99.47%), specificity (99.79%), F-score (99.04%), and Kappa statistics (99.21%), underscore the exceptional effectiveness of ESRNet for BTC. Therefore, the proposed ESRNet showcases exceptional performance and efficiency in BTC, holding the potential to revolutionize clinical diagnosis and treatment planning.
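The identity-mapping idea behind skip connections of the kind this abstract describes can be shown with a minimal sketch (NumPy; the toy `residual_block` and its weight shapes are illustrative assumptions, not ESRNet's actual architecture):

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def residual_block(x, w):
    """Toy residual block: branch F(x) = relu(x @ w), output = F(x) + x.

    The identity shortcut adds the input tensor directly to the branch
    output, so the gradient of the output w.r.t. the input always
    contains an identity term -- the mechanism that mitigates
    vanishing gradients in deep stacks.
    """
    return relu(x @ w) + x  # skip connection: add input to branch output

x = np.ones((2, 4))
w_zero = np.zeros((4, 4))        # branch contributes nothing here...
out = residual_block(x, w_zero)
assert np.allclose(out, x)       # ...so the block reduces to an identity map
```

With non-zero weights the branch refines the representation, while the shortcut still carries the input through unchanged.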
Affiliation(s)
- Ashwini B.
- Department of ISE, NMAM Institute of Technology, Nitte (Deemed to be University), Nitte 574110, India
- Manjit Kaur
- School of Computer Science and Artificial Intelligence, SR University, Warangal 506371, India
- Dilbag Singh
- Center of Biomedical Imaging, Department of Radiology, New York University Grossman School of Medicine, New York, NY 10016, USA
- Research and Development Cell, Lovely Professional University, Phagwara 144411, India
- Satyabrata Roy
- Department of Computer Science and Engineering, Manipal University Jaipur, Jaipur 303007, India
- Mohammed Amoon
- Department of Computer Science, Community College, King Saud University, P.O. Box 28095, Riyadh 11437, Saudi Arabia

13
Ullah N, Javed A, Alhazmi A, Hasnain SM, Tahir A, Ashraf R. TumorDetNet: A unified deep learning model for brain tumor detection and classification. PLoS One 2023; 18:e0291200. [PMID: 37756305] [PMCID: PMC10530039] [DOI: 10.1371/journal.pone.0291200]
Abstract
Accurate diagnosis of the brain tumor type at an earlier stage is crucial for the treatment process and helps to save the lives of a large number of people worldwide. Because they are non-invasive and spare patients from having an unpleasant biopsy, magnetic resonance imaging (MRI) scans are frequently employed to identify tumors. The manual identification of tumors is difficult and requires considerable time due to the large number of three-dimensional images that an MRI scan of one patient's brain produces from various angles. Moreover, the variations in location, size, and shape of the brain tumor also make it challenging to detect and classify different types of tumors. Thus, computer-aided diagnostics (CAD) systems have been proposed for the detection of brain tumors. In this paper, we propose a novel unified end-to-end deep learning model named TumorDetNet for brain tumor detection and classification. Our TumorDetNet framework employs 48 convolution layers with leaky ReLU (LReLU) and ReLU activation functions to compute the most distinctive deep feature maps. Moreover, average pooling and a dropout layer are also used to learn distinctive patterns and reduce overfitting. Finally, one fully connected layer and a softmax layer are employed to detect and classify the brain tumor into multiple types. We assessed the performance of our method on six standard Kaggle brain tumor MRI datasets for brain tumor detection and for classification into malignant and benign, and into glioma, pituitary, and meningioma. Our model successfully identified brain tumors with remarkable accuracy of 99.83%, classified benign and malignant brain tumors with an ideal accuracy of 100%, and meningioma, pituitary, and glioma tumors with an accuracy of 99.27%. These outcomes demonstrate the potency of the suggested methodology for the reliable identification and categorization of brain tumors.
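The two activation functions named above differ only in how they treat negative inputs; a minimal sketch (NumPy; the 0.01 negative slope is a common default, not a value taken from the paper):

```python
import numpy as np

def relu(x):
    # zeroes out all negative pre-activations
    return np.maximum(0.0, x)

def leaky_relu(x, alpha=0.01):
    # negative inputs keep a small slope instead of being zeroed,
    # so units with negative pre-activations still receive gradient
    return np.where(x >= 0, x, alpha * x)

x = np.array([-2.0, -0.5, 0.0, 1.5])
print(relu(x))        # negatives become 0, positives pass through
print(leaky_relu(x))  # negatives are scaled by alpha = 0.01
```

Mixing the two, as TumorDetNet reportedly does, lets some layers discard negative responses outright while others preserve a trace of them.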
Affiliation(s)
- Naeem Ullah
- Department of Software Engineering, University of Engineering and Technology, Taxila, Pakistan
- Ali Javed
- Department of Software Engineering, University of Engineering and Technology, Taxila, Pakistan
- Ali Alhazmi
- College of Computer Science and Information Technology, Jazan University, Jazan, Saudi Arabia
- Syed M. Hasnain
- Department of Mathematics and Natural Sciences, Prince Mohammad Bin Fahd University, Al Kobar, Saudi Arabia
- Ali Tahir
- College of Computer Science and Information Technology, Jazan University, Jazan, Saudi Arabia
- Rehan Ashraf
- Department of Computer Science, National Textile University, Faisalabad, Pakistan

14
Ravinder M, Saluja G, Allabun S, Alqahtani MS, Abbas M, Othman M, Soufiene BO. Enhanced brain tumor classification using graph convolutional neural network architecture. Sci Rep 2023; 13:14938. [PMID: 37697022] [PMCID: PMC10495443] [DOI: 10.1038/s41598-023-41407-8]
Abstract
A brain tumor is a highly critical condition characterized by the uncontrolled growth of an abnormal cell cluster in the brain. Early brain tumor detection is essential for accurate diagnosis and effective treatment planning. In this paper, a novel Convolutional Neural Network (CNN) based Graph Neural Network (GNN) model is proposed using the publicly available Brain Tumor dataset from Kaggle to predict whether a person has a brain tumor and, if so, of which type (meningioma, pituitary, or glioma). The objective of this research is to address the non-consideration of non-Euclidean distances in image data and the inability of conventional models to learn pixel similarity based upon pixel proximity. To solve this problem, a Graph-based Convolutional Neural Network (GCNN) model is proposed, and it is found that the proposed model accounts for non-Euclidean distances in images. We aimed at improving brain tumor detection and classification using a novel technique which combines a GNN and a 26-layer CNN that takes in a graph input pre-convolved using a graph convolution operation. The objective of graph convolution is to modify the node features (data linked to each node) by combining information from nearby nodes. A standard pre-computed adjacency matrix is used, and the input graphs were updated as the averaged sum of local neighbor nodes, which carry the regional information about the tumor. These modified graphs are given as the input matrices to a standard 26-layer CNN with batch normalization and dropout layers intact. Five different networks, namely Net-0, Net-1, Net-2, Net-3, and Net-4, are proposed, and it is found that Net-2 outperformed the other networks. The highest accuracy achieved was 95.01%, by Net-2. With its current effectiveness, the proposed model represents a viable alternative for the detection of brain tumors in patients who are suspected of having one.
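The neighbor-averaging update described in this abstract can be sketched in a few lines (NumPy; the self-loop and row normalization are common graph-convolution conventions, assumed here rather than taken from the paper):

```python
import numpy as np

def graph_conv(X, A):
    """One propagation step: each node's features become the average of
    its neighbors' features (self-loop included).

    X: (n, d) node-feature matrix, A: (n, n) precomputed adjacency matrix.
    """
    A_hat = A + np.eye(A.shape[0])          # add self-loops
    deg = A_hat.sum(axis=1, keepdims=True)  # node degrees
    return (A_hat @ X) / deg                # row-normalized neighbor averaging

# 3-node path graph 0-1-2 with a scalar feature per node
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
X = np.array([[0.0], [3.0], [6.0]])
print(graph_conv(X, A))  # node 1 -> (0 + 3 + 6) / 3 = 3.0
```

Applied to pixels-as-nodes, this smoothing carries regional information into each node before the graph is handed to the CNN, which is the role the abstract assigns to the pre-convolution step.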
Affiliation(s)
- M Ravinder
- CSE, Indira Gandhi Delhi Technical University for Women, New Delhi, India
- Garima Saluja
- CSE, Indira Gandhi Delhi Technical University for Women, New Delhi, India
- Sarah Allabun
- Department of Medical Education, College of Medicine, Princess Nourah bint Abdulrahman University, P.O. Box 84428, 11671, Riyadh, Saudi Arabia
- Mohammed S Alqahtani
- Radiological Sciences Department, College of Applied Medical Sciences, King Khalid University, 61421, Abha, Saudi Arabia
- BioImaging Unit, Space Research Centre, Michael Atiyah Building, University of Leicester, Leicester, LE1 7RH, UK
- Mohamed Abbas
- Electrical Engineering Department, College of Engineering, King Khalid University, 61421, Abha, Saudi Arabia
- Manal Othman
- Department of Medical Education, College of Medicine, Princess Nourah bint Abdulrahman University, P.O. Box 84428, 11671, Riyadh, Saudi Arabia
- Ben Othman Soufiene
- PRINCE Laboratory Research, ISITcom, University of Sousse, Hammam Sousse, Tunisia

15
Elshahawy M, Elnemr A, Oproescu M, Schiopu AG, Elgarayhi A, Elmogy MM, Sallah M. Early Melanoma Detection Based on a Hybrid YOLOv5 and ResNet Technique. Diagnostics (Basel) 2023; 13:2804. [PMID: 37685342] [PMCID: PMC10486497] [DOI: 10.3390/diagnostics13172804]
Abstract
Skin cancer, specifically melanoma, is a serious health issue that arises from the melanocytes, the cells that produce melanin, the pigment responsible for skin color. With skin cancer on the rise, the timely identification of skin lesions is crucial for effective treatment. However, the similarity between some skin lesions can result in misclassification, which is a significant problem. It is important to note that benign skin lesions are more prevalent than malignant ones, which can lead to overly cautious algorithms and incorrect results. As a solution, researchers are developing computer-assisted diagnostic tools to detect malignant tumors early. First, a new model based on the combination of "you only look once" (YOLOv5) and ResNet50 is proposed for melanoma detection and grading using the Human Against Machine with 10,000 training images (HAM10000) dataset. Second, feature maps integrate gradient change, which allows rapid inference, boosts precision, and reduces the number of hyperparameters in the model, making it smaller. Finally, the current YOLOv5 model is changed to obtain the desired outcomes by adding new classes for dermatoscopic images of typical lesions with pigmented skin. The proposed approach improves melanoma detection with a real-time speed of 0.4 ms of non-maximum suppression (NMS) per image. The average performance metrics are 99.0%, 98.6%, 98.8%, 99.5%, 98.3%, and 98.7% for the precision, recall, dice similarity coefficient (DSC), accuracy, mean average precision (mAP) from 0.0 to 0.5, and mAP from 0.5 to 0.95, respectively. Compared to current melanoma detection approaches, the provided approach makes more efficient use of deep features.
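The non-maximum suppression step mentioned above is standard in YOLO-style detectors; a minimal greedy sketch (NumPy; the `[x1, y1, x2, y2]` box format and the 0.5 overlap threshold are illustrative assumptions, not values from the paper):

```python
import numpy as np

def iou(a, b):
    """Intersection-over-union of two boxes given as [x1, y1, x2, y2]."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

def nms(boxes, scores, thresh=0.5):
    """Greedy NMS: keep the highest-scoring box, drop any remaining
    box that overlaps it by more than `thresh`, and repeat."""
    order = np.argsort(scores)[::-1]  # indices by descending confidence
    keep = []
    while len(order):
        i = order[0]
        keep.append(int(i))
        order = order[1:][[iou(boxes[i], boxes[j]) <= thresh for j in order[1:]]]
    return keep

boxes = np.array([[0, 0, 10, 10], [1, 1, 10, 10], [20, 20, 30, 30]], float)
scores = np.array([0.9, 0.8, 0.7])
print(nms(boxes, scores))  # [0, 2] -- box 1 is suppressed by box 0
```

The reported 0.4 ms figure refers to exactly this post-processing pass over each image's candidate detections.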
Affiliation(s)
- Manar Elshahawy
- Information Technology Department, Faculty of Computers and Information, Mansoura University, Mansoura 35516, Egypt
- Ahmed Elnemr
- Applied Mathematical Physics Research Group, Physics Department, Faculty of Science, Mansoura University, Mansoura 35516, Egypt
- Mihai Oproescu
- Faculty of Electronics, Communication, and Computer Science, University of Pitesti, 110040 Pitesti, Romania
- Adriana-Gabriela Schiopu
- Department of Manufacturing and Industrial Management, Faculty of Mechanics and Technology, University of Pitesti, 110040 Pitesti, Romania
- Ahmed Elgarayhi
- Applied Mathematical Physics Research Group, Physics Department, Faculty of Science, Mansoura University, Mansoura 35516, Egypt
- Mohammed M. Elmogy
- Information Technology Department, Faculty of Computers and Information, Mansoura University, Mansoura 35516, Egypt
- Mohammed Sallah
- Department of Physics, College of Sciences, University of Bisha, P.O. Box 344, Bisha 61922, Saudi Arabia

16
Ullah F, Nadeem M, Abrar M, Al-Razgan M, Alfakih T, Amin F, Salam A. Brain Tumor Segmentation from MRI Images Using Handcrafted Convolutional Neural Network. Diagnostics (Basel) 2023; 13:2650. [PMID: 37627909] [PMCID: PMC10453895] [DOI: 10.3390/diagnostics13162650]
Abstract
Brain tumor segmentation from magnetic resonance imaging (MRI) scans is critical for the diagnosis, treatment planning, and monitoring of therapeutic outcomes. Thus, this research introduces a novel hybrid approach that combines handcrafted features with convolutional neural networks (CNNs) to enhance the performance of brain tumor segmentation. In this study, handcrafted features were extracted from MRI scans, including intensity-based, texture-based, and shape-based features. In parallel, a unique CNN architecture was developed and trained to detect features from the data automatically. The proposed hybrid method combined the handcrafted features and the features identified by the CNN in different pathways feeding into a new CNN. In this study, the Brain Tumor Segmentation (BraTS) challenge dataset was used to measure the performance using a variety of assessment measures, such as segmentation accuracy, Dice score, sensitivity, and specificity. The achieved results showed that our proposed approach outperformed the traditional handcrafted feature-based and individual CNN-based methods used for brain tumor segmentation. In addition, the incorporation of handcrafted features enhanced the performance of the CNN, yielding a more robust and generalizable solution. This research has significant potential for real-world clinical applications where precise and efficient brain tumor segmentation is essential. Future research directions include investigating alternative feature fusion techniques and incorporating additional imaging modalities to further improve the proposed method's performance.
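In its simplest form, the fusion of handcrafted and learned features described above amounts to concatenating two feature vectors before they reach the downstream network; a minimal sketch (NumPy; the specific intensity descriptors and vector sizes are illustrative assumptions, not the paper's actual feature set):

```python
import numpy as np

def handcrafted_features(img):
    """A few intensity-based descriptors of the kind the paper fuses
    (mean, std, min, max); texture- and shape-based features would
    extend this vector in the same way."""
    return np.array([img.mean(), img.std(), img.min(), img.max()])

def fuse(handcrafted, learned):
    # the two pathways meet by simple concatenation before the new CNN
    return np.concatenate([handcrafted, learned])

img = np.arange(16.0).reshape(4, 4) / 15.0  # stand-in for an MRI patch
learned = np.zeros(8)                       # stand-in for CNN-derived features
fused = fuse(handcrafted_features(img), learned)
print(fused.shape)  # (12,)
```

The fused vector is what would be presented to the final segmentation network, letting it weigh engineered and learned evidence jointly.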
Affiliation(s)
- Faizan Ullah
- Department of Computer Science, International Islamic University, Islamabad 44000, Pakistan
- Muhammad Nadeem
- Department of Computer Science, International Islamic University, Islamabad 44000, Pakistan
- Mohammad Abrar
- Department of Computer Science, Bacha Khan University, Charsadda 24420, Pakistan
- Muna Al-Razgan
- Department of Software Engineering, College of Computer and Information Sciences, King Saud University, Riyadh 11345, Saudi Arabia
- Taha Alfakih
- Department of Information Systems, College of Computer and Information Sciences, King Saud University, Riyadh 11543, Saudi Arabia
- Farhan Amin
- Department of Information and Communication Engineering, Yeungnam University, Gyeongsan 38541, Republic of Korea
- Abdu Salam
- Department of Computer Science, Abdul Wali Khan University, Mardan 23200, Pakistan

17
Saidani O, Aljrees T, Umer M, Alturki N, Alshardan A, Khan SW, Alsubai S, Ashraf I. Enhancing Prediction of Brain Tumor Classification Using Images and Numerical Data Features. Diagnostics (Basel) 2023; 13:2544. [PMID: 37568907] [PMCID: PMC10417332] [DOI: 10.3390/diagnostics13152544]
Abstract
Brain tumors, along with other diseases that harm the neurological system, are a significant contributor to global mortality. Early diagnosis plays a crucial role in effectively treating brain tumors. To distinguish individuals with tumors from those without, this study employs a combination of image and data-based features. In the initial phase, the image dataset is enhanced, followed by the application of a UNet transfer-learning-based model to accurately classify patients as either having tumors or being normal. In the second phase, this research utilizes 13 features in conjunction with a voting classifier. The voting classifier incorporates features extracted from deep convolutional layers and combines stochastic gradient descent with logistic regression to achieve better classification results. The reported accuracy score of 0.99 achieved by both proposed models shows their superior performance. Comparisons with other supervised learning algorithms and state-of-the-art models further validate this performance.
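The voting idea described above can be sketched as soft voting over two classifiers' class probabilities (NumPy; the probability values are illustrative, and the paper's actual models are trained SGD and logistic-regression pipelines rather than fixed arrays):

```python
import numpy as np

def soft_vote(prob_a, prob_b):
    """Average the per-class probability outputs of two classifiers
    (standing in for the SGD and logistic-regression members) and
    predict the class with the highest mean probability."""
    return np.argmax((prob_a + prob_b) / 2.0, axis=1)

# per-sample class probabilities from two hypothetical member models
p_sgd = np.array([[0.6, 0.4], [0.3, 0.7]])
p_lr  = np.array([[0.8, 0.2], [0.4, 0.6]])
print(soft_vote(p_sgd, p_lr))  # [0 1]
```

Averaging tempers the individual models' errors: a sample only flips class when the combined evidence, not a single member, favors it.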
Affiliation(s)
- Oumaima Saidani
- Department of Information Systems, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, Riyadh 11671, Saudi Arabia
- Turki Aljrees
- College of Computer Science and Engineering, University of Hafr Al-Batin, Hafar Al-Batin 39524, Saudi Arabia
- Muhammad Umer
- Department of Computer Science & Information Technology, The Islamia University of Bahawalpur, Bahawalpur 63100, Pakistan
- Nazik Alturki
- Department of Information Systems, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, Riyadh 11671, Saudi Arabia
- Amal Alshardan
- Department of Information Systems, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, Riyadh 11671, Saudi Arabia
- Sardar Waqar Khan
- Department of Computer Science & Information Technology, The University of Lahore, Lahore 54000, Pakistan
- Shtwai Alsubai
- Department of Computer Science, College of Computer Engineering and Sciences, Prince Sattam bin Abdulaziz University, Al-Kharj 11942, Saudi Arabia
- Imran Ashraf
- Department of Information and Communication Engineering, Yeungnam University, Gyeongsan 38541, Republic of Korea

18
Sathya Preiya V, Kumar VDA. Deep Learning-Based Classification and Feature Extraction for Predicting Pathogenesis of Foot Ulcers in Patients with Diabetes. Diagnostics (Basel) 2023; 13:1983. [PMID: 37370878] [DOI: 10.3390/diagnostics13121983]
Abstract
The World Health Organization (WHO) has identified diabetes mellitus (DM) as one of the most prevalent diseases worldwide. Individuals with DM have a higher risk of mortality, and it is crucial to prioritize the treatment of foot ulcers, a significant complication of the disease, as they can progress to plantar ulcers that result in the need to amputate part of the foot or leg. People with diabetes are at risk of experiencing various complications, such as heart disease, eye problems, kidney dysfunction, nerve damage, skin issues, foot ulcers, and dental diseases. Unawareness of the risk associated with diabetic foot ulcers (DFU) is a significant contributing factor to the mortality of diabetic patients. Evolving technological advancements such as deep learning techniques can be used to predict the symptoms of diabetic foot ulcers as early as possible, which helps to provide effective treatment to DM patients. This research introduces a methodology for analyzing images of foot ulcers in diabetic patients, focusing on feature extraction and classification. The dataset used in this study was collected from historical medical records and foot images of patients with diabetes, who commonly experience foot ulcers as a major complication. The dataset was pre-processed and segmented, and features were extracted using a deep recurrent neural network (DRNN). Image and numerical/text data were extracted separately, and the normal and abnormal diabetes ranges were identified. Foot images of patients with abnormal diabetes ranges were separated and classified using a pre-trained fast convolutional neural network (PFCNN) with U++net. The classification procedure involves the analysis of foot ulcers to predict their pathogenesis. To assess the effectiveness of the proposed technique, the study presented simulation results, including a confusion matrix and receiver operating characteristic curve, focused on predicting two classes: normal and abnormal diabetic foot ulceration. The analysis yielded various parameters, including accuracy, precision, recall, and area under the curve. The main goal of the study was to introduce a novel technique for assessing the risk of foot ulceration development in patients with diabetes by analyzing foot ulcer images. The study assessed the accuracy of the proposed technique as 99.32% by simulating results for feature extraction and the classification of diabetic foot ulcers, and compared the proposed technique with existing approaches.
Affiliation(s)
- V Sathya Preiya
- Department of Computer Science and Engineering, Panimalar Engineering College, Anna University, Chennai 600123, India
- V D Ambeth Kumar
- Department of Computer Engineering, Mizoram University, Aizawl 796004, India

19
Muezzinoglu T, Baygin N, Tuncer I, Barua PD, Baygin M, Dogan S, Tuncer T, Palmer EE, Cheong KH, Acharya UR. PatchResNet: Multiple Patch Division-Based Deep Feature Fusion Framework for Brain Tumor Classification Using MRI Images. J Digit Imaging 2023; 36:973-987. [PMID: 36797543] [PMCID: PMC10287865] [DOI: 10.1007/s10278-023-00789-x]
Abstract
Modern computer vision algorithms are based on convolutional neural networks (CNNs), and both end-to-end learning and transfer learning modes have been used with CNNs for image classification. Thus, automated brain tumor classification models have been proposed by deploying CNNs to help medical professionals. Our primary objective is to increase the classification performance using CNN. Therefore, a patch-based deep feature engineering model has been proposed in this work. Nowadays, patch division techniques have been used to attain high classification performance, and variable-sized patches have achieved good results. In this work, we have used three types of patches of different sizes (32 × 32, 56 × 56, 112 × 112). Six feature vectors have been obtained using these patches and two layers of the pretrained ResNet50 (global average pooling and fully connected layers). In the feature selection phase, three selectors (neighborhood component analysis (NCA), Chi2, and ReliefF) have been used, and 18 final feature vectors have been obtained. By deploying k nearest neighbors (kNN), 18 results have been calculated. Iterative hard majority voting (IHMV) has been applied to compute the general classification accuracy of this framework. This model uses different patches, feature extractors (two layers of ResNet50), and selectors, making it a framework that we have named PatchResNet. A public brain image dataset containing four classes (glioblastoma multiforme (GBM), meningioma, pituitary tumor, healthy) has been used to develop the proposed PatchResNet model. Our proposed PatchResNet attained 98.10% classification accuracy on the public brain tumor image dataset. The developed PatchResNet model obtained high classification accuracy and has the advantage of being a self-organized framework. Therefore, the proposed method can choose the best validation prediction vectors and achieve high image classification performance.
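The multi-size patch division at the heart of PatchResNet can be sketched as follows (NumPy; the 224 × 224 input size is an illustrative assumption, chosen because it divides evenly by 32, 56, and 112):

```python
import numpy as np

def divide_into_patches(img, size):
    """Split an image into non-overlapping size x size patches,
    discarding any remainder at the right and bottom edges."""
    h, w = img.shape[:2]
    return [img[r:r + size, c:c + size]
            for r in range(0, h - size + 1, size)
            for c in range(0, w - size + 1, size)]

img = np.zeros((224, 224))
for s in (32, 56, 112):
    print(s, len(divide_into_patches(img, s)))
# 32 -> 49 patches, 56 -> 16 patches, 112 -> 4 patches
```

Each patch set would then be pushed through the pretrained feature extractor, yielding the multiple feature vectors that the selectors and the kNN/IHMV stages operate on.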
Affiliation(s)
- Taha Muezzinoglu
- Department of Computer Engineering, Faculty of Engineering, Munzur University, Tunceli, Turkey
- Nursena Baygin
- Department of Computer Engineering, Faculty of Engineering, Erzurum Technical University, Erzurum, Turkey
- Prabal Datta Barua
- School of Management & Enterprise, University of Southern Queensland, Toowoomba, Australia
- Faculty of Engineering and Information Technology, University of Technology Sydney, Ultimo, Australia
- Mehmet Baygin
- Department of Computer Engineering, Faculty of Engineering, Ardahan University, Ardahan, Turkey
- Sengul Dogan
- Department of Digital Forensics Engineering, College of Technology, Firat University, Elazig, Turkey
- Turker Tuncer
- Department of Digital Forensics Engineering, College of Technology, Firat University, Elazig, Turkey
- Elizabeth Emma Palmer
- Centre of Clinical Genetics, Sydney Children’s Hospitals Network, Randwick, 2031 Australia
- School of Women’s and Children’s Health, University of New South Wales, Randwick, 2031 Australia
- Kang Hao Cheong
- Science, Mathematics and Technology Cluster, Singapore University of Technology and Design, Singapore, S487372 Singapore
- U. Rajendra Acharya
- Department of Electronics and Computer Engineering, Ngee Ann Polytechnic, Singapore, 599489 Singapore
- Department of Biomedical Engineering, School of Science and Technology, SUSS University, Singapore, Singapore
- Department of Biomedical Informatics and Medical Engineering, Asia University, Taichung, Taiwan

20
Rajinikanth V, Vincent PMDR, Gnanaprakasam CN, Srinivasan K, Chang CY. Brain Tumor Class Detection in Flair/T2 Modality MRI Slices Using Elephant-Herd Algorithm Optimized Features. Diagnostics (Basel) 2023; 13:1832. [PMID: 37296683] [DOI: 10.3390/diagnostics13111832]
Abstract
Several advances in computing facilities were made due to the advancement of science and technology, including the implementation of automation in multi-specialty hospitals. This research aims to develop an efficient deep-learning-based brain-tumor (BT) detection scheme to detect the tumor in FLAIR- and T2-modality magnetic-resonance-imaging (MRI) slices. MRI slices of the axial-plane brain are used to test and verify the scheme. The reliability of the developed scheme is also verified through clinically collected MRI slices. In the proposed scheme, the following stages are involved: (i) pre-processing the raw MRI image, (ii) deep-feature extraction using pretrained schemes, (iii) watershed-algorithm-based BT segmentation and mining the shape features, (iv) feature optimization using the elephant-herding algorithm (EHA), and (v) binary classification and verification using three-fold cross-validation. Using (a) individual features, (b) dual deep features, and (c) integrated features, the BT-classification task is accomplished in this study. Each experiment is conducted separately on the chosen BRATS and TCIA benchmark MRI slices. This research indicates that the integrated feature-based scheme helps to achieve a classification accuracy of 99.6667% when a support-vector-machine (SVM) classifier is considered. Further, the performance of this scheme is verified using noise-attacked MRI slices, and better classification results are achieved.
Affiliation(s)
- Venkatesan Rajinikanth
- Department of Computer Science and Engineering, Division of Research and Innovation, Saveetha School of Engineering, Saveetha Institute of Medical and Technical Sciences, Chennai 602105, India
- P M Durai Raj Vincent
- School of Information Technology and Engineering, Vellore Institute of Technology, Vellore 632014, India
- C N Gnanaprakasam
- Department of Electronics and Instrumentation Engineering, St. Joseph's College of Engineering, OMR, Chennai 600119, India
- Kathiravan Srinivasan
- School of Computer Science and Engineering, Vellore Institute of Technology, Vellore 632014, India
- Chuan-Yu Chang
- Department of Computer Science and Information Engineering, National Yunlin University of Science and Technology, Yunlin 64002, Taiwan
- Service Systems Technology Center, Industrial Technology Research Institute, Hsinchu 310401, Taiwan

21
Srinivasan S, Bai PSM, Mathivanan SK, Muthukumaran V, Babu JC, Vilcekova L. Grade Classification of Tumors from Brain Magnetic Resonance Images Using a Deep Learning Technique. Diagnostics (Basel) 2023; 13:1153. [PMID: 36980463] [PMCID: PMC10046932] [DOI: 10.3390/diagnostics13061153]
Abstract
To improve the accuracy of tumor identification, it is necessary to develop a reliable automated diagnostic method. In order to precisely categorize brain tumors, researchers have developed a variety of segmentation algorithms. Segmentation of brain images is generally recognized as one of the most challenging tasks in medical image processing. In this article, a novel automated detection and classification method is proposed. The proposed approach consists of several phases, including pre-processing MRI images, segmenting images, extracting features, and classifying images. During the pre-processing portion of an MRI scan, an adaptive filter was utilized to eliminate background noise. For feature extraction, the local-binary grey level co-occurrence matrix (LBGLCM) was used, and for image segmentation, enhanced fuzzy c-means clustering (EFCMC) was used. After extracting the scan features, we used a deep learning model to classify MRI images into two groups: glioma and normal. The classifications were created using a convolutional recurrent neural network (CRNN). The proposed technique improved brain image classification from a defined input dataset. MRI scans from the REMBRANDT dataset, which consisted of 620 test images and 2480 training images, were used for the research. The results demonstrate that the newly proposed method outperformed its predecessors. The proposed CRNN strategy was compared against BP, U-Net, and ResNet, three of the most prevalent classification approaches currently in use. For brain tumor classification, the proposed system achieved 98.17% accuracy, 91.34% specificity, and 98.79% sensitivity.
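LBGLCM combines local binary patterns with the grey-level co-occurrence matrix; the co-occurrence part can be sketched as follows (NumPy; plain GLCM for a single pixel offset only — the local-binary preprocessing step of the paper's variant is omitted here):

```python
import numpy as np

def glcm(img, levels, dr=0, dc=1):
    """Grey-level co-occurrence matrix for one offset (default: the
    right-hand neighbour).  Entry (i, j) counts the pixel pairs whose
    values are i and j at that relative offset."""
    M = np.zeros((levels, levels), dtype=int)
    h, w = img.shape
    for r in range(max(0, -dr), h - max(0, dr)):
        for c in range(max(0, -dc), w - max(0, dc)):
            M[img[r, c], img[r + dr, c + dc]] += 1
    return M

img = np.array([[0, 0, 1],
                [1, 0, 0]])
print(glcm(img, levels=2))
# [[2 1]
#  [1 0]] -- e.g. the pair (0, 0) occurs twice, (0, 1) once, (1, 0) once
```

Texture statistics such as contrast, energy, and homogeneity are then computed from this matrix and serve as the classifier's input features.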
Affiliation(s)
- Saravanan Srinivasan
- Department of Computer Science and Engineering, Vel Tech Rangarajan Dr. Sagunthala R&D Institute of Science and Technology, Chennai 600062, India
- Sandeep Kumar Mathivanan
- School of Information Technology and Engineering, Vellore Institute of Technology, Vellore 632014, India
- Venkatesan Muthukumaran
- Department of Mathematics, College of Engineering and Technology, SRM Institute of Science and Technology, Kattankulathur 603203, India
- Jyothi Chinna Babu
- Department of Electronics and Communications Engineering, Annamacharya Institute of Technology and Sciences, Rajampet 516126, India
- Lucia Vilcekova
- Faculty of Management, Comenius University Bratislava, Odbojarov 10, 820 05 Bratislava, Slovakia

22
Alturki N, Umer M, Ishaq A, Abuzinadah N, Alnowaiser K, Mohamed A, Saidani O, Ashraf I. Combining CNN Features with Voting Classifiers for Optimizing Performance of Brain Tumor Classification. Cancers (Basel) 2023; 15:1767. [PMID: 36980653] [PMCID: PMC10046217] [DOI: 10.3390/cancers15061767]
Abstract
Brain tumors and other nervous system cancers are among the top ten leading fatal diseases. The effective treatment of brain tumors depends on their early detection. This research work makes use of 13 features with a voting classifier that combines logistic regression with stochastic gradient descent, using features extracted by deep convolutional layers, for the efficient classification of tumor patients from normal subjects. From the first- and second-order brain tumor features, deep convolutional features are extracted for model training. Using deep convolutional features helps to increase the precision of tumor and non-tumor patient classification. The proposed voting classifier along with convolutional features produces results that show the highest accuracy of 99.9%. Compared to cutting-edge methods, the proposed approach has demonstrated improved accuracy.
Affiliation(s)
- Nazik Alturki
- Department of Information Systems, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
- Muhammad Umer
- Department of Computer Science & Information Technology, The Islamia University of Bahawalpur, Bahawalpur 63100, Pakistan
- Abid Ishaq
- Department of Computer Science & Information Technology, The Islamia University of Bahawalpur, Bahawalpur 63100, Pakistan
- Nihal Abuzinadah
- Faculty of Computer Science and Information Technology, King Abdulaziz University, P.O. Box 80200, Jeddah 21589, Saudi Arabia
- Khaled Alnowaiser
- Department of Computer Engineering, College of Computer Engineering and Sciences, Prince Sattam Bin Abdulaziz University, Al-Kharj 11942, Saudi Arabia
- Abdullah Mohamed
- Research Centre, Future University in Egypt, New Cairo 11745, Egypt
- Oumaima Saidani
- Department of Information Systems, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
- Imran Ashraf
- Department of Information and Communication Engineering, Yeungnam University, Gyeongsan 38541, Republic of Korea
23
Taşcı B. Attention Deep Feature Extraction from Brain MRIs in Explainable Mode: DGXAINet. Diagnostics (Basel) 2023; 13:859. [PMID: 36900004] [PMCID: PMC10000758] [DOI: 10.3390/diagnostics13050859] [Received: 01/24/2023] [Revised: 02/09/2023] [Accepted: 02/17/2023] [Indexed: 03/07/2023]
Abstract
Artificial intelligence models do not provide information about exactly how their predictions are reached. This lack of transparency is a major drawback. Particularly in medical applications, interest in explainable artificial intelligence (XAI), which helps to develop methods for visualizing, explaining, and analyzing deep learning models, has increased recently. With explainable artificial intelligence, it is possible to understand whether the solutions offered by deep learning techniques are safe. This paper aims to diagnose a fatal disease such as a brain tumor faster and more accurately using XAI methods. In this study, we used datasets that are widely used in the literature: the four-class Kaggle brain tumor dataset (Dataset I) and the three-class figshare brain tumor dataset (Dataset II). To extract features, a pre-trained deep learning model is chosen; DenseNet201 is used as the feature extractor in this case. The proposed automated brain tumor detection model includes five stages. First, after training DenseNet201 on brain MR images, the tumor area was segmented with GradCAM. Features were then extracted from the trained DenseNet201 using the exemplar method, and the extracted features were selected with the iterative neighborhood component analysis (INCA) feature selector. Finally, the selected features were classified using a support vector machine (SVM) with 10-fold cross-validation. Accuracies of 98.65% and 99.97% were obtained for Datasets I and II, respectively. The proposed model obtained higher performance than the state-of-the-art methods and can be used to aid radiologists in their diagnosis.
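The INCA stage admits a simple sketch: rank features once, then iteratively grow the top-k subset and keep the k that maximizes cross-validated accuracy. The synthetic "deep features" and the nearest-centroid classifier below are illustrative stand-ins (the paper uses DenseNet201 features and an SVM):

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic stand-in for deep features: 2 informative dims + 8 noise dims.
n = 200
y = rng.integers(0, 2, n)
X = rng.normal(size=(n, 10))
X[:, 0] += 3 * y
X[:, 1] -= 3 * y

def nearest_centroid_cv_acc(X, y, folds=10):
    """Mean accuracy of a nearest-centroid classifier under k-fold CV."""
    idx = np.arange(len(y))
    accs = []
    for f in range(folds):
        test = idx % folds == f
        train = ~test
        c0 = X[train][y[train] == 0].mean(axis=0)
        c1 = X[train][y[train] == 1].mean(axis=0)
        d0 = ((X[test] - c0) ** 2).sum(axis=1)
        d1 = ((X[test] - c1) ** 2).sum(axis=1)
        pred = (d1 < d0).astype(int)
        accs.append((pred == y[test]).mean())
    return float(np.mean(accs))

# Rank features by absolute class-mean difference, then the iterative loop:
score = np.abs(X[y == 0].mean(0) - X[y == 1].mean(0))
order = np.argsort(score)[::-1]
best_k, best_acc = 1, 0.0
for k in range(1, X.shape[1] + 1):
    acc = nearest_centroid_cv_acc(X[:, order[:k]], y)
    if acc > best_acc:
        best_k, best_acc = k, acc
```

The informative dimensions dominate the ranking, so the selected subset stays small while the cross-validated accuracy stays high.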
Affiliation(s)
- Burak Taşcı
- Vocational School of Technical Sciences, Firat University, Elazig 23119, Turkey
24
Papadomanolakis TN, Sergaki ES, Polydorou AA, Krasoudakis AG, Makris-Tsalikis GN, Polydorou AA, Afentakis NM, Athanasiou SA, Vardiambasis IO, Zervakis ME. Tumor Diagnosis against Other Brain Diseases Using T2 MRI Brain Images and CNN Binary Classifier and DWT. Brain Sci 2023; 13:348. [PMID: 36831891] [PMCID: PMC9954603] [DOI: 10.3390/brainsci13020348] [Received: 09/07/2022] [Revised: 02/08/2023] [Accepted: 02/14/2023] [Indexed: 02/22/2023]
Abstract
PURPOSE Brain tumors are diagnosed and classified manually and noninvasively by radiologists using Magnetic Resonance Imaging (MRI) data. A risk of misdiagnosis exists due to human factors such as lack of time, fatigue, and relatively low experience. Deep learning methods have become increasingly important in MRI classification. To improve diagnostic accuracy, researchers emphasize the need to develop Computer-Aided Diagnosis (CAD) systems based on artificial intelligence (AI) by using deep learning methods such as convolutional neural networks (CNNs) and by improving CNN performance through combination with other data analysis tools such as the wavelet transform. In this study, a novel diagnostic framework based on CNN and DWT data analysis is developed for diagnosing glioma tumors in the brain, among other tumors and diseases, with T2-SWI MRI scans. It is a binary CNN classifier that treats the disease "glioma tumor" as positive and the other pathologies as negative, resulting in a very unbalanced binary problem. The study includes a comparative analysis of a CNN trained on the wavelet-transform data of the MRIs instead of their pixel intensity values, in order to demonstrate the increased performance of combining CNN and DWT analysis in diagnosing brain gliomas. The results of the proposed CNN architecture are also compared with a deep CNN based on the pre-trained VGG16 transfer learning network and with the SVM machine learning method using DWT knowledge. METHODS To improve the accuracy of the CNN classifier, the proposed CNN model uses as input the spatial and frequency features extracted by converting the original MRI images to the frequency domain through Discrete Wavelet Transformation (DWT), instead of the traditionally used pixel intensities. Moreover, no pre-processing was applied to the original images. The images used are T2-SWI MRI sequences parallel to the axial plane.
Firstly, a compression step is applied to each MRI scan using DWT with up to three levels of decomposition. These data are used to train a 2D CNN to classify the scans as showing glioma or not. The proposed CNN model is trained on MRI slices originating from 382 male and female adult patients, covering healthy images and pathological images from a selection of conditions (glioma, meningioma, pituitary tumor, necrosis, edema, non-enhancing tumor, hemorrhagic foci, ischemic changes, cystic areas, etc.). The images are provided by the Medical Image Computing and Computer-Assisted Intervention (MICCAI) Brain Tumor Segmentation (BraTS) 2016 and 2017 challenges and the Ischemic Stroke Lesion Segmentation (ISLES) challenge, as well as by the numerous records kept at the public general hospital of Chania, Crete, "Saint George". RESULTS The proposed frameworks are experimentally evaluated on MRI slices from 190 different patients (not included in the training set), of which 56% show gliomas whose two longest axes are less than 2 cm and 44% show other pathological effects or healthy cases. The results show convincing performance when the spatial and frequency features extracted from the original scans are used as input. With the proposed CNN model and data in DWT format, we achieved the following statistics: accuracy 0.97, sensitivity (recall) 1, specificity 0.93, precision 0.95, FNR 0, and FPR 0.07. These numbers are higher than they would be had we used the intensity values of the MRIs as input instead of their DWT analysis (accuracy higher by 6%, recall by 11%, specificity by 7%, precision by 5%, FNR by 0.1%, and FPR the same).
Additionally, our study showed that when our CNN uses transfer learning (TL) from the existing VGG network, the performance values are lower, as follows: accuracy 0.87, sensitivity (recall) 0.91, specificity 0.84, precision 0.86, FNR 0.08, and FPR 0.14. CONCLUSIONS The experimental results show that the CNN that is not based on transfer learning, but uses the DWT decomposition of the MRI brain scans as input instead of the pixel intensities of the original scans, outperforms. The results are promising for the proposed DWT-based CNN to serve as a binary diagnostic for glioma tumors among other tumors and diseases. Moreover, the SVM learning model using DWT data analysis performs with higher accuracy and sensitivity than when using pixel values.
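The DWT compression step can be illustrated with a single level of the 2D Haar transform (the paper applies up to three levels; this numpy-only version is a sketch, not the authors' pipeline):

```python
import numpy as np

def haar_dwt2(img):
    """One level of a 2D Haar DWT: returns the approximation (LL) subband
    and the (LH, HL, HH) detail subbands, each a quarter of the input size.
    Assumes even image dimensions."""
    a = (img[0::2] + img[1::2]) / 2.0   # row lowpass
    d = (img[0::2] - img[1::2]) / 2.0   # row highpass
    LL = (a[:, 0::2] + a[:, 1::2]) / 2.0
    LH = (a[:, 0::2] - a[:, 1::2]) / 2.0
    HL = (d[:, 0::2] + d[:, 1::2]) / 2.0
    HH = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return LL, (LH, HL, HH)
```

Applying the function recursively to LL yields the multi-level decomposition the study feeds to its 2D CNN.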
Affiliation(s)
- Eleftheria S. Sergaki
- School of Electrical and Computer Engineering, Technical University of Crete, 73100 Chania, Greece
- Andreas A. Polydorou
- Areteio Hospital, 2nd University Department of Surgery, Medical School, National and Kapodistrian University of Athens, 11528 Athens, Greece
- Alexios A. Polydorou
- Medical School, National and Kapodistrian University of Athens, 11528 Athens, Greece
- Nikolaos M. Afentakis
- Department of Electronic Engineering, Hellenic Mediterranean University, 73133 Chania, Greece
- Sofia A. Athanasiou
- Department of Electronic Engineering, Hellenic Mediterranean University, 73133 Chania, Greece
- Ioannis O. Vardiambasis
- Department of Electronic Engineering, Hellenic Mediterranean University, 73133 Chania, Greece
- Michail E. Zervakis
- School of Electrical and Computer Engineering, Technical University of Crete, 73100 Chania, Greece
25
Investigating the Impact of Two Major Programming Environments on the Accuracy of Deep Learning-Based Glioma Detection from MRI Images. Diagnostics (Basel) 2023; 13:651. [PMID: 36832138] [PMCID: PMC9955350] [DOI: 10.3390/diagnostics13040651] [Received: 12/19/2022] [Revised: 02/04/2023] [Accepted: 02/07/2023] [Indexed: 02/12/2023]
Abstract
Brain tumors have been the subject of research for many years. Brain tumors are typically classified into two main groups: benign and malignant tumors. The most common tumor type among malignant brain tumors is known as glioma. In the diagnosis of glioma, different imaging technologies can be used. Among these techniques, MRI is the most preferred imaging technology due to its high-resolution image data. However, the detection of gliomas from a huge set of MRI data can be challenging for practitioners. To address this concern, many Deep Learning (DL) models based on Convolutional Neural Networks (CNNs) have been proposed for detecting glioma. However, understanding which CNN architecture would work efficiently under various conditions, including the development environment or programming aspects as well as performance analysis, has not been studied so far. In this research work, therefore, the purpose is to investigate the impact of two major programming environments (namely, MATLAB and Python) on the accuracy of CNN-based glioma detection from Magnetic Resonance Imaging (MRI) images. To this end, experiments on the Brain Tumor Segmentation (BraTS) dataset (2016 and 2017), consisting of multiparametric MRI images, are performed by implementing two popular CNN architectures, the three-dimensional (3D) U-Net and the V-Net, in the two programming environments. From the results, it is concluded that the use of Python with Google Colaboratory (Colab) might be highly useful in the implementation of CNN-based models for glioma detection. Moreover, the 3D U-Net model is found to perform better, attaining high accuracy on the dataset. The authors believe that the results achieved from this study will provide useful information to the research community for the appropriate implementation of DL approaches in brain tumor detection.
26
Multiple Brain Tumor Classification with Dense CNN Architecture Using Brain MRI Images. Life (Basel) 2023; 13:349. [PMID: 36836705] [PMCID: PMC9964555] [DOI: 10.3390/life13020349] [Received: 12/06/2022] [Revised: 01/17/2023] [Accepted: 01/22/2023] [Indexed: 02/03/2023]
Abstract
Brain MR images are the most suitable method for detecting chronic nerve diseases such as brain tumors, strokes, dementia, and multiple sclerosis. They are also the most sensitive method for evaluating diseases of the pituitary gland, brain vessels, eye, and inner ear organs. Many medical image analysis methods based on deep learning techniques have been proposed for health monitoring and diagnosis from brain MRI images. CNNs (Convolutional Neural Networks) are a sub-branch of deep learning and are often used to analyze visual information. Common uses include image and video recognition, recommender systems, image classification, medical image analysis, and natural language processing. In this study, a new modular deep learning model was created to retain the existing advantages of known transfer learning methods (DenseNet, VGG16, and basic CNN architectures) in the classification of MR images while eliminating their disadvantages. Open-source brain tumor images taken from the Kaggle database were used. Two types of splitting were utilized for training the model: first, 80% of the MRI image dataset was used in the training phase and 20% in the testing phase; second, 10-fold cross-validation was used. When the proposed deep learning model and the other known transfer learning methods were tested on the same MRI dataset, an improvement in classification performance was obtained, but an increase in processing time was observed.
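The second data-splitting protocol mentioned above, 10-fold cross-validation, amounts to partitioning the sample indices into k disjoint folds and rotating the held-out fold. A minimal numpy sketch, independent of the authors' implementation:

```python
import numpy as np

def kfold_indices(n, k=10, seed=0):
    """Yield (train_idx, test_idx) pairs for k-fold cross-validation:
    shuffle the n indices once, split into k folds, and hold out each
    fold in turn while training on the remaining k-1."""
    idx = np.random.default_rng(seed).permutation(n)
    folds = np.array_split(idx, k)
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        yield train, test
```

Each sample appears in exactly one test fold, so every data point is used for evaluation once, unlike a single 80/20 split.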
27
Elsheakh DN, Mohamed RA, Fahmy OM, Ezzat K, Eldamak AR. Complete Breast Cancer Detection and Monitoring System by Using Microwave Textile Based Antenna Sensors. Biosensors (Basel) 2023; 13:87. [PMID: 36671922] [PMCID: PMC9855354] [DOI: 10.3390/bios13010087] [Received: 12/01/2022] [Revised: 12/27/2022] [Accepted: 12/29/2022] [Indexed: 06/17/2023]
Abstract
This paper presents the development of a new complete wearable system for detecting breast tumors based on fully textile antenna-based sensors. The proposed sensor is compact and fully made of textiles, so that it fits conformably and comfortably on the breast, with dimensions of 24 × 45 × 0.17 mm3 on a cotton substrate. The proposed antenna sensor is fed with a coplanar waveguide for easy integration with other systems. It realizes an impedance bandwidth from 1.6 GHz up to 10 GHz at |S11| ≤ -6 dB (VSWR ≤ 3), and from 1.8 to 2.4 GHz and from 4 up to 10 GHz at |S11| ≤ -10 dB (VSWR ≤ 2). The proposed sensor achieves a low specific absorption rate (SAR) of 0.55 W/kg and 0.25 W/kg at 1 g and 10 g, respectively, at a 25 dBm power level over the operating band. Furthermore, the proposed system utilizes machine-learning algorithms (MLA) to differentiate between malignant tumor and benign breast tissues. Simulation examples have been recorded to verify and validate the machine-learning algorithms in detecting tumors of two different sizes, 10 mm and 20 mm. The classification accuracy reached 100% on the tested dataset when considering |S21| parameter features. The proposed system is envisioned as a "Smart Bra" capable of providing an easy interface for women who require continuous breast monitoring in the comfort of their homes.
Affiliation(s)
- Dalia N. Elsheakh
- Department of Electrical Engineering, Faculty of Engineering and Technology, Badr University in Cairo, Badr City 11829, Egypt
- Microstrip Department, Electronics Research Institute, Nozha, Cairo 11843, Egypt
- Rawda A. Mohamed
- Department of Electrical Engineering, Faculty of Engineering and Technology, Badr University in Cairo, Badr City 11829, Egypt
- Omar M. Fahmy
- Department of Electrical Engineering, Faculty of Engineering and Technology, Badr University in Cairo, Badr City 11829, Egypt
- Khaled Ezzat
- Department of Electrical Engineering, Faculty of Engineering and Technology, Badr University in Cairo, Badr City 11829, Egypt
- Angie R. Eldamak
- Electronics and Communications Engineering Department, Faculty of Engineering, Ain Shams University, Cairo 11517, Egypt
28
Ullah N, Khan JA, El-Sappagh S, El-Rashidy N, Khan MS. A Holistic Approach to Identify and Classify COVID-19 from Chest Radiographs, ECG, and CT-Scan Images Using ShuffleNet Convolutional Neural Network. Diagnostics (Basel) 2023; 13:162. [PMID: 36611454] [PMCID: PMC9818310] [DOI: 10.3390/diagnostics13010162] [Received: 11/29/2022] [Revised: 12/21/2022] [Accepted: 12/28/2022] [Indexed: 01/05/2023]
Abstract
Early and precise COVID-19 identification and analysis are pivotal in reducing the spread of COVID-19. Medical imaging techniques, such as chest X-ray or chest radiographs, computed tomography (CT) scan, and electrocardiogram (ECG) trace images, are the most widely known for early discovery and analysis of the coronavirus disease (COVID-19). Deep learning (DL) frameworks for identifying COVID-19 positive patients in the literature are limited to one data format, either ECG or chest radiograph images. Moreover, using several data types to recover abnormal patterns caused by COVID-19 could potentially provide more information and restrict the spread of the virus. This study presents an effective COVID-19 detection and classification approach using the ShuffleNet CNN by employing three types of images, i.e., chest radiograph, CT-scan, and ECG-trace images. For this purpose, we performed extensive classification experiments with the proposed approach using each type of image. With the chest radiograph dataset, we performed three classification experiments at different levels of granularity, i.e., binary, three-class, and four-class classifications. In addition, we performed a binary classification experiment with the proposed approach by classifying CT-scan images into COVID-positive and normal. Finally, utilizing the ECG-trace images, we conducted three experiments at different levels of granularity, i.e., binary, three-class, and five-class classifications. We evaluated the proposed approach with the baseline COVID-19 Radiography Database, SARS-CoV-2 CT-scan, and ECG images dataset of cardiac and COVID-19 patients. The average accuracy of 99.98% for COVID-19 detection in the three-class classification scheme using chest radiographs, the optimal accuracy of 100% for COVID-19 detection using CT scans, and the average accuracy of 99.37% for the five-class classification scheme using ECG trace images have proved the efficacy of our proposed method over the contemporary methods.
The optimal accuracy of 100% for COVID-19 detection using CT scans and the accuracy gain of 1.54% (in the case of five-class classification using ECG trace images) over the previous approach, which utilized ECG images for the first time, make a major contribution to improving the COVID-19 prediction rate in the early stages. Experimental findings demonstrate that the proposed framework outperforms contemporary models; for example, it outperforms state-of-the-art DL approaches such as SqueezeNet, AlexNet, and Darknet19, achieving an accuracy of 99.98% versus 98.29%, 98.50%, and 99.67%, respectively.
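The channel-shuffle operation that gives ShuffleNet its name (and its efficiency with grouped convolutions) is a reshape-transpose-reshape over the channel axis; a numpy sketch, independent of the authors' implementation:

```python
import numpy as np

def channel_shuffle(x, groups):
    """ShuffleNet channel shuffle: interleave channels across groups so that
    information flows between grouped convolutions. x has shape (N, C, H, W)
    and C must be divisible by groups."""
    n, c, h, w = x.shape
    return x.reshape(n, groups, c // groups, h, w).swapaxes(1, 2).reshape(n, c, h, w)
```

For six channels and two groups, the channel order [0, 1, 2, 3, 4, 5] becomes [0, 3, 1, 4, 2, 5], so each output group mixes channels from every input group.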
Affiliation(s)
- Naeem Ullah
- Department of Software Engineering, University of Engineering and Technology Taxila, Taxila 47050, Pakistan
- Javed Ali Khan
- Department of Software Engineering, University of Science and Technology Bannu, Bannu 28100, Pakistan
- Shaker El-Sappagh
- Faculty of Computer Science and Engineering, Galala University, Suez 435611, Egypt
- Information Systems Department, Faculty of Computers and Artificial Intelligence, Benha University, Banha 13518, Egypt
- Nora El-Rashidy
- Department of Machine Learning and Information Retrieval, Faculty of Artificial Intelligence, Kafrelsheikh University, Kafr Elsheikh 33516, Egypt
- Mohammad Sohail Khan
- Department of Computer Software Engineering, University of Engineering and Technology Mardan, Mardan 23200, Pakistan
29
Qi J, Ruan G, Liu J, Yang Y, Cao Q, Wei Y, Nian Y. PHF3 Technique: A Pyramid Hybrid Feature Fusion Framework for Severity Classification of Ulcerative Colitis Using Endoscopic Images. Bioengineering (Basel) 2022; 9:632. [PMID: 36354543] [PMCID: PMC9687195] [DOI: 10.3390/bioengineering9110632] [Received: 09/02/2022] [Revised: 10/27/2022] [Accepted: 10/31/2022] [Indexed: 08/25/2024]
Abstract
Evaluating the severity of ulcerative colitis (UC) through the Mayo endoscopic subscore (MES) is crucial for understanding patient conditions and providing effective treatment. However, UC lesions present different characteristics in endoscopic images, exacerbating the interclass similarities and intraclass differences in MES classification. In addition, inexperience and review fatigue among endoscopists introduce nontrivial challenges to the reliability and repeatability of MES evaluations. In this paper, we propose a pyramid hybrid feature fusion framework (PHF3) as an auxiliary diagnostic tool for clinical UC severity classification. Specifically, the PHF3 model has a dual-branch hybrid architecture combining ResNet50 and a pyramid vision Transformer (PvT), where the local features extracted by ResNet50 represent the relationship between the intestinal wall at the near-shot point and its depth, and the global representations modeled by the PvT capture similar information in the cross-section of the intestinal cavity. Furthermore, a feature fusion module (FFM) is designed to combine the local features with the global representations, while second-order pooling (SOP) is applied to enhance discriminative information in the classification process. The experimental results show that, compared with existing methods, the proposed PHF3 model has competitive performance. The area under the receiver operating characteristic curve (AUC) for MES 0, MES 1, MES 2, and MES 3 reached 0.996, 0.972, 0.967, and 0.990, respectively, and the overall accuracy reached 88.91%. Thus, our proposed method is valuable for developing an auxiliary assessment system for UC severity.
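The second-order pooling (SOP) step mentioned above can be sketched as covariance pooling: flatten the spatial grid and compute channel-by-channel covariances, so the classifier sees channel co-activations rather than first-order averages. The feature-map shape below is illustrative, not taken from the paper:

```python
import numpy as np

def second_order_pool(feat):
    """Second-order (covariance) pooling: collapse an (H, W, C) feature map
    into a C x C covariance matrix capturing channel co-activations."""
    h, w, c = feat.shape
    f = feat.reshape(h * w, c)
    f = f - f.mean(axis=0, keepdims=True)   # center each channel
    return f.T @ f / (h * w - 1)

pooled = second_order_pool(np.arange(48, dtype=float).reshape(4, 4, 3))
```

The result is symmetric with channel variances on the diagonal; in practice it is usually flattened (or matrix-normalized) before the final classification layer.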
Affiliation(s)
- Jing Qi
- Department of Digital Medicine, School of Biomedical Engineering and Imaging Medicine, Army Medical University (Third Military Medical University), Chongqing 400038, China
- Guangcong Ruan
- Department of Gastroenterology, Daping Hospital, Army Medical University (Third Military Medical University), Chongqing 400042, China
- Jia Liu
- Department of Digital Medicine, School of Biomedical Engineering and Imaging Medicine, Army Medical University (Third Military Medical University), Chongqing 400038, China
- Yi Yang
- Department of Digital Medicine, School of Biomedical Engineering and Imaging Medicine, Army Medical University (Third Military Medical University), Chongqing 400038, China
- Qian Cao
- Department of Gastroenterology, Sir Run Run Shaw Hospital, Zhejiang University School of Medicine, Hangzhou 310016, China
- Yanling Wei
- Department of Gastroenterology, Daping Hospital, Army Medical University (Third Military Medical University), Chongqing 400042, China
- Yongjian Nian
- Department of Digital Medicine, School of Biomedical Engineering and Imaging Medicine, Army Medical University (Third Military Medical University), Chongqing 400038, China
30
Samee NA, Mahmoud NF, Atteia G, Abdallah HA, Alabdulhafith M, Al-Gaashani MSAM, Ahmad S, Muthanna MSA. Classification Framework for Medical Diagnosis of Brain Tumor with an Effective Hybrid Transfer Learning Model. Diagnostics (Basel) 2022; 12:2541. [PMID: 36292230] [PMCID: PMC9600529] [DOI: 10.3390/diagnostics12102541] [Received: 09/25/2022] [Revised: 10/13/2022] [Accepted: 10/13/2022] [Indexed: 11/16/2022]
Abstract
Brain tumors (BTs) are deadly diseases that can strike people of every age, all over the world. Every year, thousands of people die of brain tumors. Brain-related diagnoses require caution, and even the smallest error in diagnosis can have negative repercussions. Medical errors in brain tumor diagnosis are common and frequently result in higher patient mortality rates. Magnetic resonance imaging (MRI) is widely used for tumor evaluation and detection. However, MRI generates large amounts of data, making manual segmentation difficult and laborious, which limits the use of accurate measurements in clinical practice. As a result, automated and dependable segmentation methods are required. Automatic segmentation and early detection of brain tumors are difficult tasks in computer vision due to their high spatial and structural variability. Therefore, early diagnosis or detection and treatment are critical. Various traditional machine learning (ML) techniques have been used to detect various types of brain tumors. The main issue with these models is that the features were manually extracted. To address the aforementioned issues, this paper presents a hybrid deep transfer learning (GN-AlexNet) model for BT tri-classification (pituitary, meningioma, and glioma). The proposed model combines the GoogleNet architecture with the AlexNet model by removing five layers of GoogleNet and adding ten layers of the AlexNet model, which extracts features and classifies them automatically. On the same CE-MRI dataset, the proposed model was compared to transfer learning techniques (VGG-16, AlexNet, SqueezeNet, ResNet, and MobileNet-V2) and ML/DL models. The proposed model outperformed the current methods in terms of accuracy and sensitivity (accuracy of 99.51% and sensitivity of 98.90%).
Affiliation(s)
- Nagwan Abdel Samee
- Department of Information Technology, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
- Noha F. Mahmoud
- Rehabilitation Sciences Department, Health and Rehabilitation Sciences College, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
- Correspondence: (N.F.M.); (G.A.)
- Ghada Atteia
- Department of Information Technology, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
- Correspondence: (N.F.M.); (G.A.)
- Hanaa A. Abdallah
- Department of Information Technology, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
- Maali Alabdulhafith
- Department of Information Technology, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
- Mehdhar S. A. M. Al-Gaashani
- College of Computer Science and Technology, Chongqing University of Posts and Telecommunications, Chongqing 400065, China
- Shahab Ahmad
- School of Economics & Management, Chongqing University of Post and Telecommunication, Chongqing 400065, China
- Mohammed Saleh Ali Muthanna
- Institute of Computer Technologies and Information Security, Southern Federal University, 347922 Taganrog, Russia
31
Development and Validation of Embedded Device for Electrocardiogram Arrhythmia Empowered with Transfer Learning. Comput Intell Neurosci 2022; 2022:5054641. [PMID: 36268157] [PMCID: PMC9578866] [DOI: 10.1155/2022/5054641] [Received: 08/02/2022] [Revised: 08/30/2022] [Accepted: 09/14/2022] [Indexed: 11/18/2022]
Abstract
With the emergence of the Internet of Things (IoT), the investigation of different diseases in healthcare has improved, and cloud computing has helped to centralize data and make patient records accessible throughout the world. In this context, the electrocardiogram (ECG) is used to diagnose heart diseases and abnormalities. Machine learning techniques have been used previously but are feature-based and not as accurate as transfer learning; this motivates the proposed development and validation of an embedded device for ECG arrhythmia empowered with transfer learning (DVEEA-TL) model. The model is a combination of hardware, software, and two datasets that are augmented and fused, and it achieves markedly higher accuracy than previous work and research. In the proposed model, a new dataset is made by combining the Kaggle dataset with a second dataset built from real-time healthy and unhealthy recordings, and the AlexNet transfer learning approach is then applied to obtain more accurate readings of the ECG signals. In this research, the DVEEA-TL model diagnoses heart abnormality with accuracies of 99.9% and 99.8% during the training and validation stages, respectively, which is a better and more reliable approach compared to previous research in this field.
32
Ullah N, Khan MS, Khan JA, Choi A, Anwar MS. A Robust End-to-End Deep Learning-Based Approach for Effective and Reliable BTD Using MR Images. Sensors (Basel) 2022; 22:7575. [PMID: 36236674] [PMCID: PMC9570935] [DOI: 10.3390/s22197575] [Received: 08/31/2022] [Revised: 10/01/2022] [Accepted: 10/02/2022] [Indexed: 06/16/2023]
Abstract
Detection of a brain tumor in the early stages is critical for clinical practice and survival rate. Brain tumors arise in multiple shapes, sizes, and features with various treatment options. Manual tumor detection is challenging, time-consuming, and prone to error. Magnetic resonance imaging (MRI) scans are mostly used for tumor detection due to their non-invasive properties, which also avoid painful biopsy. MRI scanning of one patient's brain generates many 3D images from multiple directions, making the manual detection of tumors very difficult, error-prone, and time-consuming. Therefore, there is a considerable need for autonomous diagnostic tools to detect brain tumors accurately. In this research, we have presented a novel TumorResNet deep learning (DL) model for brain tumor detection (BTD), i.e., binary classification. The TumorResNet model employs 20 convolution layers with a leaky ReLU (LReLU) activation function for feature map activation to compute the most distinctive deep features. Finally, three fully connected classification layers are used to classify brain MRIs into normal and tumorous. The performance of the proposed TumorResNet architecture is evaluated on a standard Kaggle brain tumor MRI dataset for BTD, which contains brain tumor and normal MR images. The proposed model achieved a good accuracy of 99.33% for BTD. These experimental results, including the cross-dataset setting, validate the superiority of the TumorResNet model over the contemporary frameworks. This study offers an automated BTD method that aids in the early diagnosis of brain cancers. This procedure has a substantial impact on improving treatment options and patient survival.
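The leaky ReLU (LReLU) activation used in TumorResNet's convolution layers passes positive inputs unchanged and scales negative inputs by a small slope; the slope value below is a common default, not one stated in the abstract:

```python
import numpy as np

def leaky_relu(x, alpha=0.01):
    """Leaky ReLU: identity for positive inputs, a small slope (alpha) for
    negative inputs, so gradients never vanish entirely for x < 0."""
    x = np.asarray(x, dtype=float)
    return np.where(x > 0, x, alpha * x)
```

Unlike the plain ReLU, which zeroes all negative activations, LReLU keeps a small signal on the negative side, which helps avoid "dead" feature-map units during training.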
Affiliation(s)
- Naeem Ullah
- Department of Software Engineering, University of Engineering and Technology, Taxila 47050, Pakistan
- Mohammad Sohail Khan
- Department of Computer Software Engineering, University of Engineering and Technology Mardan, Mardan 23200, Pakistan
- Javed Ali Khan
- Department of Software Engineering, University of Science and Technology Bannu, Bannu 28100, Pakistan
- Ahyoung Choi
- Department of AI Software, Gachon University, Seongnam-si 13120, Korea
33
Wahab F, Zhao Y, Javeed D, Al-Adhaileh MH, Almaaytah SA, Khan W, Saeed MS, Kumar Shah R. An AI-Driven Hybrid Framework for Intrusion Detection in IoT-Enabled E-Health. COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE 2022; 2022:6096289. [PMID: 36045979 PMCID: PMC9420579 DOI: 10.1155/2022/6096289] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 05/26/2022] [Revised: 07/22/2022] [Accepted: 07/26/2022] [Indexed: 11/17/2022]
Abstract
E-health has grown into a billion-dollar industry in the last decade. The high throughput of its devices makes them an obvious target for cyberattacks, and these environments desperately need protection. In this study, we present an artificial intelligence (AI)-driven, software-defined networking (SDN)-enabled intrusion detection system (IDS) to address increasing cyber threats in E-health and internet of medical things (IoMT) environments. AI's success in various fields, including big data and intrusion detection, has prompted us to develop a flexible and cost-effective approach to protect such critical environments from cyberattacks. We present a hybrid model consisting of long short-term memory (LSTM) and gated recurrent unit (GRU) layers. The proposed model was thoroughly evaluated using the publicly available CICDDoS2019 dataset and conventional evaluation measures, and, for proper validation, it was compared with relevant classifiers such as cu-GRU+DNN and cu-BLSTM as well as with the existing literature. Lastly, 10-fold cross-validation was used to verify that our results are unbiased. The proposed approach surpasses the current literature, with 99.01% accuracy, 99.04% precision, 98.80% recall, and a 99.12% F1-score.
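The hybrid detector above stacks LSTM and GRU layers; as background, one step of the GRU recurrence it builds on can be sketched as follows. Biases are omitted for brevity, and the weight shapes are illustrative assumptions, not values from the paper:

```python
import numpy as np

def gru_cell(x, h_prev, Wz, Uz, Wr, Ur, Wh, Uh):
    """One step of a gated recurrent unit: an update gate z and reset gate r
    control how much of the previous hidden state survives versus how much
    of the new candidate state is blended in."""
    sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))
    z = sigmoid(Wz @ x + Uz @ h_prev)               # update gate
    r = sigmoid(Wr @ x + Ur @ h_prev)               # reset gate
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h_prev))   # candidate state
    return (1 - z) * h_prev + z * h_tilde           # convex combination
```

With all weights at zero, both gates sit at 0.5 and the candidate state is 0, so the cell simply halves the previous hidden state, which makes the gating mechanics easy to verify by hand.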
Affiliation(s)
- Fazal Wahab
- College of Computer Science and Technology, Northeastern University, Shenyang 110169, China
- Yuhai Zhao
- College of Computer Science and Technology, Northeastern University, Shenyang 110169, China
- Danish Javeed
- Software College, Northeastern University, Shenyang 110169, China
- Mosleh Hmoud Al-Adhaileh
- Deanship of E-Learning and Distance Education, King Faisal University, P.O. Box 400, Al-Ahsa, Saudi Arabia
- Wasiat Khan
- Department of Software Engineering, University of Science and Technology Bannu, Bannu, Pakistan
34
Kumar A, Singh AK, Ahmad I, Kumar Singh P, Anushree, Verma PK, Alissa KA, Bajaj M, Ur Rehman A, Tag-Eldin E. A Novel Decentralized Blockchain Architecture for the Preservation of Privacy and Data Security against Cyberattacks in Healthcare. SENSORS (BASEL, SWITZERLAND) 2022; 22:5921. [PMID: 35957478 PMCID: PMC9371396 DOI: 10.3390/s22155921] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 06/24/2022] [Revised: 07/28/2022] [Accepted: 08/03/2022] [Indexed: 09/03/2024]
Abstract
Nowadays, in a world full of uncertainties and the threat of digital attacks and cyberattacks, blockchain technology is one of the major critical developments playing a vital role in the professional world. Along with energy, finance, governance, etc., the healthcare sector is one of the most prominent areas in which blockchain technology is being used. Data constitute our wealth and our currency, so vulnerability and security become even more significant and a vital point of concern for healthcare. Recent cyberattacks have raised questions of planning, requirements, and implementation for developing more cyber-secure models. This paper proposes a blockchain design that partitions network participants into clusters and preserves a single copy of the blockchain for every cluster. The paper introduces a novel blockchain mechanism for secure healthcare data management, which reduces communication and computational overhead costs compared with the existing Bitcoin network and the lightweight blockchain architecture. The paper also discusses how the proposed design can be used to address the recognized threats. The experimental results show that, as the number of nodes rises, the suggested architecture speeds up ledger updates by 63% and reduces network traffic by a factor of 10.
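The cluster-based design above keeps one ledger copy per cluster; the tamper evidence any such ledger relies on comes from hash-linking blocks. A minimal sketch of that mechanism follows, with field names that are illustrative assumptions rather than the paper's data model:

```python
import hashlib
import json

def make_block(index, data, prev_hash):
    """Create a block whose hash commits to its index, payload, and the
    hash of the previous block, forming a tamper-evident chain."""
    header = json.dumps({"index": index, "data": data, "prev": prev_hash},
                        sort_keys=True)
    return {"index": index, "data": data, "prev": prev_hash,
            "hash": hashlib.sha256(header.encode()).hexdigest()}

def chain_is_valid(chain):
    """Verify each block's stored hash and its link to the predecessor."""
    for i, block in enumerate(chain):
        header = json.dumps({"index": block["index"], "data": block["data"],
                             "prev": block["prev"]}, sort_keys=True)
        if block["hash"] != hashlib.sha256(header.encode()).hexdigest():
            return False
        if i > 0 and block["prev"] != chain[i - 1]["hash"]:
            return False
    return True
```

Altering any block's payload invalidates its stored hash, and altering the hash breaks the next block's `prev` link, so tampering is detectable anywhere in the chain.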
Affiliation(s)
- Ajitesh Kumar
- Department of Computer Engineering and Applications, GLA University, Mathura 281406, India
- Akhilesh Kumar Singh
- Department of Computer Engineering and Applications, GLA University, Mathura 281406, India
- Ijaz Ahmad
- Shenzhen College of Advanced Technology, University of Chinese Academy of Sciences (UCAS), Shenzhen 518055, China
- Pradeep Kumar Singh
- Department of Computer Engineering and Applications, GLA University, Mathura 281406, India
- Anushree
- Department of Computer Engineering and Applications, GLA University, Mathura 281406, India
- Pawan Kumar Verma
- Department of Computer Science Engineering, MIT Art, Design and Technology University, Pune 412201, India
- Khalid A. Alissa
- SAUDI ARAMCO Cybersecurity Chair, Networks and Communications Department, College of Computer Science and Information Technology, Imam Abdulrahman Bin Faisal University, P.O. Box 1982, Dammam 31441, Saudi Arabia
- Mohit Bajaj
- Department of Electrical and Electronics Engineering, National Institute of Technology Delhi, Delhi 110040, India
- Department of Electrical Engineering, Graphic Era (Deemed to be University), Dehradun 248002, India
- Ateeq Ur Rehman
- College of Internet of Things Engineering, Hohai University, Changzhou 213022, China
- Elsayed Tag-Eldin
- Faculty of Engineering and Technology, Future University in Egypt, New Cairo 11835, Egypt
35
Kazemi N, Gholizadeh N, Musilek P. Selective Microwave Zeroth-Order Resonator Sensor Aided by Machine Learning. SENSORS (BASEL, SWITZERLAND) 2022; 22:s22145362. [PMID: 35891042 PMCID: PMC9323907 DOI: 10.3390/s22145362] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/29/2022] [Revised: 07/03/2022] [Accepted: 07/15/2022] [Indexed: 06/13/2023]
Abstract
Microwave sensors are principally sensitive to effective permittivity and hence are not selective to a specific material under test (MUT). In this work, a highly compact microwave planar sensor based on zeroth-order resonance is designed to operate at three distant frequencies of 3.5, 4.3, and 5 GHz, with a size of only λg-min/8 per resonator. This resonator is deployed to characterize liquid mixtures of one desired MUT (here, water) combined with an interfering material (e.g., methanol, ethanol, or acetone) at concentrations from 0% to 100% in 10% steps. To achieve selectivity to water, a convolutional neural network (CNN) is used to recognize different concentrations of water regardless of the host medium. To obtain high classification accuracy, Style-GAN is utilized to generate reliable sensor responses for intermediate concentrations of water in each host medium (methanol, ethanol, and acetone). A high accuracy of 90.7% is achieved using the CNN for selectively discriminating water concentrations.
Affiliation(s)
- Nazli Kazemi
- Electrical and Computer Engineering, University of Alberta, Edmonton, AB T6G 1H9, Canada
- Nastaran Gholizadeh
- Electrical and Computer Engineering, University of Alberta, Edmonton, AB T6G 1H9, Canada
- Petr Musilek
- Electrical and Computer Engineering, University of Alberta, Edmonton, AB T6G 1H9, Canada
- Applied Cybernetics, University of Hradec Králové, 500 03 Hradec Králové, Czech Republic
36
Accessing Artificial Intelligence for Fetus Health Status Using Hybrid Deep Learning Algorithm (AlexNet-SVM) on Cardiotocographic Data. SENSORS 2022; 22:s22145103. [PMID: 35890783 PMCID: PMC9319518 DOI: 10.3390/s22145103] [Citation(s) in RCA: 12] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 04/13/2022] [Revised: 06/25/2022] [Accepted: 07/04/2022] [Indexed: 12/22/2022]
Abstract
Artificial intelligence is serving as an impetus in digital health, clinical support, and health informatics for informed patient outcomes. Previous studies consider only the classification accuracy on cardiotocographic (CTG) datasets and disregard computational time, which is a relevant parameter in a clinical environment. This paper proposes a modified deep neural algorithm to classify untapped pathological and suspicious CTG recordings with the desired time complexity. In our newly developed classification algorithm, the AlexNet architecture is merged with support vector machines (SVMs) at the fully connected layers to reduce time complexity. We used an open-source UCI Machine Learning Repository dataset of CTG recordings, dividing 2126 recordings with 23 attributes into 3 classes (Normal, Pathological, and Suspected); the attributes were dynamically programmed and fed to our algorithm. We employed a deep transfer learning (TL) mechanism to transfer prelearned features to our model. To reduce time complexity, we partially trained the layers in the convolutional base and left the others frozen. We used the Adam optimizer to tune the hyperparameters. The presented algorithm outperforms leading architectures (RCNNs, ResNet, DenseNet, and GoogLeNet) with respect to real-time accuracy, sensitivity, and specificity of 99.72%, 96.67%, and 99.6%, respectively, making it a viable candidate for clinical settings after real-time validation.
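The partial-freezing strategy described above, training only the later layers of a pretrained convolutional base, can be expressed framework-independently as a small helper. The layer names and the dict representation are illustrative assumptions, not the paper's implementation:

```python
def split_trainable(layers, n_frozen):
    """Mark the first n_frozen layers as frozen (weights fixed during
    fine-tuning) and leave the remaining layers trainable, as in partial
    transfer learning on a pretrained convolutional base."""
    return [{"name": name, "trainable": i >= n_frozen}
            for i, name in enumerate(layers)]
```

Freezing early layers preserves the generic pretrained features while reducing the number of gradients computed per step, which is the source of the time-complexity savings the abstract claims.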
37
Ahmad S, Ullah T, Ahmad I, AL-Sharabi A, Ullah K, Khan RA, Rasheed S, Ullah I, Uddin MN, Ali MS. A Novel Hybrid Deep Learning Model for Metastatic Cancer Detection. COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE 2022; 2022:8141530. [PMID: 35785076 PMCID: PMC9249449 DOI: 10.1155/2022/8141530] [Citation(s) in RCA: 9] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 02/20/2022] [Revised: 04/28/2022] [Accepted: 06/01/2022] [Indexed: 12/18/2022]
Abstract
Cancer is a heterogeneous disease with various subtypes that abruptly destroys the body's normal cells. As a result, it is essential to detect and prognosticate each distinct type of cancer, since early-stage treatment may help cancer survivors; patients must also be divided into high- and low-risk groups. Efficient cancer detection is frequently a time-consuming and exhausting task with a high possibility of pathologist error, and while previous studies employed data mining and machine learning (ML) techniques to identify cancer, those strategies rely on handcrafted feature extraction, which can result in incorrect classification. By contrast, deep learning (DL) is robust in feature extraction and has recently been widely used for classification and detection purposes. This research implemented a novel hybrid AlexNet-gated recurrent unit (AlexNet-GRU) model for lymph node (LN) breast cancer detection and classification. We used the well-known Kaggle PCam dataset to classify LN cancer samples. The study tests and compares three models: a convolutional neural network with GRU (CNN-GRU), a CNN with long short-term memory (CNN-LSTM), and the proposed AlexNet-GRU. The experimental results indicated that the accuracy, precision, sensitivity, and specificity of the proposed model (99.50%, 98.10%, 98.90%, and 97.50%, respectively) can reduce the pathologist errors of incorrect classification that occur during the diagnosis process, with significantly better performance than the CNN-GRU and CNN-LSTM models. The proposed model is also compared with other recent ML/DL algorithms, which reveals that AlexNet-GRU is computationally efficient and superior to state-of-the-art methods for LN breast cancer detection and classification.
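The four figures quoted for AlexNet-GRU follow the standard confusion-matrix definitions, which can be stated compactly. This is a generic sketch of those definitions, not code from the paper:

```python
def classification_metrics(tp, fp, tn, fn):
    """Compute accuracy, precision, sensitivity (recall), and specificity
    from binary confusion-matrix counts: true/false positives (tp, fp)
    and true/false negatives (tn, fn)."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)
    sensitivity = tp / (tp + fn)   # a.k.a. recall
    specificity = tn / (tn + fp)
    return accuracy, precision, sensitivity, specificity
```

Reporting all four together, as the abstract does, matters for medical screening: sensitivity bounds the missed-cancer rate while specificity bounds false alarms, and neither is visible from accuracy alone.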
Affiliation(s)
- Shahab Ahmad
- School of Management Science and Engineering, Chongqing University of Post and Telecommunication, Chongqing 400065, China
- Tahir Ullah
- Department of Electronics and Information Engineering, Xi'an Jiaotong University, Xi'an, China
- Ijaz Ahmad
- Shenzhen College of Advanced Technology, University of Chinese Academy of Sciences, Shenzhen 518055, China
- Kalim Ullah
- Department of Zoology, Kohat University of Science and Technology, Kohat 26000, Pakistan
- Rehan Ali Khan
- Department of Electrical Engineering, University of Science and Technology, Bannu 28100, Pakistan
- Saim Rasheed
- Department of Information Technology, Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah, Saudi Arabia
- Inam Ullah
- College of Internet of Things (IoT) Engineering, Hohai University (HHU), Changzhou Campus, Nanjing 213022, China
- Md. Nasir Uddin
- Communication Research Laboratory, Department of Information and Communication Technology, Islamic University, Kushtia 7003, Bangladesh
- Md. Sadek Ali
- Communication Research Laboratory, Department of Information and Communication Technology, Islamic University, Kushtia 7003, Bangladesh
38
A Novel CovidDetNet Deep Learning Model for Effective COVID-19 Infection Detection Using Chest Radiograph Images. APPLIED SCIENCES-BASEL 2022. [DOI: 10.3390/app12126269] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/24/2022]
Abstract
The suspected cases of COVID-19 must be detected quickly and accurately to avoid large-scale transmission. Existing COVID-19 diagnostic tests are slow and take several hours to generate the required results, whereas most X-rays or chest radiographs take less than 15 min to complete. Therefore, chest radiographs can provide a solution for early and accurate COVID-19 detection and diagnosis, reducing treatment problems for COVID-19 patients and saving time. For this purpose, CovidDetNet is proposed, comprising ten learnable layers: nine convolutional layers and one fully connected layer. The architecture uses two activation functions, ReLU and leaky ReLU, and two normalization operations, batch normalization and cross-channel normalization. It is a novel deep learning-based approach that automatically and reliably detects COVID-19 using chest radiograph images. Towards this, a fine-grained COVID-19 classification experiment is conducted to classify chest radiograph images as normal, COVID-19 positive, or pneumonia. The performance of the proposed CovidDetNet model is evaluated on a standard COVID-19 Radiography Database. Moreover, we compared our approach with hybrid approaches in which deep learning models serve as feature extractors and support vector machines (SVMs) as the classifier. Experimental results on the dataset showed the superiority of the proposed CovidDetNet model over existing methods, outperforming the baseline hybrid deep learning-based models with a high accuracy of 98.40%.
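Of the two normalization operations named above, batch normalization is the more broadly used and can be sketched over a single 1-D batch of activations. The default `gamma`, `beta`, and `eps` values are common conventions assumed here, not parameters taken from CovidDetNet:

```python
def batch_norm(x, gamma=1.0, beta=0.0, eps=1e-5):
    """Normalize a batch of activations to zero mean and unit variance,
    then apply the learnable scale (gamma) and shift (beta). eps guards
    against division by zero for near-constant batches."""
    mean = sum(x) / len(x)
    var = sum((v - mean) ** 2 for v in x) / len(x)
    return [gamma * (v - mean) / (var + eps) ** 0.5 + beta for v in x]
```

In a real network this statistic is computed per channel across the batch, with `gamma` and `beta` learned during training; the sketch shows only the core normalize-then-affine step.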
39
Ahmad I, Wang X, Zhu M, Wang C, Pi Y, Khan JA, Khan S, Samuel OW, Chen S, Li G. EEG-Based Epileptic Seizure Detection via Machine/Deep Learning Approaches: A Systematic Review. COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE 2022; 2022:6486570. [PMID: 35755757 PMCID: PMC9232335 DOI: 10.1155/2022/6486570] [Citation(s) in RCA: 26] [Impact Index Per Article: 13.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 02/24/2022] [Accepted: 05/10/2022] [Indexed: 12/21/2022]
Abstract
Epileptic seizure is one of the most chronic neurological diseases, and it instantaneously disrupts the lifestyle of affected individuals. Toward developing novel and efficient technology for epileptic seizure management, recent diagnostic approaches have focused on developing machine/deep learning (ML/DL) model-based electroencephalogram (EEG) methods. Importantly, EEG's noninvasiveness and ability to offer repeated patterns of epilepsy-related electrophysiological information have motivated the development of varied ML/DL algorithms for epileptic seizure diagnosis in recent years. However, EEG's low amplitude and nonstationary characteristics make it difficult for existing ML/DL models to achieve consistent and satisfactory diagnosis outcomes, especially in clinical settings, where environmental factors can hardly be avoided. Though several recent works have explored EEG-based ML/DL methods and statistical features for seizure diagnosis, it is unclear what the advantages and limitations of these works are, or what the appropriate criteria are for selecting ML/DL models and statistical feature extraction methods, which might preclude the advancement of research and development in EEG-based epileptic seizure diagnosis. Therefore, this paper attempts to bridge this research gap by conducting an extensive systematic review of recent developments in EEG-based ML/DL technologies for epileptic seizure diagnosis. In the review, current developments in seizure diagnosis, various statistical feature extraction methods, ML/DL models, their performances, limitations, and core challenges as applied in EEG-based epileptic seizure diagnosis are meticulously reviewed and compared. In addition, proper criteria for selecting appropriate and efficient feature extraction techniques and ML/DL models for epileptic seizure diagnosis are also discussed. Findings from this study will aid researchers in selecting the most efficient ML/DL models and optimal feature extraction methods to improve the performance of EEG-based epileptic seizure detection.
Affiliation(s)
- Ijaz Ahmad
- CAS Key Laboratory of Human-Machine Intelligence-Synergy Systems, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Shenzhen College of Advanced Technology, University of Chinese Academy of Sciences, Shenzhen, China
- Guangdong-Hong Kong-Macao Joint Laboratory of Human-Machine Intelligence-Synergy Systems, Chinese Academy of Sciences, Shenzhen, China
- Xin Wang
- CAS Key Laboratory of Human-Machine Intelligence-Synergy Systems, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Shenzhen College of Advanced Technology, University of Chinese Academy of Sciences, Shenzhen, China
- Guangdong-Hong Kong-Macao Joint Laboratory of Human-Machine Intelligence-Synergy Systems, Chinese Academy of Sciences, Shenzhen, China
- Mingxing Zhu
- Shenzhen College of Advanced Technology, University of Chinese Academy of Sciences, Shenzhen, China
- School of Electronics and Information Engineering, Harbin Institute of Technology, Shenzhen, China
- Cheng Wang
- CAS Key Laboratory of Human-Machine Intelligence-Synergy Systems, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Shenzhen College of Advanced Technology, University of Chinese Academy of Sciences, Shenzhen, China
- Guangdong-Hong Kong-Macao Joint Laboratory of Human-Machine Intelligence-Synergy Systems, Chinese Academy of Sciences, Shenzhen, China
- Yao Pi
- School of Biomedical Engineering, Sun Yat-Sen University, Guangzhou, China
- Javed Ali Khan
- Department of Software Engineering, University of Science and Technology, Bannu, Khyber Pakhtunkhwa, Pakistan
- Siyab Khan
- Institute of Computer Science and Information Technology, The University of Agriculture, Peshawar, Khyber Pakhtunkhwa, Pakistan
- Oluwarotimi Williams Samuel
- CAS Key Laboratory of Human-Machine Intelligence-Synergy Systems, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Guangdong-Hong Kong-Macao Joint Laboratory of Human-Machine Intelligence-Synergy Systems, Chinese Academy of Sciences, Shenzhen, China
- Shixiong Chen
- CAS Key Laboratory of Human-Machine Intelligence-Synergy Systems, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Guangdong-Hong Kong-Macao Joint Laboratory of Human-Machine Intelligence-Synergy Systems, Chinese Academy of Sciences, Shenzhen, China
- Guanglin Li
- CAS Key Laboratory of Human-Machine Intelligence-Synergy Systems, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Guangdong-Hong Kong-Macao Joint Laboratory of Human-Machine Intelligence-Synergy Systems, Chinese Academy of Sciences, Shenzhen, China
40
Abstract
Solutions of multi-objective optimization problems (MOPs) are required to balance convergence and distribution toward the Pareto front. This paper proposes a multi-objective quantum-inspired seagull optimization algorithm (MOQSOA) to optimize the convergence and distribution of solutions in multi-objective optimization problems. The proposed algorithm adopts opposition-based learning, the migration and attacking behavior of seagulls, grid ranking, and the superposition principles of quantum computing. To obtain a better initial population in the absence of a priori knowledge, an opposition-based learning mechanism is used for initialization. The proposed algorithm uses nonlinear migration and attacking operations, simulating the behavior of seagulls, for exploration and exploitation. Moreover, a real-coded quantum representation of the current optimal solution and a quantum rotation gate are adopted to update the seagull population. In addition, a grid mechanism, including global grid ranking and grid density ranking, provides a criterion for leader selection and archive control. The experimental results of the IGD and Spacing metrics on the ZDT, DTLZ, and UF test suites demonstrate the superiority of MOQSOA over NSGA-II, MOEA/D, MOPSO, IMMOEA, RVEA, and LMEA for enhancing the distribution and convergence performance of MOPs.
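The opposition-based initialization step described above pairs each random candidate with its "opposite" across the search bounds. A minimal sketch under the usual definition (opposite of x in [lower, upper] is lower + upper - x), with a scalar bound per dimension as a simplifying assumption:

```python
import random

def opposition_init(pop_size, lower, upper, dim):
    """Opposition-based initialization: generate a random population and,
    for each candidate, its opposite point lower + upper - x. A typical
    algorithm then keeps the fitter of each pair as the starting population."""
    population = [[random.uniform(lower, upper) for _ in range(dim)]
                  for _ in range(pop_size)]
    opposites = [[lower + upper - x for x in ind] for ind in population]
    return population, opposites
```

Evaluating both pools and keeping the better half doubles the chance of starting near a good region of the search space at the cost of one extra fitness evaluation per candidate.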
41
Dogan S, Barua PD, Baygin M, Chakraborty S, Ciaccio E, Tuncer T, Abd Kadir KA, Md Shah MN, Azman RR, Lee CC, Ng KH, Acharya UR. Novel multiple pooling and local phase quantization stable feature extraction techniques for automated classification of brain infarcts. Biocybern Biomed Eng 2022. [DOI: 10.1016/j.bbe.2022.06.004] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]