1
MoradiAmin M, Yousefpour M, Samadzadehaghdam N, Ghahari L, Ghorbani M, Mafi M. Automatic classification of acute lymphoblastic leukemia cells and lymphocyte subtypes based on a novel convolutional neural network. Microsc Res Tech 2024; 87:1615-1626. [PMID: 38445461] [DOI: 10.1002/jemt.24551]
Abstract
Acute lymphoblastic leukemia (ALL) is a life-threatening disease that commonly affects children and is classified into three subtypes: L1, L2, and L3. Traditionally, ALL is diagnosed through morphological analysis, involving the examination of blood and bone marrow smears by pathologists. However, this manual process is time-consuming, laborious, and prone to errors. Moreover, the significant morphological similarity between ALL and various lymphocyte subtypes, such as normal, atypical, and reactive lymphocytes, further complicates the feature extraction and detection process. The aim of this study is to develop an accurate and efficient automatic system to distinguish ALL cells from these similar lymphocyte subtypes without the need for direct feature extraction. First, the contrast of microscopic images is enhanced using histogram equalization, which improves the visibility of important features. Next, a fuzzy C-means clustering algorithm is employed to segment cell nuclei, as they play a crucial role in ALL diagnosis. Finally, a novel convolutional neural network (CNN) with three convolutional layers is used to classify the segmented nuclei into six distinct classes. The CNN is trained on a labeled dataset, allowing it to learn the distinguishing features of each class. To evaluate the performance of the proposed model, quantitative metrics are employed, and a comparison is made with three well-known deep networks: VGG-16, DenseNet, and Xception. The results demonstrate that the proposed model outperforms these networks, achieving an approximate accuracy of 97%, and its performance surpasses that of other studies focused on six-class classification in the context of ALL diagnosis. RESEARCH HIGHLIGHTS: Deep neural networks eliminate the requirement for explicit feature extraction in ALL classification. The proposed convolutional neural network achieves an accuracy of approximately 97% in classifying six ALL and lymphocyte subtypes.
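The nucleus-segmentation step in this abstract rests on fuzzy C-means clustering. As an illustration only (not the authors' code), a minimal numpy sketch of 1-D fuzzy C-means on pixel intensities might look like the following; the toy intensity values and all variable names are hypothetical:

```python
import numpy as np

def fuzzy_c_means(x, c=2, m=2.0, iters=50, seed=0):
    """Minimal 1-D fuzzy C-means on a flat array of pixel intensities.

    Returns the cluster centers and the membership matrix U (n x c),
    where each row sums to 1 (a valid fuzzy partition).
    """
    rng = np.random.default_rng(seed)
    n = x.size
    u = rng.random((n, c))
    u /= u.sum(axis=1, keepdims=True)            # normalize memberships
    for _ in range(iters):
        um = u ** m                               # fuzzified memberships
        centers = (um * x[:, None]).sum(axis=0) / um.sum(axis=0)
        d = np.abs(x[:, None] - centers[None, :]) + 1e-12
        inv = d ** (-2.0 / (m - 1.0))             # standard FCM update
        u = inv / inv.sum(axis=1, keepdims=True)
    return centers, u

# toy "image": dark nucleus pixels vs bright background pixels
pixels = np.concatenate([np.full(100, 40.0), np.full(100, 200.0)])
centers, u = fuzzy_c_means(pixels, c=2)
labels = u.argmax(axis=1)                         # hard labels for segmentation
```

In a real pipeline the hard labels would be reshaped back to the image grid to produce the nucleus mask fed to the CNN.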
Affiliation(s)
- Morteza MoradiAmin
- Department of Physiology, Faculty of Medicine, AJA University of Medical Sciences, Tehran, Iran
- Department of Medical Physics and Biomedical Engineering, School of Medicine, Tehran University of Medical Sciences (TUMS), Tehran, Iran
- Mitra Yousefpour
- Department of Physiology, Faculty of Medicine, AJA University of Medical Sciences, Tehran, Iran
- Nasser Samadzadehaghdam
- Department of Biomedical Engineering, Faculty of Advanced Medical Sciences, Tabriz University of Medical Sciences, Tabriz, Iran
- Laya Ghahari
- Department of Anatomy, Faculty of Medicine, AJA University of Medical Sciences, Tehran, Iran
- Mahdi Ghorbani
- Department of Medical Laboratory Sciences, School of Allied Medical Sciences, AJA University of Medical Sciences, Tehran, Iran
- Medical Biotechnology Research Center, AJA University of Medical Sciences, Tehran, Iran
- Majid Mafi
- Mechanical Engineering Department, Iran University of Science and Technology, Tehran, Iran
2
Soleimani M, Harooni A, Erfani N, Khan AR, Saba T, Bahaj SA. Classification of cancer types based on microRNA expression using a hybrid radial basis function and particle swarm optimization algorithm. Microsc Res Tech 2024; 87:1052-1062. [PMID: 38230557] [DOI: 10.1002/jemt.24492]
Abstract
The diagnosis and treatment of cancer remains one of the most challenging aspects of the medical profession, despite advances in disease diagnosis. MicroRNAs are small noncoding RNA molecules involved in regulating gene expression and are associated with several cancer types. Therefore, the analysis of microRNA data has become one of the most important areas of cancer research in recent years. This paper presents an improved method for cancer-type classification based on microRNA expression data using a hybrid radial basis function (RBF) and particle swarm optimization (PSO) algorithm. Two datasets containing microRNA information were used, and preprocessing and normalization operations were performed on the raw data. Feature selection was carried out using the PSO algorithm, which can identify the most relevant and informative features in the data and help prioritize them. Using a PSO algorithm for feature selection is an effective approach to microRNA analysis that enhances the accuracy and reliability of cancer-type classification based on microRNA expression data. With the proposed method, we achieved accuracies of 95% and 91% on the two datasets, respectively (an average of 93%), using an improved RBF neural network classifier. These results demonstrate that the proposed method outperforms previous work. RESEARCH HIGHLIGHTS: To enhance the accuracy of cancer-type classification based on microRNA expression data, we present a minimal feature selection method that uses particle swarm optimization to reduce computational load and a radial basis function network to improve accuracy.
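The PSO component can be sketched independently of the microRNA data. Below is a minimal global-best PSO in numpy, shown here minimizing a simple sphere function rather than the paper's feature-subset fitness; the swarm size, inertia, and acceleration constants are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def pso(fitness, dim, n_particles=30, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Plain global-best PSO minimizing `fitness` over [-5, 5]^dim."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_particles, dim))    # particle positions
    v = np.zeros_like(x)                          # particle velocities
    pbest = x.copy()                              # personal bests
    pbest_f = np.array([fitness(p) for p in x])
    g = pbest[pbest_f.argmin()].copy()            # global best
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        f = np.array([fitness(p) for p in x])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        g = pbest[pbest_f.argmin()].copy()
    return g, pbest_f.min()

best, best_f = pso(lambda p: float((p ** 2).sum()), dim=5)
```

For feature selection, the fitness would instead score a candidate feature subset (e.g., classifier accuracy with those features), with positions thresholded into include/exclude decisions.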
Affiliation(s)
- Masoumeh Soleimani
- Department of Mathematics and Statistical Sciences, Clemson University, Clemson, South Carolina, USA
- Aryan Harooni
- Department of Electrical and Computer Engineering, Isfahan University of Technology, Isfahan, Iran
- Nasim Erfani
- Department of Computer Engineering, Dolatabad Branch, Islamic Azad University, Isfahan, Iran
- Amjad Rehman Khan
- Artificial Intelligence & Data Analytics Lab, CCIS, Prince Sultan University, Riyadh, Saudi Arabia
- Tanzila Saba
- Artificial Intelligence & Data Analytics Lab, CCIS, Prince Sultan University, Riyadh, Saudi Arabia
- Saeed Ali Bahaj
- MIS Department, College of Business Administration, Prince Sattam Bin Abdulaziz University, Al-Kharj, Saudi Arabia
3
Alyami J, Rehman A, Sadad T, Alruwaythi M, Saba T, Bahaj SA. Automatic skin lesions detection from images through microscopic hybrid features set and machine learning classifiers. Microsc Res Tech 2022; 85:3600-3607. [PMID: 35876390] [DOI: 10.1002/jemt.24211]
Abstract
Skin cancer occurrences are increasing worldwide, partly due to a lack of awareness among large parts of the population and a shortage of skin specialists. Medical imaging can help with early detection and more accurate diagnosis of skin cancer. Physicians usually follow a manual diagnosis method in their clinics, but assessments by non-specialist practitioners can affect the accuracy of the results. An automated system is therefore required to assist physicians in diagnosing skin cancer precisely at an early stage and thereby decrease the mortality rate. This article presents automatic skin lesion detection through a microscopic hybrid feature set and machine learning-based classification. The combination of deep features from the AlexNet architecture with the local optimal-oriented pattern descriptor can accurately predict skin lesions. The proposed model is tested on two open-access datasets, PAD-UFES-20 and MED-NODE, comprising melanoma and nevus images. Experimental results on both datasets demonstrate the efficacy of hybrid features combined with machine learning. Finally, the proposed model achieved 94.7% accuracy using an ensemble classifier.
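Hand-crafted local pattern descriptors such as the local optimal-oriented pattern used here belong to the LBP family. As a hedged illustration of that family (the classic 8-neighbor local binary pattern, not the exact descriptor from the paper), a numpy sketch:

```python
import numpy as np

def lbp_8(img):
    """3x3 local binary pattern codes for the interior pixels of a 2-D image.

    Each neighbor >= center contributes one bit, giving a code in [0, 255].
    """
    c = img[1:-1, 1:-1]                                    # center pixels
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]            # 8 neighbors
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(shifts):
        n = img[1 + dy:img.shape[0] - 1 + dy,
                1 + dx:img.shape[1] - 1 + dx]              # shifted neighbor view
        code |= ((n >= c).astype(np.uint8) << bit)
    return code

img = np.array([[1, 2, 3, 4],
                [5, 6, 7, 8],
                [9, 10, 11, 12],
                [13, 14, 15, 16]], dtype=float)
codes = lbp_8(img)
hist = np.bincount(codes.ravel(), minlength=256)  # histogram = texture descriptor
```

The 256-bin histogram of codes is what would typically be concatenated with deep (e.g., AlexNet) features to form a hybrid feature set.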
Affiliation(s)
- Jaber Alyami
- Department of Diagnostic Radiology, King Abdulaziz University, Jeddah, Saudi Arabia
- Animal House Unit, King Fahd Medical Research Center, King Abdulaziz University, Jeddah, Saudi Arabia
- Smart Medical Imaging Research Group, King Abdulaziz University, Jeddah, Saudi Arabia
- Amjad Rehman
- Artificial Intelligence & Data Analytics Lab, CCIS, Prince Sultan University, Riyadh, Saudi Arabia
- Tariq Sadad
- Department of Computer Science & Software Engineering, International Islamic University, Islamabad, Pakistan
- Maryam Alruwaythi
- Artificial Intelligence & Data Analytics Lab, CCIS, Prince Sultan University, Riyadh, Saudi Arabia
- Tanzila Saba
- Artificial Intelligence & Data Analytics Lab, CCIS, Prince Sultan University, Riyadh, Saudi Arabia
- Saeed Ali Bahaj
- MIS Department, College of Business Administration, Prince Sattam Bin Abdulaziz University, Al-Kharj, Saudi Arabia
4
Facial Emotion Recognition Using Conventional Machine Learning and Deep Learning Methods: Current Achievements, Analysis and Remaining Challenges. Information 2022. [DOI: 10.3390/info13060268]
Abstract
Facial emotion recognition (FER) is an emerging and significant research area in the pattern recognition domain. In daily life, the role of non-verbal communication is significant: estimates of its share of overall communication range from roughly 55% to 93%. Facial emotion analysis is used in surveillance video, expression analysis, gesture recognition, smart homes, computer games, depression treatment, patient monitoring, anxiety assessment, lie detection, psychoanalysis, paralinguistic communication, operator fatigue detection, and robotics. In this paper, we present a detailed review of FER. The literature is collected from reputable research published during the current decade. The review covers conventional machine learning (ML) and various deep learning (DL) approaches. Further, publicly available FER datasets and evaluation metrics are discussed and compared with benchmark results. This paper provides a holistic review of FER using traditional ML and DL methods to highlight the remaining gaps in this domain for new researchers. Finally, this review serves as a guidebook for young researchers in the FER area, providing a general understanding and basic knowledge of the current state-of-the-art methods, and points experienced researchers toward productive directions for future work.
5
Akbar S, Hassan SA, Shoukat A, Alyami J, Bahaj SA. Detection of microscopic glaucoma through fundus images using deep transfer learning approach. Microsc Res Tech 2022; 85:2259-2276. [PMID: 35170136] [DOI: 10.1002/jemt.24083]
Abstract
Glaucoma in humans can lead to blindness if it progresses to the point where it affects the optic nerve head. It is not easily detected since there are often no early symptoms, but it can be identified using tonometry, ophthalmoscopy, and perimetry. Advances in artificial intelligence have allowed machine learning techniques to support diagnosis at an early stage. Numerous machine learning methods have been proposed to diagnose glaucoma on different datasets, but these are often complex. Although several medical imaging instruments are used for glaucoma screening, fundus imaging is the most widely used technique for glaucoma detection. This study presents a novel combination of DenseNet and DarkNet to classify normal and glaucoma-affected fundus images. These frameworks have been trained and tested on three datasets: high-resolution fundus (HRF), RIM 1, and ACRIMA. A total of 658 images of healthy eyes and 612 images of glaucoma-affected eyes were used for classification. The fusion of DenseNet and DarkNet outperforms the two individual CNNs, achieving 99.7% accuracy, 98.9% sensitivity, and 100% specificity on the HRF database. For the RIM1 database, 89.3% accuracy, 93.3% sensitivity, and 88.46% specificity were attained, and for the ACRIMA database, 99% accuracy, 100% sensitivity, and 99% specificity were achieved. The proposed method is therefore robust and efficient, with less computational time and complexity than methods in the available literature.
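The DenseNet-DarkNet fusion described above can be read as late (score-level) fusion of two classifiers' outputs. A minimal sketch of fusion by probability averaging, with hypothetical logits standing in for the two backbones' outputs (the abstract does not specify the exact fusion rule, so averaging is an assumption for illustration):

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)   # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def fuse(logits_a, logits_b):
    """Average the class probabilities of two backbones and take the argmax."""
    p = 0.5 * (softmax(logits_a) + softmax(logits_b))
    return p, p.argmax(axis=1)

# hypothetical logits from two backbones scoring 3 fundus images
# over two classes {0: normal, 1: glaucoma}
la = np.array([[2.0, 0.1], [0.2, 1.5], [1.0, 1.1]])
lb = np.array([[1.8, 0.3], [0.5, 2.0], [0.4, 1.6]])
probs, preds = fuse(la, lb)
```

Averaging probabilities (rather than logits) keeps each backbone's confidence calibrated on the same scale before combining.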
Affiliation(s)
- Shahzad Akbar
- Riphah College of Computing, Riphah International University, Faisalabad Campus, Faisalabad, Pakistan
- Syed Ale Hassan
- Riphah College of Computing, Riphah International University, Faisalabad Campus, Faisalabad, Pakistan
- Ayesha Shoukat
- Riphah College of Computing, Riphah International University, Faisalabad Campus, Faisalabad, Pakistan
- Jaber Alyami
- Department of Diagnostic Radiology, Faculty of Applied Medical Sciences, King Abdulaziz University, Jeddah, 21589, Saudi Arabia
- Imaging Unit, King Fahd Medical Research Center, King Abdulaziz University, Jeddah, 21589, Saudi Arabia
- Saeed Ali Bahaj
- MIS Department, College of Business Administration, Prince Sattam Bin Abdulaziz University, Alkharj, 11942, Saudi Arabia
6
Alyami J, Khan AR, Bahaj SA, Fati SM. Microscopic handcrafted features selection from computed tomography scans for early stage lungs cancer diagnosis using hybrid classifiers. Microsc Res Tech 2022; 85:2181-2191. [DOI: 10.1002/jemt.24075]
Affiliation(s)
- Jaber Alyami
- Department of Diagnostic Radiology, Faculty of Applied Medical Sciences, King Abdulaziz University, Jeddah, Saudi Arabia
- Imaging Unit, King Fahd Medical Research Center, King Abdulaziz University, Jeddah, Saudi Arabia
- Amjad Rehman Khan
- Artificial Intelligence & Data Analytics Lab, CCIS, Prince Sultan University, Riyadh, Saudi Arabia
- Saeed Ali Bahaj
- MIS Department, College of Business Administration, Prince Sattam Bin Abdulaziz University, Alkharj, Saudi Arabia
- Suliman Mohamed Fati
- Artificial Intelligence & Data Analytics Lab, CCIS, Prince Sultan University, Riyadh, Saudi Arabia
7
Saba T, Rehman A, Shahzad MN, Latif R, Bahaj SA, Alyami J. Machine learning for post-traumatic stress disorder identification utilizing resting-state functional magnetic resonance imaging. Microsc Res Tech 2022; 85:2083-2094. [PMID: 35088496] [DOI: 10.1002/jemt.24065]
Abstract
Early detection of post-traumatic stress disorder (PTSD) is essential for proper treatment and recovery. The purpose of this study was to investigate performance deviations in regions of interest (ROI) of the PTSD brain relative to healthy brain regions, to assess interregional functional connectivity, and to apply machine learning techniques to distinguish PTSD subjects from healthy controls using resting-state functional magnetic resonance imaging (rs-fMRI). The rs-fMRI data of 10 ROI were extracted from 14 confirmed PTSD subjects and 14 healthy controls. The rs-fMRI data of the selected ROI were analyzed with ANOVA to measure performance level and with Pearson's correlation to investigate interregional functional connectivity in PTSD brains. For machine learning, logistic regression, K-nearest neighbor (KNN), and support vector machine (SVM) with linear, radial basis function, and polynomial kernels were used to classify PTSD and control subjects. The performance level in brain regions of PTSD subjects deviated from that of the corresponding regions in healthy brains. In addition, significant positive or negative functional connectivity was observed among ROI in PTSD brains. The rs-fMRI data were divided into training, validation, and testing groups for the implementation of the machine learning techniques. KNN and SVM with the radial basis function kernel outperformed the other methods, with accuracies of (96.6%, 94.8%, 98.5%) and (93.7%, 95.2%, 99.2%) on the training, validation, and test datasets, respectively. The study's findings may provide a guideline for observing performance and functional connectivity of brain regions in PTSD and for discriminating PTSD subjects using the suggested algorithms.
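The interregional functional connectivity analysis amounts to a Pearson correlation matrix over ROI time series. A small numpy sketch with synthetic time series: the ROI count of 10 mirrors the abstract, but the series length, coupling strength, and threshold are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)
t, n_roi = 200, 10
ts = rng.standard_normal((t, n_roi))        # BOLD time series, one column per ROI
# make ROI 1 strongly coupled to ROI 0 so the matrix shows a clear edge
ts[:, 1] = 0.8 * ts[:, 0] + 0.2 * rng.standard_normal(t)

fc = np.corrcoef(ts, rowvar=False)          # 10 x 10 Pearson connectivity matrix
# strongly connected ROI pairs (upper triangle, |r| above a chosen threshold)
edges = np.argwhere(np.triu(np.abs(fc) > 0.5, k=1))
```

In the study, positive and negative entries of such a matrix would be tested for significance to characterize PTSD versus control connectivity.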
Affiliation(s)
- Tanzila Saba
- Artificial Intelligence & Data Analytics Lab (AIDA), CCIS, Prince Sultan University, Riyadh, 11586, Saudi Arabia
- Amjad Rehman
- Artificial Intelligence & Data Analytics Lab (AIDA), CCIS, Prince Sultan University, Riyadh, 11586, Saudi Arabia
- Rabia Latif
- Artificial Intelligence & Data Analytics Lab (AIDA), CCIS, Prince Sultan University, Riyadh, 11586, Saudi Arabia
- Saeed Ali Bahaj
- MIS Department, College of Business Administration, Prince Sattam Bin Abdulaziz University, Alkharj, 11942, Saudi Arabia
- Jaber Alyami
- Department of Diagnostic Radiology, Faculty of Applied Medical Science, King Abdulaziz University, Jeddah, 21589, Saudi Arabia
- Imaging Unit, King Fahd Medical Research Center, King Abdulaziz University, Jeddah, 21589, Saudi Arabia
8
Harris Hawks Sparse Auto-Encoder Networks for Automatic Speech Recognition System. Applied Sciences 2022. [DOI: 10.3390/app12031091]
Abstract
Automatic speech recognition (ASR) is an effective technique for converting human speech into text or computer actions. ASR systems are widely used in smart appliances, smart homes, and biometric systems. Signal processing and machine learning techniques are combined to recognize speech. However, traditional systems perform poorly in noisy environments, and accents and regional differences further degrade ASR performance when analyzing speech signals. To overcome these issues, a more precise speech recognition system was developed. This paper uses speech data from the jim-schwoebel voice datasets, processed into Mel-frequency cepstral coefficients (MFCCs). The MFCC algorithm extracts the valuable features used to recognize speech. A sparse auto-encoder (SAE) neural network is used for classification, and a hidden Markov model (HMM) is used for the speech recognition decision. The network performance is optimized by applying the Harris Hawks optimization (HHO) algorithm to fine-tune the network parameters. The fine-tuned network can effectively recognize speech in a noisy environment.
9
Rehman A, Harouni M, Karimi M, Saba T, Bahaj SA, Awan MJ. Microscopic retinal blood vessels detection and segmentation using support vector machine and K-nearest neighbors. Microsc Res Tech 2022; 85:1899-1914. [PMID: 35037735] [DOI: 10.1002/jemt.24051]
Abstract
The retina is the innermost layer of tissue covering the rear of the eye and is captured in fundus images. Vessel detection and segmentation are useful in disease diagnosis: the retina's blood vessels can help diagnose conditions such as glaucoma, diabetic retinopathy, and high blood pressure. A mix of supervised and unsupervised strategies exists for detecting and segmenting blood vessels in retinal images. The tree structure of retinal blood vessels, their variable location, and their differing thickness make vessel detection difficult for machine learning algorithms. Since the green channel of retinal images conveys the most information about the vessels, it is used for microscopic vessel detection. The current research proposes a supervised algorithm for segmentation of retinal vessels, in which two enhancement stages based on filtering and comparative histograms are applied after pre-processing and image quality improvement. Statistical features of vessel tracking, maximum curvature, and curvelet coefficients are then extracted for each pixel. The extracted features are classified by a support vector machine and the k-nearest neighbors algorithm. Morphological operators then refine the classified image at the final stage to segment with higher accuracy. The Dice coefficient is used to evaluate the proposed method. The proposed approach outperforms other strategies, with an average of 92%.
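The Dice coefficient used for evaluation has a compact definition: twice the overlap between the predicted and ground-truth vessel masks, divided by their total size. A sketch with a toy mask (the masks and the error pattern are invented for illustration):

```python
import numpy as np

def dice(pred, truth):
    """Dice coefficient between two binary masks (1 = vessel pixel)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * inter / denom if denom else 1.0   # two empty masks agree fully

truth = np.zeros((8, 8), dtype=int)
truth[:, 3] = 1                      # ground truth: one vertical "vessel"
pred = truth.copy()
pred[0, 3] = 0                       # the segmentation misses one vessel pixel
pred[4, 5] = 1                       # and adds one false positive
score = dice(pred, truth)            # 2*7 / (8+8) = 0.875
```

Because the denominator counts both masks, Dice penalizes misses and false positives symmetrically, which suits thin structures like vessels better than raw pixel accuracy.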
Affiliation(s)
- Amjad Rehman
- Artificial Intelligence & Data Analytics Lab CCIS, Prince Sultan University, Riyadh, Saudi Arabia
- Majid Harouni
- Department of Computer Engineering, Dolatabad Branch, Islamic Azad University, Isfahan, Iran
- Mohsen Karimi
- Department of Bioelectrics and Biomedical Engineering, School of Advanced Technologies in Medicine, Isfahan University of Medical Sciences, Isfahan, Iran
- Tanzila Saba
- Artificial Intelligence & Data Analytics Lab, CCIS, Prince Sultan University, Riyadh, Saudi Arabia
- Saeed Ali Bahaj
- MIS Department, College of Business Administration, Prince Sattam Bin Abdulaziz University, Alkharj, Saudi Arabia
- Mazhar Javed Awan
- Department of Software Engineering, University of Management and Technology, Lahore, Pakistan
10
Saba T, Abunadi I, Sadad T, Khan AR, Bahaj SA. Optimizing the transfer-learning with pretrained deep convolutional neural networks for first stage breast tumor diagnosis using breast ultrasound visual images. Microsc Res Tech 2021; 85:1444-1453. [PMID: 34908213] [DOI: 10.1002/jemt.24008]
Abstract
Females account for approximately 50% of the total population worldwide, and many of them develop breast cancer. Computer-aided diagnosis frameworks could reduce the number of needless biopsies and the workload of radiologists. This research aims to detect benign and malignant tumors automatically using breast ultrasound (BUS) images. Accordingly, two pretrained deep convolutional neural network (CNN) models, AlexNet and DenseNet201, were employed for transfer learning on BUS images. A total of 697 BUS images containing benign and malignant tumors were preprocessed and classified using the transfer learning-based CNN models. The benign/malignant classification task achieved 92.8% accuracy using the DenseNet201 model. The results were compared with the state of the art on the benchmark dataset, and the proposed model outperforms prior work in accuracy for first-stage breast tumor diagnosis. The proposed model could help radiologists diagnose benign and malignant tumors swiftly by screening suspected patients.
Affiliation(s)
- Tanzila Saba
- Artificial Intelligence & Data Analytics Lab, CCIS, Prince Sultan University, Riyadh, 11586, Saudi Arabia
- Ibrahim Abunadi
- Artificial Intelligence & Data Analytics Lab, CCIS, Prince Sultan University, Riyadh, 11586, Saudi Arabia
- Tariq Sadad
- Department of Computer Science and Software Engineering, International Islamic University, Islamabad, 44000, Pakistan
- Amjad Rehman Khan
- Artificial Intelligence & Data Analytics Lab, CCIS, Prince Sultan University, Riyadh, 11586, Saudi Arabia
- Saeed Ali Bahaj
- MIS Department, College of Business Administration, Prince Sattam Bin Abdulaziz University, Alkharj, 11942, Saudi Arabia
11
Rehman A, Harouni M, Karchegani NHS, Saba T, Bahaj SA, Roy S. Identity verification using palm print microscopic images based on median robust extended local binary pattern features and k-nearest neighbor classifier. Microsc Res Tech 2021; 85:1224-1237. [PMID: 34904758] [DOI: 10.1002/jemt.23989]
Abstract
Automatic identity verification is one of the most critical and research-demanding areas. One of the most effective and reliable identity verification approaches uses unique human biological characteristics, that is, biometrics. Among all types of biometrics, palm print is recognized as one of the most accurate and reliable identity verification methods. However, this biometrics domain also has several critical challenges: image rotation, image displacement, changes in image scale, noise introduced by acquisition devices, region of interest (ROI) detection, and user error. For this purpose, a new identity verification method based on the median robust extended local binary pattern (MRELBP) is introduced in this study. In this system, after normalizing the images and extracting the ROI from the microscopic input image, features are extracted with the MRELBP algorithm. These features are then reduced in a dimensionality reduction step, and finally the feature vectors are classified using the k-nearest neighbor classifier. The microscopic images used in this study were selected from the IITD and CASIA datasets, and the identity verification rates for these two datasets without challenges were 97.2% and 96.6%, respectively. In addition, the computed detection rates remained broadly stable against changes such as salt-and-pepper noise up to 0.16, rotation up to 5°, displacement up to 6 pixels, and scale change up to 94%.
Affiliation(s)
- Amjad Rehman
- Artificial Intelligence & Data Analytics Lab CCIS, Prince Sultan University, Riyadh, 11586, Saudi Arabia
| | - Majid Harouni
- Department of Computer Engineering, Dolatabad Branch, Islamic Azad University, Isfahan, Iran
| | | | - Tanzila Saba
- Artificial Intelligence & Data Analytics Lab CCIS, Prince Sultan University, Riyadh, 11586, Saudi Arabia
| | - Saeed Ali Bahaj
- MIS Department College of Business Administration, Prince Sattam bin Abdulaziz University, Al-Kharj, Saudi Arabia
| | - Sudipta Roy
- Artificial Intelligence & Data Science Programme, JIO Institute, Navi Mumbai, Maharashtra, India
| |
Collapse
|
12
Awan MJ, Rahim MSM, Salim N, Rehman A, Nobanee H, Shabir H. Improved Deep Convolutional Neural Network to Classify Osteoarthritis from Anterior Cruciate Ligament Tear Using Magnetic Resonance Imaging. J Pers Med 2021; 11:1163. [PMID: 34834515] [PMCID: PMC8617867] [DOI: 10.3390/jpm11111163]
Abstract
An anterior cruciate ligament (ACL) tear is a partial or complete rupture of the ACL in the knee, especially common in athletes. There is a need to classify an ACL tear before it fully ruptures to avoid osteoarthritis. This research aims to identify ACL tears automatically and efficiently with a deep learning approach. A dataset was gathered consisting of 917 knee magnetic resonance images (MRI) from Clinical Hospital Centre Rijeka, Croatia. The dataset consists of three classes: non-injured, partial tear, and fully ruptured knee MRI. The study compares and evaluates two variants of convolutional neural networks (CNN): a standard CNN model of five layers and a customized CNN model of eleven layers. Eight different hyper-parameters were adjusted and tested on both variants. Our customized CNN model showed good results after a 25% random split using RMSprop and a learning rate of 0.001. For the standard CNN using the Adam optimizer with a learning rate of 0.001, the average accuracy, precision, sensitivity, specificity, and F1-score were 96.3%, 95%, 96%, 96.9%, and 95.6%, respectively. The customized CNN model, using an RMSprop optimizer with a learning rate of 0.001, achieved 98.6%, 98%, 98%, 98.5%, and 98% on the same measures. We also report results on the receiver operating characteristic curve and area under the curve (ROC AUC): the customized CNN model with the Adam optimizer and a learning rate of 0.001 achieved a ROC AUC of 0.99 over the three classes, the highest among all configurations. The model showed good results overall, and in the future it can be extended to other CNN architectures to detect and segment other knee structures such as the meniscus and cartilage.
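The five evaluation measures reported above all derive from the binary confusion matrix. A small sketch computing them from labels; the toy labels are invented, and for the paper's three-class setting these would be computed per class (one-vs-rest) and averaged:

```python
import numpy as np

def binary_metrics(y_true, y_pred):
    """Accuracy, precision, sensitivity (recall), specificity and F1-score
    from binary labels (1 = positive class, e.g. 'injured')."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))   # true positives
    tn = np.sum((y_true == 0) & (y_pred == 0))   # true negatives
    fp = np.sum((y_true == 0) & (y_pred == 1))   # false positives
    fn = np.sum((y_true == 1) & (y_pred == 0))   # false negatives
    acc = (tp + tn) / (tp + tn + fp + fn)
    prec = tp / (tp + fp)
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    f1 = 2 * prec * sens / (prec + sens)
    return acc, prec, sens, spec, f1

# hypothetical predictions over 10 scans
y_true = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 0, 0, 1, 0]
acc, prec, sens, spec, f1 = binary_metrics(y_true, y_pred)
```

Here tp=3, tn=5, fp=1, fn=1, giving accuracy 0.8, precision and sensitivity 0.75, and specificity 5/6.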
Affiliation(s)
- Mazhar Javed Awan
- School of Computing, Faculty of Engineering, Universiti Teknologi Malaysia, Skudai 81310, Malaysia
- Department of Software Engineering, University of Management and Technology, Lahore 54770, Pakistan
- Mohd Shafry Mohd Rahim
- School of Computing, Faculty of Engineering, Universiti Teknologi Malaysia, Skudai 81310, Malaysia
- Naomie Salim
- School of Computing, Faculty of Engineering, Universiti Teknologi Malaysia, Skudai 81310, Malaysia
- Amjad Rehman
- Artificial Intelligence and Data Analytics Research Laboratory, CCIS, Prince Sultan University, Riyadh 11586, Saudi Arabia
- Haitham Nobanee
- College of Business, Abu Dhabi University, P.O. Box 59911, Abu Dhabi 59911, United Arab Emirates
- Oxford Centre for Islamic Studies, University of Oxford, Oxford OX1 2J, UK
- School of Histories, Languages and Cultures, The University of Liverpool, Liverpool L69 3BX, UK
- Hassan Shabir
- Department of Software Engineering, University of Management and Technology, Lahore 54770, Pakistan
13
Amin J, Anjum MA, Sharif M, Rehman A, Saba T, Zahra R. Microscopic segmentation and classification of COVID-19 infection with ensemble convolutional neural network. Microsc Res Tech 2021; 85:385-397. [PMID: 34435702] [PMCID: PMC8646237] [DOI: 10.1002/jemt.23913]
Abstract
The detection of biological RNA from sputum has a comparatively poor positive rate in the initial/early stages of COVID-19, as per the World Health Organization. On computed tomography (CT), COVID-19 infection manifests a different morphological structure compared to healthy images. Diagnosis at an early stage can aid in the timely cure of patients, lowering the mortality rate. In this research, a three-phase model is proposed for COVID-19 detection. In Phase I, noise is removed from CT images using a denoising convolutional neural network (DnCNN). In Phase II, the actual lesion region is segmented from the enhanced CT images using deeplabv3 and ResNet-18. In Phase III, the segmented images are passed to a stacked sparse autoencoder (SSAE) deep learning model comprising two sparse autoencoders (SAEs) with selected hidden layers. The designed SSAE model is based on both SAE and softmax layers for COVID-19 classification. The proposed method is evaluated on actual patient data from Pakistan Ordnance Factories and on other public benchmark datasets acquired with different scanners/mediums. The proposed method achieved a global segmentation accuracy of 0.96 and a classification accuracy of 0.97.
Affiliation(s)
- Javeria Amin
- Department of Computer Science, University of Wah, Wah Cantt, Pakistan
- Muhammad Almas Anjum
- National University of Technology (NUTECH), Islamabad, Pakistan
- Muhammad Sharif
- Department of Computer Science, COMSATS University Islamabad Wah Campus, Wah Cantt, Pakistan
- Amjad Rehman
- Artificial Intelligence & Data Analytics Lab, CCIS Prince Sultan University, Riyadh, Saudi Arabia
- Tanzila Saba
- Artificial Intelligence & Data Analytics Lab, CCIS Prince Sultan University, Riyadh, Saudi Arabia
- Rida Zahra
- Department of Computer Science, University of Wah, Wah Cantt, Pakistan

14
Sajjad M, Ramzan F, Khan MUG, Rehman A, Kolivand M, Fati SM, Bahaj SA. Deep convolutional generative adversarial network for Alzheimer's disease classification using positron emission tomography (PET) and synthetic data augmentation. Microsc Res Tech 2021; 84:3023-3034. [PMID: 34245203] [DOI: 10.1002/jemt.23861]
Abstract
With the evolution of deep learning technologies, computer vision tasks have achieved tremendous success in the biomedical domain. Supervised deep learning training requires a large number of labeled examples, and assembling such labeled data sets is challenging; this scarcity of data makes it difficult to build and improve an automated disease diagnosis model. To synthesize data and improve diagnosis accuracy, we propose a novel approach that generates images for three stages of Alzheimer's disease, normal control (CN), mild cognitive impairment (MCI), and Alzheimer's disease (AD), using deep convolutional generative adversarial networks. The proposed model performs well in synthesizing brain positron emission tomography images for all three stages. Its performance is measured using a classification model, which achieved an accuracy of 72% on the synthetic images. We also report quantitative measures, namely the peak signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM): average PSNR scores of 82 for AD, 72 for CN, and 73 for MCI, and average SSIM scores of 25.6 for AD, 22.6 for CN, and 22.8 for MCI.
Affiliation(s)
- Muhammad Sajjad
- National Center of Artificial Intelligence (NCAI), Al-Khawarizmi Institute of Computer Science (KICS), University of Engineering and Technology (UET), Lahore, Pakistan
- Farheen Ramzan
- Department of Computer Science, University of Engineering and Technology (UET), Lahore, Pakistan
- Muhammad Usman Ghani Khan
- National Center of Artificial Intelligence (NCAI), Al-Khawarizmi Institute of Computer Science (KICS), University of Engineering and Technology (UET), Lahore, Pakistan; Department of Computer Science, University of Engineering and Technology (UET), Lahore, Pakistan
- Amjad Rehman
- Artificial Intelligence & Data Analytics (AIDA) Lab CCIS, Prince Sultan University, Riyadh, Saudi Arabia
- Mahyar Kolivand
- Department of Medicine, University of Liverpool, Liverpool, UK
- Suliman Mohamed Fati
- Artificial Intelligence & Data Analytics (AIDA) Lab CCIS, Prince Sultan University, Riyadh, Saudi Arabia
- Saeed Ali Bahaj
- MIS Department, College of Business Administration, Prince Sattam bin Abdulaziz University, Al-Kharj, Saudi Arabia

15
Saba T, Akbar S, Kolivand H, Ali Bahaj S. Automatic detection of papilledema through fundus retinal images using deep learning. Microsc Res Tech 2021; 84:3066-3077. [PMID: 34236733] [DOI: 10.1002/jemt.23865]
Abstract
Papilledema is a syndrome of the retina in which the optic nerve head swells due to elevated intracranial pressure. Its abnormalities, such as retinal nerve fiber layer (RNFL) opacification, may lead to blindness. These abnormalities can be seen in retinal images captured with a fundus camera. This paper presents a deep learning-based automated system that detects and grades papilledema using U-Net and DenseNet architectures. The proposed approach has two main stages. First, the optic disc and its surrounding area in the fundus retinal image are localized and cropped as input to DenseNet, which classifies the optic disc as papilledema or normal. Second, fundus images classified as papilledema by DenseNet are preprocessed with a Gabor filter and input to U-Net to obtain the segmented vascular network, from which the vessel discontinuity index (VDI) and vessel discontinuity index to disc proximity (VDIP) are calculated for grading. VDI and VDIP are standard parameters for assessing the severity and grade of papilledema. The proposed system is evaluated on 60 papilledema and 40 normal fundus images taken from the STARE dataset. The experimental results for papilledema classification through DenseNet are strong, with 98.63% sensitivity, 97.83% specificity, and 99.17% accuracy. Similarly, the grading results for mild versus severe papilledema through U-Net reach 99.82% sensitivity, 98.65% specificity, and 99.89% accuracy. This is the first reported effort at deep learning-based automated detection and grading of papilledema for clinical purposes.
Affiliation(s)
- Tanzila Saba
- Artificial Intelligence & Data Analytics (AIDA) Lab CCIS, Prince Sultan University, Riyadh, 11586, Saudi Arabia
- Shahzad Akbar
- Department of Computing, Riphah International University, Faisalabad Campus, Faisalabad, 38000, Pakistan
- Hoshang Kolivand
- School of Computer Science and Mathematics, Liverpool John Moores University, Liverpool, United Kingdom; School of Computing and Digital Technologies, Staffordshire University, Staffordshire, United Kingdom
- Saeed Ali Bahaj
- MIS Department, College of Business Administration, Prince Sattam bin Abdulaziz University, Al-Kharj, Saudi Arabia

16
Khan AR, Doosti F, Karimi M, Harouni M, Tariq U, Fati SM, Ali Bahaj S. Authentication through gender classification from iris images using support vector machine. Microsc Res Tech 2021; 84:2666-2676. [PMID: 33991003] [DOI: 10.1002/jemt.23816]
Abstract
Soft biometric information, such as gender, iris, and voice, can be helpful in various applications, such as security, authentication, and validation. The iris is a secure biometric with low forgery and error rates, and its highly distinctive features have been used for recognition over the last few decades. Iris recognition can be used both independently and as a component of secure recognition and authentication systems. Existing iris-based gender classification techniques have low accuracy rates as well as high computational complexity. Accordingly, this paper presents an authentication approach based on gender classification from iris images using a support vector machine (SVM), which responds well to sustained changes, together with Zernike and Legendre invariant moments and the histogram of oriented gradients. In this study, invariant moments are used to extract features from iris images, and the extracted descriptor attributes are combined through keycode fusion. An SVM is then employed for gender classification using the fused feature vector. The proposed approach is evaluated on the CVBL data set, and results are compared with state-of-the-art methods based on local binary patterns and Gabor filters. The approach achieved a 98% gender classification rate with low computational complexity and could be used as an authentication measure.
Affiliation(s)
- Amjad Rehman Khan
- Artificial Intelligence & Data Analytics Lab CCIS, Prince Sultan University, Riyadh, Saudi Arabia
- Fatemeh Doosti
- Department of Computer Engineering, Asharfi Isfahani University, Isfahan, Iran
- Mohsen Karimi
- Department of Computer Engineering, Dolatabad Branch, Islamic Azad University, Isfahan, Iran
- Majid Harouni
- Department of Computer Engineering, Dolatabad Branch, Islamic Azad University, Isfahan, Iran
- Usman Tariq
- College of Computer Engineering and Sciences, Prince Sattam bin Abdulaziz University, Alkharj, Saudi Arabia
- Suliman Mohamed Fati
- Artificial Intelligence & Data Analytics Lab CCIS, Prince Sultan University, Riyadh, Saudi Arabia
- Saeed Ali Bahaj
- MIS Department, College of Business Administration, Prince Sattam bin Abdulaziz University, Alkharj, Saudi Arabia

17
Sadad T, Khan AR, Hussain A, Tariq U, Fati SM, Bahaj SA, Munir A. Internet of medical things embedding deep learning with data augmentation for mammogram density classification. Microsc Res Tech 2021; 84:2186-2194. [PMID: 33908111] [DOI: 10.1002/jemt.23773]
Abstract
Females make up approximately half of the total population worldwide, and breast cancer (BC) is among the most common cancers affecting them. Computer-aided diagnosis (CAD) frameworks can help radiologists assess breast density (BD), which in turn supports precise BC detection. This research detects BD automatically from mammogram images using Internet of Medical Things (IoMT)-supported devices. Two pretrained deep convolutional neural network models, DenseNet201 and ResNet50, were applied through a transfer learning approach. A total of 322 mammogram images, comprising 106 fatty, 112 dense, and 104 glandular cases, were obtained from the Mammogram Image Analysis Society dataset. Preprocessing prunes out irrelevant regions and enhances the target regions. The DenseNet201 model achieved an overall BD classification accuracy of 90.47%. Such a framework helps identify BD more rapidly, assisting radiologists and patients without delay.
Affiliation(s)
- Tariq Sadad
- Department of Computer Science & Software Engineering, International Islamic University, Islamabad, Pakistan
- Amjad Rehman Khan
- Artificial Intelligence & Data Analytics Lab CCIS, Prince Sultan University, Riyadh, Saudi Arabia
- Ayyaz Hussain
- Department of Computer Science, Quaid-i-Azam University, Islamabad, Pakistan
- Usman Tariq
- College of Computer Engineering and Sciences, Prince Sattam bin Abdulaziz University, Alkharj, Saudi Arabia
- Suliman Mohamed Fati
- Artificial Intelligence & Data Analytics Lab CCIS, Prince Sultan University, Riyadh, Saudi Arabia
- Saeed Ali Bahaj
- MIS Department, College of Business Administration, Prince Sattam bin Abdulaziz University, Alkharj, Saudi Arabia
- Asim Munir
- Department of Computer Science & Software Engineering, International Islamic University, Islamabad, Pakistan

18
Khan AR, Khan S, Harouni M, Abbasi R, Iqbal S, Mehmood Z. Brain tumor segmentation using K-means clustering and deep learning with synthetic data augmentation for classification. Microsc Res Tech 2021; 84:1389-1399. [PMID: 33524220] [DOI: 10.1002/jemt.23694]
Abstract
Image processing plays a major role in neurologists' clinical diagnoses. Several types of imagery are used for diagnosis, tumor segmentation, and classification. Magnetic resonance imaging (MRI) is favored among all modalities due to its noninvasive nature and better representation of internal tumor information; indeed, early diagnosis may increase the chances of survival. However, manual delineation and classification of brain tumors from MRI is error-prone, time-consuming, and formidable. Consequently, this article presents a deep learning approach to classify brain tumors from MRI data to assist practitioners. The recommended method comprises three main phases: preprocessing, brain tumor segmentation using k-means clustering, and classification of tumors into their respective categories (benign/malignant) using a fine-tuned VGG19 (19-layer Visual Geometry Group) model. Moreover, for better classification accuracy, synthetic data augmentation is introduced to increase the data available for classifier training. The proposed approach was evaluated on the BraTS 2015 benchmark data set through rigorous experiments. The results endorse the effectiveness of the proposed strategy, which achieved better accuracy than previously reported state-of-the-art techniques.
Affiliation(s)
- Amjad Rehman Khan
- Artificial Intelligence and Data Analytics Lab, CCIS Prince Sultan University, Riyadh, Saudi Arabia
- Siraj Khan
- Department of Computer Science, Islamia College University, Peshawar, Pakistan
- Majid Harouni
- Department of Computer Engineering, Dolatabad Branch, Islamic Azad University, Isfahan, Iran
- Rashid Abbasi
- School of Information and Communication Engineering, University of Electronic Science and Technology of China, Sichuan, China
- Sajid Iqbal
- Department of Computer Science, Bahauddin Zakariya University, Multan, Pakistan
- Zahid Mehmood
- Department of Computer Engineering, University of Engineering and Technology, Taxila, Pakistan

19
Saba T, Abunadi I, Shahzad MN, Khan AR. Machine learning techniques to detect and forecast the daily total COVID-19 infected and deaths cases under different lockdown types. Microsc Res Tech 2021; 84:1462-1474. [PMID: 33522669] [PMCID: PMC8014446] [DOI: 10.1002/jemt.23702]
Abstract
COVID-19 has impacted the world in many ways, including loss of life, economic downturn, and social isolation. COVID-19 emerged from SARS-CoV-2, a highly infectious virus that caused a pandemic. Every country tried to control the spread of COVID-19 by imposing different types of lockdowns. There is therefore an urgent need to forecast daily confirmed infections and deaths under different lockdown types, in order to select the most appropriate lockdown strategies to control the intensity of the pandemic and reduce the burden on hospitals. Three types of lockdown (partial, herd, complete) are currently imposed in different countries. In this study, three countries from each lockdown type were studied by applying time-series and machine learning models, namely random forests, K-nearest neighbors, SVM, decision trees (DTs), polynomial regression, Holt-Winters, ARIMA, and SARIMA, to forecast daily confirmed infections and deaths due to COVID-19. The models' accuracy and effectiveness were evaluated using three error-based performance criteria. No single forecasting model could capture the trends of all data sets, owing to the varying nature of the data sets and lockdown types. The three top-ranked models were used to predict confirmed infections and deaths; the best-performing models were also adopted for out-of-sample prediction and obtained results very close to the actual cumulative infections and deaths due to COVID-19. This study proposes promising forecasting models and the best lockdown strategy to mitigate the casualties of COVID-19.
Affiliation(s)
- Tanzila Saba
- Artificial Intelligence and Data Analytics Lab, CCIS Prince Sultan University, Riyadh, Saudi Arabia
- Ibrahim Abunadi
- Artificial Intelligence and Data Analytics Lab, CCIS Prince Sultan University, Riyadh, Saudi Arabia
- Amjad Rehman Khan
- Artificial Intelligence and Data Analytics Lab, CCIS Prince Sultan University, Riyadh, Saudi Arabia

20
Rehman A. Light microscopic iris classification using ensemble multi-class support vector machine. Microsc Res Tech 2021; 84:982-991. [PMID: 33438285] [DOI: 10.1002/jemt.23659]
Abstract
Like other biometric systems such as fingerprint, face, and DNA, iris classification can assist law enforcement agencies in identifying humans by matching an individual's iris against iris data sets. However, iris classification is challenging in real environments due to the complex, variable texture of the human iris. Accordingly, this article presents an improved Oriented FAST and Rotated BRIEF descriptor with a bag-of-words model to extract distinct and robust features from the iris image, followed by an ensemble multi-class SVM for classification. The proposed methodology consists of four main steps: first, iris image normalization and enhancement; second, localization of the iris region; third, iris feature extraction; and finally, iris classification using an ensemble multi-class support vector machine. For preprocessing of the input images, histogram equalization, a Gaussian mask, and median filters are applied. The proposed technique is tested on two benchmark databases, CASIA-v1 and the Iris Image Database, and achieved higher accuracy than other existing techniques reported in the state of the art.
Affiliation(s)
- Amjad Rehman
- Artificial Intelligence and Data Analytics (AIDA) Lab, CCIS Prince Sultan University, Riyadh, Saudi Arabia

21
Sadad T, Rehman A, Munir A, Saba T, Tariq U, Ayesha N, Abbasi R. Brain tumor detection and multi-classification using advanced deep learning techniques. Microsc Res Tech 2021; 84:1296-1308. [PMID: 33400339] [DOI: 10.1002/jemt.23688]
Abstract
A brain tumor is an uncontrolled growth of brain cells and, if not detected at an early stage, can progress to brain cancer. Early brain tumor diagnosis plays a crucial role in treatment planning and patients' survival rates. Brain tumors have distinct forms, properties, and therapies; manual detection is therefore complicated, time-consuming, and vulnerable to error, and automated computer-assisted diagnosis at high precision is currently in demand. This article presents segmentation through a U-Net architecture with ResNet50 as a backbone on the Figshare data set, achieving an intersection over union (IoU) of 0.9504. Preprocessing and data augmentation were introduced to enhance the classification rate. Multi-classification of brain tumors is performed using evolutionary algorithms and reinforcement learning through transfer learning; other deep learning models, namely ResNet50, DenseNet201, MobileNetV2, and InceptionV3, are also applied. The results show that the proposed research framework performed better than the reported state of the art. The CNN models applied for tumor classification, MobileNetV2, InceptionV3, ResNet50, DenseNet201, and NASNet, attained accuracies of 91.8, 92.8, 92.9, 93.1, and 99.6%, respectively, with NASNet exhibiting the highest accuracy.
Affiliation(s)
- Tariq Sadad
- Department of Computer Science, University of Central Punjab, Lahore, Pakistan
- Amjad Rehman
- Artificial Intelligence & Data Analytics Lab, CCIS Prince Sultan University, Riyadh, Saudi Arabia
- Asim Munir
- Department of Computer Science and Software Engineering, International Islamic University, Islamabad, Pakistan
- Tanzila Saba
- Artificial Intelligence & Data Analytics Lab, CCIS Prince Sultan University, Riyadh, Saudi Arabia
- Usman Tariq
- College of Computer Engineering and Science, Prince Sattam bin Abdulaziz University, Alkharj, Saudi Arabia
- Noor Ayesha
- School of Clinical Medicine, Zhengzhou University, Zhengzhou, Henan, China
- Rashid Abbasi
- School of Information and Communication Engineering, University of Electronic Science and Technology of China, Chengdu, Sichuan, China

22
Saba T. Computer vision for microscopic skin cancer diagnosis using handcrafted and non-handcrafted features. Microsc Res Tech 2021; 84:1272-1283. [PMID: 33399251] [DOI: 10.1002/jemt.23686]
Abstract
The skin covers the entire body and is its largest organ. Skin cancer is one of the most dreadful cancers and is primarily triggered by sensitivity to ultraviolet rays from the sun. Melanoma is the riskiest form, although it can start in several different ways, and patients are often unaware of how to recognize malignant skin growth at the initial stage. The literature shows that various handcrafted and automatically learned deep features are employed to diagnose skin cancer using traditional machine learning and deep learning techniques. The current research presents a comparison of skin cancer diagnosis techniques using handcrafted and non-handcrafted features. Additionally, clinical features are explored in the process of skin cancer detection, such as the Menzies method; seven-point detection; asymmetry, border, color, and diameter; visual textures (GRC); local binary patterns; Gabor filters; Markov random fields; fractal dimension; and oriented histograms. Several parameters, such as the Jaccard index, accuracy, Dice coefficient, precision, sensitivity, and specificity, are compared on benchmark data sets to assess the reported techniques. Finally, publicly available skin cancer data sets are described and the remaining issues are highlighted.
Affiliation(s)
- Tanzila Saba
- Artificial Intelligence & Data Analytics Lab, CCIS Prince Sultan University, Riyadh, Saudi Arabia

23
Rehman A, Khan MA, Saba T, Mehmood Z, Tariq U, Ayesha N. Microscopic brain tumor detection and classification using 3D CNN and feature selection architecture. Microsc Res Tech 2020; 84:133-149. [PMID: 32959422] [DOI: 10.1002/jemt.23597]
Abstract
A brain tumor is one of the most dreadful forms of cancer and has caused a huge number of deaths among children and adults in the past few years. According to WHO figures, about 700,000 people are living with a brain tumor and around 86,000 were diagnosed in 2019, while the total number of deaths due to brain tumors in 2019 is 16,830 and the average survival rate is 35%. Therefore, automated techniques are needed to grade brain tumors precisely from MRI scans. In this work, a new deep learning-based method is proposed for microscopic brain tumor detection and tumor type classification. A 3D convolutional neural network (CNN) architecture is designed in the first step to extract the brain tumor, and the extracted tumors are passed to a pretrained CNN model for feature extraction. The extracted features are passed to a correlation-based selection method, which outputs the best features. These selected features are validated through a feed-forward neural network for final classification. Three BraTS datasets (2015, 2017, and 2018) are utilized for experiments and validation, achieving accuracies of 98.32, 96.97, and 92.67%, respectively. A comparison with existing techniques shows that the proposed design yields comparable accuracy.
Affiliation(s)
- Amjad Rehman
- Artificial Intelligence & Data Analytics Lab CCIS, Prince Sultan University, Riyadh, Saudi Arabia
- Tanzila Saba
- Artificial Intelligence & Data Analytics Lab CCIS, Prince Sultan University, Riyadh, Saudi Arabia
- Zahid Mehmood
- Department of Computer Engineering, University of Engineering and Technology, Taxila, Pakistan
- Usman Tariq
- College of Computer Engineering and Science, Prince Sattam bin Abdulaziz University, Saudi Arabia
- Noor Ayesha
- School of Clinical Medicine, Zhengzhou University, Zhengzhou, China

24
Saba T. Recent advancement in cancer detection using machine learning: Systematic survey of decades, comparisons and challenges. J Infect Public Health 2020; 13:1274-1289. [DOI: 10.1016/j.jiph.2020.06.033]
25

26
Rehman A, Khan MA, Mehmood Z, Saba T, Sardaraz M, Rashid M. Microscopic melanoma detection and classification: A framework of pixel-based fusion and multilevel features reduction. Microsc Res Tech 2020; 83:410-423. [PMID: 31898863] [DOI: 10.1002/jemt.23429]
Abstract
The number of patients diagnosed with melanoma is rising drastically, and melanoma contributes many deaths annually among young people. Approximately 192,310 new cases of skin cancer were diagnosed in 2019, which underlines the importance of automated diagnosis systems. Accordingly, this article presents an automated method for skin lesion detection and recognition using pixel-based fusion of seed-segmented images and multilevel feature reduction. The proposed method involves four key steps: (a) a mean-based function feeds input to top-hat and bottom-hat filters, which are then fused for contrast stretching; (b) lesions are segmented with seed region growing and a graph-cut method, and both segmented lesions are combined through pixel-based fusion; (c) multilevel features, namely histogram of oriented gradients (HOG), speeded-up robust features (SURF), and color, are extracted and concatenated; and (d) finally, variance precise entropy-based feature reduction and classification are performed through an SVM with a cubic kernel function. Two experiments are performed for evaluation. Segmentation is evaluated on PH2, ISBI2016, and ISIC2017 with accuracies of 95.86, 94.79, and 94.92%, respectively, and classification on PH2 and ISBI2016 with accuracies of 98.20 and 95.42%, respectively. These results are outstanding compared with current techniques reported in the state of the art, which demonstrates the validity of the proposed method.
Affiliation(s)
- Amjad Rehman
- Artificial Intelligence and Data Analytics (AIDA) Lab, CCIS Prince Sultan University, Riyadh, Saudi Arabia
- Zahid Mehmood
- Department of Computer Engineering, University of Engineering and Technology, Taxila, Pakistan
- Tanzila Saba
- Artificial Intelligence and Data Analytics (AIDA) Lab, CCIS Prince Sultan University, Riyadh, Saudi Arabia
- Muhammad Sardaraz
- Department of Computer Science, COMSATS University Islamabad, Attock, Pakistan
- Muhammad Rashid
- Department of Computer Engineering, Umm Al-Qura University, Makkah, Saudi Arabia

27
Saba T. Automated lung nodule detection and classification based on multiple classifiers voting. Microsc Res Tech 2019; 82:1601-1609. [DOI: 10.1002/jemt.23326]
Affiliation(s)
- Tanzila Saba
- College of Computer and Information Sciences, Prince Sultan University, Riyadh, Saudi Arabia

28
Khan MA, Lali IU, Rehman A, Ishaq M, Sharif M, Saba T, Zahoor S, Akram T. Brain tumor detection and classification: A framework of marker-based watershed algorithm and multilevel priority features selection. Microsc Res Tech 2019; 82:909-922. [PMID: 30801840] [DOI: 10.1002/jemt.23238]
Abstract
Brain tumor identification using magnetic resonance images (MRI) is an important research domain in the field of medical imaging. Computerized techniques help doctors diagnose and treat brain cancer. In this article, an automated system is developed for tumor extraction and classification from MRI, based on marker-based watershed segmentation and feature selection. Five primary steps are involved in the proposed system: tumor contrast enhancement, tumor extraction, multimodel feature extraction, feature selection, and classification. A gamma contrast stretching approach is implemented to improve tumor contrast, and segmentation is then done using a marker-based watershed algorithm. Shape, texture, and point features are extracted in the next step, and only the top-ranked 70% of features are selected through a chi-square max conditional priority feature approach. The selected features are fused using serial-based concatenation before classification with a support vector machine. All experiments are performed on three data sets: Harvard, BraTS 2013, and a privately collected MR image data set. Simulation results clearly reveal that the proposed system outperforms existing methods with greater precision and accuracy.
Affiliation(s)
- Muhammad A Khan
- Department of Computer Science and Engineering, HITEC University Museum Road, Taxila, Pakistan
- Ikram U Lali
- Department of Computer Science, University of Gujrat, Gujrat, Pakistan
- Amjad Rehman
- College of Business Administration, Al Yamamah University, Riyadh 11512, Saudi Arabia
- Mubashar Ishaq
- Department of Computer Science, University of Gujrat, Gujrat, Pakistan
- Muhammad Sharif
- Department of Computer Science, COMSATS University Islamabad, Wah Cantt, Pakistan
- Tanzila Saba
- College of Computer and Information Sciences, Prince Sultan University, Riyadh, Saudi Arabia
- Saliha Zahoor
- Department of Computer Science, University of Gujrat, Gujrat, Pakistan
- Tallha Akram
- Department of EE, COMSATS University Islamabad, Wah Cantt, Pakistan

29
Iqbal S, Ghani Khan MU, Saba T, Mehmood Z, Javaid N, Rehman A, Abbasi R. Deep learning model integrating features and novel classifiers fusion for brain tumor segmentation. Microsc Res Tech 2019; 82:1302-1315. [DOI: 10.1002/jemt.23281] [Citation(s) in RCA: 57] [Impact Index Per Article: 11.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/21/2019] [Revised: 03/24/2019] [Accepted: 04/12/2019] [Indexed: 01/09/2023]
Affiliation(s)
- Sajid Iqbal, Department of Computer Science, Bahauddin Zakariya University, Multan, Pakistan; Department of Computer Science and Engineering, University of Engineering and Technology, Lahore, Pakistan
- Muhammad U. Ghani Khan, Department of Computer Science and Engineering, University of Engineering and Technology, Lahore, Pakistan
- Tanzila Saba, College of Computer and Information Sciences, Prince Sultan University, Riyadh, Saudi Arabia
- Zahid Mehmood, Department of Computer Engineering, University of Engineering and Technology, Taxila, Pakistan
- Nadeem Javaid, Department of Computer Science, COMSATS University Islamabad, Pakistan
- Amjad Rehman, College of Computer and Information Systems, Al Yamamah University, Riyadh, Saudi Arabia
- Rashid Abbasi, School of Computer and Technology, Anhui University, Hefei, China
|
30
|
Abbas N, Saba T, Rehman A, Mehmood Z, Javaid N, Tahir M, Khan NU, Ahmed KT, Shah R. Plasmodium species aware based quantification of malaria parasitemia in light microscopy thin blood smear. Microsc Res Tech 2019; 82:1198-1214. [DOI: 10.1002/jemt.23269] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/09/2018] [Revised: 02/19/2019] [Accepted: 03/15/2019] [Indexed: 01/03/2023]
Affiliation(s)
- Naveed Abbas, Department of Computer Science, Islamia College Peshawar, KPK, Pakistan
- Tanzila Saba, College of Computer and Information Sciences, Prince Sultan University, Riyadh, Saudi Arabia
- Amjad Rehman, College of Business Administration, Al Yamamah University, Riyadh, Saudi Arabia
- Zahid Mehmood, Department of Computer Engineering, University of Engineering and Technology, Taxila, Pakistan
- Nadeem Javaid, Department of Computer Science, COMSATS University Islamabad, Pakistan
- Muhammad Tahir, Department of Computer Science, COMSATS University Islamabad, Attock Campus, Pakistan
- Roaider Shah, Department of Computer Science, Islamia College Peshawar, KPK, Pakistan
|
31
|
Tahir B, Iqbal S, Usman Ghani Khan M, Saba T, Mehmood Z, Anjum A, Mahmood T. Feature enhancement framework for brain tumor segmentation and classification. Microsc Res Tech 2019; 82:803-811. [DOI: 10.1002/jemt.23224] [Citation(s) in RCA: 46] [Impact Index Per Article: 9.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/10/2018] [Revised: 11/20/2018] [Accepted: 12/29/2018] [Indexed: 11/08/2022]
Affiliation(s)
- Bilal Tahir, Department of Computer Science and Engineering, University of Engineering and Technology, Lahore, Pakistan
- Sajid Iqbal, Department of Computer Science and Engineering, University of Engineering and Technology, Lahore, Pakistan; Department of Computer Science, Bahauddin Zakariya University, Multan, Pakistan
- M. Usman Ghani Khan, Department of Computer Science and Engineering, University of Engineering and Technology, Lahore, Pakistan
- Tanzila Saba, Department of Information Systems, College of Computer and Information Sciences, Prince Sultan University, Riyadh, Saudi Arabia
- Zahid Mehmood, Department of Software Engineering, University of Engineering and Technology, Taxila, Pakistan
- Adeel Anjum, Department of Computer Science, COMSATS University Islamabad, Pakistan
- Toqeer Mahmood, Department of Computer Science, University of Engineering and Technology, Taxila, Pakistan
|
32
|
Khan MA, Akram T, Sharif M, Saba T, Javed K, Lali IU, Tanik UJ, Rehman A. Construction of saliency map and hybrid set of features for efficient segmentation and classification of skin lesion. Microsc Res Tech 2019; 82:741-763. [PMID: 30768826 DOI: 10.1002/jemt.23220] [Citation(s) in RCA: 55] [Impact Index Per Article: 11.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/09/2018] [Revised: 11/09/2018] [Accepted: 12/29/2018] [Indexed: 01/22/2023]
Affiliation(s)
- Muhammad Attique Khan, Department of Computer Science and Engineering, HITEC University, Museum Road, Taxila, Pakistan
- Tallha Akram, Department of Electrical Engineering, COMSATS University Islamabad, Wah Campus, Pakistan
- Muhammad Sharif, Department of Computer Science, COMSATS University Islamabad, Wah Campus, Pakistan
- Tanzila Saba, College of Computer and Information Sciences, Prince Sultan University, Riyadh, Saudi Arabia
- Kashif Javed, Department of Robotics, SMME, NUST, Islamabad, Pakistan
- Ikram Ullah Lali, Department of Computer Science, University of Gujrat, Gujrat, Pakistan
- Urcun John Tanik, Computer Science and Information Systems, Texas A&M University-Commerce, USA
- Amjad Rehman, Department of Information Systems, Al Yamamah University, Riyadh, Saudi Arabia
|
33
|
Rehman A, Abbas N, Saba T, Rahman SIU, Mehmood Z, Kolivand H. Classification of acute lymphoblastic leukemia using deep learning. Microsc Res Tech 2018; 81:1310-1317. [DOI: 10.1002/jemt.23139] [Citation(s) in RCA: 128] [Impact Index Per Article: 21.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/14/2018] [Revised: 08/25/2018] [Accepted: 09/01/2018] [Indexed: 11/11/2022]
Affiliation(s)
- Amjad Rehman, College of Computer and Information Systems, Al Yamamah University, Riyadh, Saudi Arabia
- Naveed Abbas, Department of Computer Science, Islamia College University, Peshawar, Pakistan
- Tanzila Saba, College of Computer and Information Sciences, Prince Sultan University, Riyadh, Saudi Arabia
- Zahid Mehmood, Department of Software Engineering, University of Engineering and Technology, Taxila, Pakistan
- Hoshang Kolivand, Department of Computer Science, Liverpool John Moores University, Liverpool, United Kingdom
|
34
|
Iqbal S, Ghani MU, Saba T, Rehman A. Brain tumor segmentation in multi-spectral MRI using convolutional neural networks (CNN). Microsc Res Tech 2018; 81:419-427. [DOI: 10.1002/jemt.22994] [Citation(s) in RCA: 116] [Impact Index Per Article: 19.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/02/2017] [Revised: 12/14/2017] [Accepted: 01/03/2018] [Indexed: 11/12/2022]
Affiliation(s)
- Sajid Iqbal, Department of Computer Science and Engineering, University of Engineering and Technology, Lahore, Pakistan; Department of Computer Science, Bahauddin Zakariya University, Multan, Pakistan
- M. Usman Ghani, Department of Computer Science and Engineering, University of Engineering and Technology, Lahore, Pakistan
- Tanzila Saba, College of Computer and Information Sciences, Prince Sultan University, Riyadh 11586, Saudi Arabia
- Amjad Rehman, College of Computer and Information Systems, Al Yamamah University, Riyadh 11512, Saudi Arabia
|