1. Khan S, Khan MA, Noor A, Fareed K. SASAN: ground truth for the effective segmentation and classification of skin cancer using biopsy images. Diagnosis (Berl) 2024;11:283-294. PMID: 38487874. DOI: 10.1515/dx-2024-0012.
Abstract
OBJECTIVES Early skin cancer diagnosis can save lives; however, traditional methods rely on expert knowledge and can be time-consuming, which calls for automated systems using machine learning and deep learning. Existing datasets often focus on flat skin surfaces, neglecting more complex cases on organs or with nearby lesions. METHODS This work addresses this gap by proposing a skin cancer diagnosis methodology using a dataset named ASAN that covers diverse skin cancer cases but suffers from noisy features. To overcome the noisy-feature problem, a segmentation dataset named SASAN is introduced, focusing on Region of Interest (ROI) extraction-based classification. This allows models to concentrate on critical areas within the images while ignoring the noisy features. RESULTS Various deep learning segmentation models such as UNet, LinkNet, PSPNet, and FPN were trained on the SASAN dataset to perform segmentation-based ROI extraction. Classification was then performed with and without ROI extraction. The results demonstrate that ROI extraction significantly improves classification performance, implying that SASAN is effective for evaluating performance on complex skin cancer cases. CONCLUSIONS This study highlights the importance of expanding datasets to include challenging scenarios and of developing better segmentation methods to enhance automated skin cancer diagnosis. The SASAN dataset serves as a valuable tool for researchers aiming to improve such systems and ultimately contribute to better diagnostic outcomes.
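The ROI-extraction step this abstract describes — segment first, then classify only what the mask keeps — can be sketched with a minimal masking operation. This is a NumPy-only illustration: the hand-made mask stands in for the output of a trained segmentation model such as UNet, and the image is a toy array, not a biopsy image.

```python
import numpy as np

def apply_roi_mask(image, mask):
    """Zero out pixels outside the predicted lesion mask so a downstream
    classifier sees only the region of interest, not the noisy background."""
    if image.shape[:2] != mask.shape:
        raise ValueError("image and mask must share spatial dimensions")
    return image * mask[..., None]  # broadcast the binary mask over channels

# Toy example: a 4x4 RGB "image" and a mask covering its top-left corner.
image = np.ones((4, 4, 3), dtype=np.float32)
mask = np.zeros((4, 4), dtype=np.float32)
mask[:2, :2] = 1.0

roi = apply_roi_mask(image, mask)  # only the masked 2x2 block stays non-zero
```

In a real pipeline the masked (or cropped) `roi` array, rather than the full image, is what gets fed to the classification network.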
Affiliation(s)
- Sajid Khan
- Department of Computer Science, School of Engineering, Central Asian University, Tashkent, Uzbekistan
- Muhammad Asif Khan
- Department of Computer Science, Sukkur IBA University, Sukkur, Pakistan
- Adeeb Noor
- Department of Information Technology, Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah, Saudi Arabia
- Kainat Fareed
- Department of Computer Science, Sukkur IBA University, Sukkur, Pakistan
2. Rai HM, Yoo J, Atif Moqurrab S, Dashkevych S. Advancements in traditional machine learning techniques for detection and diagnosis of fatal cancer types: comprehensive review of biomedical imaging datasets. Measurement 2024;225:114059. DOI: 10.1016/j.measurement.2023.114059.
3. Rai HM, Yoo J. A comprehensive analysis of recent advancements in cancer detection using machine learning and deep learning models for improved diagnostics. J Cancer Res Clin Oncol 2023;149:14365-14408. PMID: 37540254. DOI: 10.1007/s00432-023-05216-w.
Abstract
PURPOSE Millions of people lose their lives to fatal diseases. Cancer is among the most fatal and may result from obesity, alcohol consumption, infections, ultraviolet radiation, smoking, and unhealthy lifestyles. Cancer is abnormal and uncontrolled tissue growth inside the body that may spread to body parts other than where it originated. It is therefore essential to diagnose cancer at an early stage to provide correct and timely treatment; manual diagnosis and diagnostic error can cost patients their lives, so much research is ongoing into automatic and accurate early detection. METHODS In this paper, we present a comparative analysis of recent advancements in the detection of various cancer types using traditional machine learning (ML) and deep learning (DL) models. The study covers four cancer types (brain, lung, skin, and breast) and their detection using ML and DL techniques. The review includes a total of 130 pieces of literature, of which 56 concern ML-based and 74 DL-based cancer detection techniques. Only peer-reviewed research papers published in the recent five-year span (2018-2023) were included, analyzed by year of publication, features utilized, best model, dataset/images utilized, and best accuracy. ML- and DL-based techniques are reviewed separately, with accuracy as the common performance metric to maintain homogeneity while comparing classifier efficiency. RESULTS Among all the reviewed literature, DL techniques achieved the highest accuracy of 100%, while ML techniques reached 99.89%. The lowest accuracies achieved with DL and ML approaches were 70% and 75.48%, respectively. The difference in accuracy between the highest- and lowest-performing models is about 28.8% for skin cancer detection. In addition, the key findings and challenges for each type of cancer detection using ML and DL techniques are presented, along with a comparative analysis between the best- and worst-performing models, for future research purposes. Although the analysis is based on accuracy and various other parameters, the results demonstrate significant scope for improvement in classification efficiency. CONCLUSION Both ML and DL techniques hold promise for the early detection of various cancer types. However, the study identifies specific challenges that must be addressed before these techniques can be widely implemented in clinical settings. The presented results offer valuable guidance for future research in cancer detection, emphasizing the need for continued advancements in ML- and DL-based approaches to improve diagnostic accuracy and ultimately save more lives.
Affiliation(s)
- Hari Mohan Rai
- School of Computing, Gachon University, 1342 Seongnam-daero, Sujeong-gu, Seongnam-si, 13120, Gyeonggi-do, Republic of Korea
- Joon Yoo
- School of Computing, Gachon University, 1342 Seongnam-daero, Sujeong-gu, Seongnam-si, 13120, Gyeonggi-do, Republic of Korea
4. White Blood Cells Classification Using Entropy-Controlled Deep Features Optimization. Diagnostics (Basel) 2023;13:352. PMID: 36766457. PMCID: PMC9914384. DOI: 10.3390/diagnostics13030352.
Abstract
White blood cells (WBCs) constitute an essential part of the human immune system. The correct identification of WBC subtypes is critical in the diagnosis of leukemia, a kind of blood cancer defined by the aberrant proliferation of malignant leukocytes in the bone marrow. The traditional approach of classifying WBCs, which involves the visual analysis of blood smear images, is labor-intensive and error-prone. Modern approaches based on deep convolutional neural networks provide significant results for this type of image categorization, but have high processing and implementation costs owing to very large feature sets. This paper presents an improved hybrid approach for efficient WBC subtype classification. First, optimum deep features are extracted from enhanced and segmented WBC images using transfer learning on pre-trained deep neural networks, i.e., DenseNet201 and Darknet53. The serially fused feature vector is then filtered using an entropy-controlled marine predator algorithm (ECMPA). This nature-inspired meta-heuristic optimization algorithm selects the most dominant features while discarding the weak ones. The reduced feature vector is classified with multiple baseline classifiers with various kernel settings. The proposed methodology is validated on a public dataset of 5000 synthetic images that correspond to five different subtypes of WBCs. The system achieves an overall average accuracy of 99.9% with more than 95% reduction in the size of the feature vector. The feature selection algorithm also demonstrates better convergence performance as compared to classical meta-heuristic algorithms. The proposed method also demonstrates a comparable performance with several existing works on WBC classification.
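The fusion-then-selection pipeline in this abstract can be illustrated with a small sketch. Note the hedges: the "deep features" below are random stand-ins for DenseNet201/Darknet53 outputs, and the scoring is a plain per-feature Shannon-entropy ranking, not the paper's entropy-controlled marine predator algorithm (ECMPA); only the serial-fusion and filter-by-score structure is the same.

```python
import numpy as np

def serial_fuse(f1, f2):
    """Serially fuse two per-sample feature matrices by concatenation."""
    return np.concatenate([f1, f2], axis=1)

def entropy_rank_select(features, k):
    """Keep the k features with the highest Shannon entropy across samples
    (a simplified stand-in for entropy-controlled selection)."""
    scores = []
    for col in features.T:
        hist, _ = np.histogram(col, bins=10)
        p = hist / hist.sum()
        p = p[p > 0]
        scores.append(-(p * np.log2(p)).sum())
    keep = np.argsort(scores)[::-1][:k]   # highest-entropy columns
    return features[:, np.sort(keep)]

rng = np.random.default_rng(0)
deep_a = rng.normal(size=(50, 8))   # stand-in for one network's features
deep_b = rng.normal(size=(50, 8))   # stand-in for the other network's features
fused = serial_fuse(deep_a, deep_b)
reduced = entropy_rank_select(fused, k=4)
```

The reduced matrix is what would then be passed to the baseline classifiers; the >95% reduction reported in the paper corresponds to choosing a small `k` relative to the fused dimensionality.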
5. Naz Z, Khan MUG, Saba T, Rehman A, Nobanee H, Bahaj SA. An Explainable AI-Enabled Framework for Interpreting Pulmonary Diseases from Chest Radiographs. Cancers (Basel) 2023;15:314. PMID: 36612309. PMCID: PMC9818469. DOI: 10.3390/cancers15010314.
Abstract
Explainable Artificial Intelligence is a key component of artificially intelligent systems that aims to explain classification results, which is essential for automatic disease diagnosis in healthcare. The human respiratory system is badly affected by various pulmonary diseases, and automatic classification with explanation can be used to detect them. In this paper, we introduce a CNN-based transfer-learning approach for automatically classifying and explaining pulmonary diseases, i.e., edema, tuberculosis, nodules, and pneumonia, from chest radiographs. Among these, pneumonia caused by COVID-19 is deadly; therefore, COVID-19 radiographs are used for the explanation task. We trained a ResNet50 network extensively on the COVID-CT and COVIDNet datasets and used the interpretable model LIME to explain the classification results. LIME highlights the input image features that are important for generating the classification result. We evaluated the explanations against images highlighted by radiologists and found that our model highlights and explains the same regions. Our fine-tuned model achieved improved classification accuracies of 93% and 97% on the two datasets, respectively. The analysis indicates that this research not only improves classification results but also provides explanations of pulmonary diseases using advanced deep-learning methods, which would assist radiologists with automatic disease detection and explanations that support clinical decisions and early diagnosis and treatment of pulmonary diseases.
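The "which regions mattered" question that LIME answers can be caricatured with a simple occlusion map: slide a blank patch over the image and record how much the model's score drops. This is not the LIME algorithm (which fits a local surrogate model over superpixel perturbations), and the `toy_score` "model" below is a hypothetical stand-in; the sketch only shows the perturbation-based intuition.

```python
import numpy as np

def occlusion_map(image, score_fn, patch=4):
    """Slide a zero patch over the image; the score drop at each position
    is a crude per-region importance estimate."""
    base = score_fn(image)
    h, w = image.shape
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = 0.0
            heat[i // patch, j // patch] = base - score_fn(occluded)
    return heat

# Hypothetical "model": score is the mean intensity of the top-left quadrant,
# so occluding that quadrant should produce the largest score drop.
def toy_score(img):
    return img[:4, :4].mean()

image = np.ones((8, 8))
heat = occlusion_map(image, toy_score, patch=4)
```

Regions with large `heat` values are the ones the model relied on, which is the same kind of evidence the paper compares against radiologists' highlighted regions.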
Affiliation(s)
- Zubaira Naz
- Department of Computer Science, University of Engineering and Technology Lahore, Lahore 54890, Pakistan
- Muhammad Usman Ghani Khan
- Department of Computer Science, University of Engineering and Technology Lahore, Lahore 54890, Pakistan
- Tanzila Saba
- Artificial Intelligence & Data Analytics Lab, CCIS, Prince Sultan University, Riyadh 11586, Saudi Arabia
- Amjad Rehman (corresponding author)
- Artificial Intelligence & Data Analytics Lab, CCIS, Prince Sultan University, Riyadh 11586, Saudi Arabia
- Haitham Nobanee (corresponding author)
- College of Business, Abu Dhabi University, Abu Dhabi 59911, United Arab Emirates
- Oxford Center for Islamic Studies, University of Oxford, Oxford OX3 0EE, UK
- Faculty of Humanities & Social Sciences, University of Liverpool, Liverpool L69 7WZ, UK
- Saeed Ali Bahaj
- MIS Department, College of Business Administration, Prince Sattam bin Abdulaziz University, Alkharj 11942, Saudi Arabia
6. Attique Khan M, Alhaisoni M, Nazir M, Alqahtani A, Binbusayyis A, Alsubai S, Nam Y, Kang BG. A Healthcare System for COVID19 Classification Using Multi-Type Classical Features Selection. Comput Mater Contin 2023;74:1393-1412. DOI: 10.32604/cmc.2023.032064.
7. Clustering based lung lobe segmentation and optimization based lung cancer classification using CT images. Biomed Signal Process Control 2022. DOI: 10.1016/j.bspc.2022.103986.
8. Benign-malignant classification of pulmonary nodule with deep feature optimization framework. Biomed Signal Process Control 2022. DOI: 10.1016/j.bspc.2022.103701.
9. Painuli D, Bhardwaj S, Köse U. Recent advancement in cancer diagnosis using machine learning and deep learning techniques: a comprehensive review. Comput Biol Med 2022;146:105580. PMID: 35551012. DOI: 10.1016/j.compbiomed.2022.105580.
Abstract
As the second leading cause of mortality worldwide, cancer has been identified as a perilous disease for human beings, and advanced-stage diagnosis may not do much to safeguard patients from death. Efforts to provide a sustainable architecture with proven cancer-prevention estimates and provision for early diagnosis are therefore the need of the hour. The advent of machine learning methods has enriched the cancer diagnosis area with greater efficiency and lower error rates than humans. A significant revolution has been witnessed in the development of machine learning and deep learning assisted systems for the segmentation and classification of various cancers over the past decade. This paper reviews the detection of various cancer types across different data modalities using machine learning and deep learning methods, along with the feature extraction techniques and benchmark datasets used in studies from the recent six years. Its focus is to review, analyse, classify, and address recent developments in the detection and diagnosis of six cancer types, i.e., breast, lung, liver, skin, brain, and pancreatic cancer. Various state-of-the-art techniques are clustered into groups, and results are examined through key performance indicators such as accuracy, area under the curve, precision, sensitivity, and Dice score on benchmark datasets; the paper concludes with future research challenges.
Affiliation(s)
- Deepak Painuli
- Department of Computer Science and Engineering, Gurukula Kangri Vishwavidyalaya, Haridwar, India
- Suyash Bhardwaj
- Department of Computer Science and Engineering, Gurukula Kangri Vishwavidyalaya, Haridwar, India
- Utku Köse
- Department of Computer Engineering, Suleyman Demirel University, Isparta, Turkey
10. Alyami J, Khan AR, Bahaj SA, Fati SM. Microscopic handcrafted features selection from computed tomography scans for early stage lungs cancer diagnosis using hybrid classifiers. Microsc Res Tech 2022;85:2181-2191. DOI: 10.1002/jemt.24075.
Affiliation(s)
- Jabar Alyami
- Department of Diagnostic Radiology, Faculty of Applied Medical Sciences, King Abdulaziz University, Jeddah, Saudi Arabia
- Imaging Unit, King Fahd Medical Research Center, King Abdulaziz University, Jeddah, Saudi Arabia
- Amjad Rehman Khan
- Artificial Intelligence & Data Analytics Lab, CCIS, Prince Sultan University, Riyadh, Saudi Arabia
- Saeed Ali Bahaj
- MIS Department, College of Business Administration, Prince Sattam bin Abdulaziz University, Alkharj, Saudi Arabia
- Suliman Mohamed Fati
- Artificial Intelligence & Data Analytics Lab, CCIS, Prince Sultan University, Riyadh, Saudi Arabia
11. Rehman A, Harouni M, Karimi M, Saba T, Bahaj SA, Awan MJ. Microscopic retinal blood vessels detection and segmentation using support vector machine and K-nearest neighbors. Microsc Res Tech 2022;85:1899-1914. PMID: 35037735. DOI: 10.1002/jemt.24051.
Abstract
The retina is the deepest layer of tissue covering the rear of the eye and is recorded by fundus images. Vessel detection and segmentation are useful in disease diagnosis: the retina's blood vessels can help diagnose maladies such as glaucoma, diabetic retinopathy, and high blood pressure. A mix of supervised and unsupervised strategies exists for the detection and segmentation of blood vessels in images. The tree structure of retinal blood vessels, their random locations, and their varying thickness make vessel detection difficult for machine learning algorithms. Since the green band of retinal images conveys more information about the vessels, it is used for microscopic vessel detection. The current research proposes a supervised algorithm for segmentation of retinal vessels, in which two enhancement stages based on filtering and comparative histograms are applied after pre-processing and image quality improvement. Statistical features of vessel tracking, maximum curvature, and curvelet coefficients are then extracted for each pixel. The extracted features are classified by a support vector machine and the k-nearest neighbors classifier, and morphological operators enhance the classified image at the final stage to segment with higher accuracy. The Dice coefficient is used for evaluating the proposed method, and the approach outperforms other strategies with an average of 92%.
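The evaluation metric named above, the Dice coefficient, is straightforward to compute for a pair of binary segmentation masks; a minimal implementation:

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice overlap between two binary masks: 2*|A∩B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, truth).sum() / denom

# Tiny example: 3 predicted vessel pixels, 3 true ones, 2 in common.
pred = np.array([[1, 1, 0], [0, 1, 0]])
truth = np.array([[1, 0, 0], [0, 1, 1]])
score = dice_coefficient(pred, truth)  # 2*2 / (3+3) = 2/3
```

Averaging this score over a test set gives the kind of aggregate figure (92% here) that segmentation papers report.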
Affiliation(s)
- Amjad Rehman
- Artificial Intelligence & Data Analytics Lab, CCIS, Prince Sultan University, Riyadh, Saudi Arabia
- Majid Harouni
- Department of Computer Engineering, Dolatabad Branch, Islamic Azad University, Isfahan, Iran
- Mohsen Karimi
- Department of Bioelectrics and Biomedical Engineering, School of Advanced Technologies in Medicine, Isfahan University of Medical Sciences, Isfahan, Iran
- Tanzila Saba
- Artificial Intelligence & Data Analytics Lab, CCIS, Prince Sultan University, Riyadh, Saudi Arabia
- Saeed Ali Bahaj
- MIS Department, College of Business Administration, Prince Sattam Bin Abdulaziz University, Alkharj, Saudi Arabia
- Mazar Javed Awan
- Department of Software Engineering, University of Management and Technology, Lahore, Pakistan
12. Lin FY, Chang YC, Huang HY, Li CC, Chen YC, Chen CM. A radiomics approach for lung nodule detection in thoracic CT images based on the dynamic patterns of morphological variation. Eur Radiol 2022;32:3767-3777. PMID: 35020016. DOI: 10.1007/s00330-021-08456-x.
Abstract
OBJECTIVES To propose and evaluate a set of radiomic features, called morphological dynamics features, for pulmonary nodule detection; these features are rooted in the dynamic patterns of morphological variation and do not require precise lesion segmentation. MATERIALS AND METHODS Two datasets were involved, the university hospital (UH) and LIDC datasets, comprising 72 CT scans (360 nodules) and 888 CT scans (2230 nodules), respectively. Each nodule was annotated by multiple radiologists; the category of nodules identified by at least k radiologists is denoted ALk. A nodule detection algorithm, called the CAD-MD algorithm, was proposed based on the morphological dynamics radiomic features, characterizing a lesion by ten sets of the same features with different values extracted from ten different thresholding results. Each nodule candidate was classified by a two-level classifier comprising ten decision trees and a random forest. The CAD-MD algorithm was compared with a deep learning approach, the N-Net, on the UH dataset. RESULTS On the AL1 and AL2 of the UH dataset, the AUCs of the AFROC curves were 0.777 and 0.851 for the CAD-MD algorithm and 0.478 and 0.472 for the N-Net, respectively. The CAD-MD algorithm achieved sensitivities of 84.4% and 91.4% with 2.98 and 3.69 FPs/scan, and the N-Net 74.4% and 80.7% with 3.90 and 4.49 FPs/scan, respectively. On the LIDC dataset, the CAD-MD algorithm attained sensitivities of 87.6%, 89.2%, 92.2%, and 95.0% with 4 FPs/scan for AL1-AL4, respectively. CONCLUSION The morphological dynamics radiomic features may serve as an effective set of radiomic features for lung nodule detection. KEY POINTS • Texture features vary with CT system settings such as reconstruction kernels, CT scanner models, and parameter settings. • Shape and first-order statistics were shown to be the most robust features against variation in CT imaging parameters. • The morphological dynamics radiomic features, which mainly characterize the dynamic patterns of morphological variation, were shown to be effective for lung nodule detection.
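The core idea — extract the same measurements at ten thresholding levels and use how they change — can be caricatured with a single "foreground area per threshold" curve. This is a much simplified analogue of the paper's feature set, on a synthetic blob rather than a CT patch; no precise segmentation is needed, matching the method's stated motivation.

```python
import numpy as np

def morphological_dynamics_features(patch, n_levels=10):
    """Threshold a candidate patch at n_levels evenly spaced intensities and
    record the foreground area at each level. How this area sequence shrinks
    across levels encodes the dynamic pattern of morphological variation."""
    lo, hi = patch.min(), patch.max()
    thresholds = np.linspace(lo, hi, n_levels, endpoint=False)
    return np.array([(patch >= t).sum() for t in thresholds])

# A synthetic 9x9 blob whose intensity falls off from the centre.
y, x = np.mgrid[-4:5, -4:5]
patch = np.exp(-(x**2 + y**2) / 8.0)
areas = morphological_dynamics_features(patch)  # non-increasing sequence
```

In the paper, several features (not just area) are computed at each of the ten levels, and the resulting ten feature sets feed the two-level decision-tree/random-forest classifier.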
Affiliation(s)
- Fan-Ya Lin
- Department of Biomedical Engineering, College of Medicine and College of Engineering, National Taiwan University, No. 1, Sec. 1, Jen-Ai Road, Taipei, 100, Taiwan
- Yeun-Chung Chang
- Department of Medical Imaging, National Taiwan University Hospital and National Taiwan University College of Medicine, Taipei, Taiwan
- Chia-Chen Li
- Department of Biomedical Engineering, College of Medicine and College of Engineering, National Taiwan University, No. 1, Sec. 1, Jen-Ai Road, Taipei, 100, Taiwan
- Yi-Chang Chen
- Department of Biomedical Engineering, College of Medicine and College of Engineering, National Taiwan University, No. 1, Sec. 1, Jen-Ai Road, Taipei, 100, Taiwan; Department of Medical Imaging, Cardinal Tien Hospital, New Taipei City, Taiwan
- Chung-Ming Chen
- Department of Biomedical Engineering, College of Medicine and College of Engineering, National Taiwan University, No. 1, Sec. 1, Jen-Ai Road, Taipei, 100, Taiwan
13. Ren Z, Zhang Y, Wang S. LCDAE: Data Augmented Ensemble Framework for Lung Cancer Classification. Technol Cancer Res Treat 2022;21:15330338221124372. PMID: 36148908. PMCID: PMC9511553. DOI: 10.1177/15330338221124372.
Abstract
Objective: Early-stage detection is the only viable way to reduce the lung cancer fatality rate. Deep learning techniques have recently become the most promising methods in medical image analysis compared with other computer-aided diagnostic techniques; however, deep learning models perform worse when they overfit. Methods: We present a Lung Cancer Data Augmented Ensemble (LCDAE) framework to solve the overfitting and low-performance problems in lung cancer classification tasks. The LCDAE has three parts: a Lung Cancer Deep Convolutional GAN, which synthesizes images of lung cancer; a Data Augmented Ensemble model (DA-ENM), which ensembles six fine-tuned transfer learning models for training, testing, and validation on a lung cancer dataset; and a Hybrid Data Augmentation (HDA) component, which combines all the data augmentation techniques in the LCDAE. Results: Compared with existing state-of-the-art methods, the LCDAE obtains the best accuracy of 99.99%, precision of 99.99%, and F1-score of 99.99%. Conclusion: The proposed LCDAE overcomes the overfitting issue in lung cancer classification by applying different data augmentation techniques and shows the best performance compared with state-of-the-art approaches.
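Ensembling several fine-tuned classifiers, as DA-ENM does with six transfer-learning models, is commonly done by soft voting: average the models' class probabilities and take the argmax. A minimal sketch, with hypothetical softmax outputs standing in for real model predictions:

```python
import numpy as np

def soft_vote(prob_list):
    """Average class-probability matrices from several models and pick the
    argmax per sample -- a minimal ensemble combiner."""
    avg = np.mean(np.stack(prob_list), axis=0)
    return avg.argmax(axis=1), avg

# Three hypothetical models' softmax outputs: two samples, three classes.
m1 = np.array([[0.7, 0.2, 0.1], [0.1, 0.3, 0.6]])
m2 = np.array([[0.6, 0.3, 0.1], [0.2, 0.2, 0.6]])
m3 = np.array([[0.5, 0.4, 0.1], [0.3, 0.3, 0.4]])
labels, avg = soft_vote([m1, m2, m3])
```

Averaging tends to cancel the individual models' uncorrelated errors, which is one reason ensembles of differently fine-tuned networks reduce overfitting-driven variance.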
Affiliation(s)
- Zeyu Ren
- School of Computing and Mathematical Sciences, University of Leicester, Leicester LE1 7RH, UK
- Yudong Zhang
- School of Computing and Mathematical Sciences, University of Leicester, Leicester LE1 7RH, UK
- Shuihua Wang
- School of Computing and Mathematical Sciences, University of Leicester, Leicester LE1 7RH, UK
14. Saba T, Abunadi I, Sadad T, Khan AR, Bahaj SA. Optimizing the transfer-learning with pretrained deep convolutional neural networks for first stage breast tumor diagnosis using breast ultrasound visual images. Microsc Res Tech 2021;85:1444-1453. PMID: 34908213. DOI: 10.1002/jemt.24008.
Abstract
Females account for approximately 50% of the total population worldwide, and many of them develop breast cancer. Computer-aided diagnosis frameworks could reduce the number of needless biopsies and the workload of radiologists. This research aims to detect benign and malignant tumors automatically using breast ultrasound (BUS) images. Accordingly, two pretrained deep convolutional neural network (CNN) models, AlexNet and DenseNet201, were employed for transfer learning with BUS images. A total of 697 BUS images containing benign and malignant tumors were preprocessed and classified using the transfer learning-based CNN models, achieving 92.8% accuracy with the DenseNet201 model. The results were compared with the state of the art on a benchmark dataset, and the proposed model outperformed prior work in accuracy for first-stage breast tumor diagnosis. The proposed model could help radiologists diagnose benign and malignant tumors swiftly by screening suspected patients.
Affiliation(s)
- Tanzila Saba
- Artificial Intelligence & Data Analytics Lab, CCIS, Prince Sultan University, Riyadh 11586, Saudi Arabia
- Ibrahim Abunadi
- Artificial Intelligence & Data Analytics Lab, CCIS, Prince Sultan University, Riyadh 11586, Saudi Arabia
- Tariq Sadad
- Department of Computer Science and Software Engineering, International Islamic University, Islamabad 44000, Pakistan
- Amjad Rehman Khan
- Artificial Intelligence & Data Analytics Lab, CCIS, Prince Sultan University, Riyadh 11586, Saudi Arabia
- Saeed Ali Bahaj
- MIS Department, College of Business Administration, Prince Sattam bin Abdulaziz University, Alkharj 11942, Saudi Arabia
15. Rehman A, Harouni M, Karchegani NHS, Saba T, Bahaj SA, Roy S. Identity verification using palm print microscopic images based on median robust extended local binary pattern features and k-nearest neighbor classifier. Microsc Res Tech 2021;85:1224-1237. PMID: 34904758. DOI: 10.1002/jemt.23989.
Abstract
Automatic identity verification is one of the most critical and research-demanding areas. One of the most effective and reliable identity verification methods is using unique human biological characteristics and biometrics. Among all types of biometrics, palm print is recognized as one of the most accurate and reliable identity verification methods. However, this biometrics domain also has several critical challenges: image rotation, image displacement, change in image scaling, presence of noise in the image due to devices, region of interest (ROI) detection, or user error. For this purpose, a new method of identity verification based on median robust extended local binary pattern (MRELBP) is introduced in this study. In this system, after normalizing the images and extracting the ROI from the microscopic input image, the images enter the feature extraction step with the MRELBP algorithm. Next, these features are reduced by the dimensionality reduction step, and finally, feature vectors are classified using the k-nearest neighbor classifier. The microscopic images used in this study were selected from IITD and CASIA data sets, and the identity verification rate for these two data sets without challenge was 97.2% and 96.6%, respectively. In addition, computed detection rates have been broadly stable against changes such as salt-and-pepper noise up to 0.16, rotation up to 5°, displacement up to 6 pixels, and scale change up to 94%.
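MRELBP builds on the classic local binary pattern, so the basic 3×3 LBP it extends is worth sketching. Note the hedge: this is the plain variant (each interior pixel gets an 8-bit code from comparing its neighbours with the centre); the paper's median robust *extended* LBP instead compares median-filtered responses over larger rings for noise robustness.

```python
import numpy as np

def lbp_codes(img):
    """Classic 3x3 local binary pattern: for each interior pixel, compare its
    eight neighbours with the centre and pack the results into an 8-bit code."""
    h, w = img.shape
    # Neighbour offsets, clockwise from top-left; each gets one bit.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros((h - 2, w - 2), dtype=np.uint8)
    centre = img[1:h - 1, 1:w - 1]
    for bit, (dy, dx) in enumerate(offsets):
        neigh = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes |= (neigh >= centre).astype(np.uint8) << bit
    return codes

# One interior pixel: only the two corner neighbours (value 9) exceed the
# centre (value 5), so bits 0 and 4 are set: code = 1 + 16 = 17.
img = np.array([[9.0, 1.0, 1.0],
                [1.0, 5.0, 1.0],
                [1.0, 1.0, 9.0]])
code = lbp_codes(img)
```

A histogram of such codes over the palm-print ROI is the kind of texture descriptor that is then fed to the k-nearest neighbor classifier.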
Affiliation(s)
- Amjad Rehman
- Artificial Intelligence & Data Analytics Lab, CCIS, Prince Sultan University, Riyadh 11586, Saudi Arabia
- Majid Harouni
- Department of Computer Engineering, Dolatabad Branch, Islamic Azad University, Isfahan, Iran
- Tanzila Saba
- Artificial Intelligence & Data Analytics Lab, CCIS, Prince Sultan University, Riyadh 11586, Saudi Arabia
- Saeed Ali Bahaj
- MIS Department, College of Business Administration, Prince Sattam bin Abdulaziz University, Al-Kharj, Saudi Arabia
- Sudipta Roy
- Artificial Intelligence & Data Science Programme, JIO Institute, Navi Mumbai, Maharashtra, India
16. Arshad M, Khan MA, Tariq U, Armghan A, Alenezi F, Younus Javed M, Aslam SM, Kadry S. A Computer-Aided Diagnosis System Using Deep Learning for Multiclass Skin Lesion Classification. Comput Intell Neurosci 2021;2021:9619079. PMID: 34912449. PMCID: PMC8668359. DOI: 10.1155/2021/9619079.
Abstract
In the USA, almost 5.4 million people are diagnosed with skin cancer each year. Melanoma is one of the most dangerous types of skin cancer, with a survival rate of 5%. The incidence of skin cancer has risen over the last couple of years, and early identification can help reduce the human mortality rate. Dermoscopy is a technology used for the acquisition of skin images; however, manual inspection is time-consuming and costly. Recent developments in deep learning have shown significant performance for classification tasks. In this research work, a new automated framework is proposed for multiclass skin lesion classification, consisting of a series of steps. In the first step, augmentation is performed with three operations: rotate 90, right-left flip, and up-down flip. In the second step, two deep models, ResNet-50 and ResNet-101, are fine-tuned by updating their layers. In the third step, transfer learning is applied to train both fine-tuned models on the augmented dataset. In the succeeding stage, features are extracted and fused using a modified serial-based approach. Finally, the fused vector is refined by selecting the best features through the skewness-controlled SVR approach, and the selected features are classified using several machine learning algorithms, with the final choice based on accuracy. In experiments, the augmented HAM10000 dataset is used and an accuracy of 91.7% is achieved; performance on the augmented dataset is better than on the original imbalanced dataset, and the proposed method shows improved performance compared with recent studies.
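The three augmentation operations named in the first step (rotate 90°, right-left flip, up-down flip) map directly onto NumPy array operations; a minimal sketch on a toy 2×2 "image":

```python
import numpy as np

def augment(image):
    """Produce the three augmented variants used in the paper's first step:
    a 90-degree rotation, a left-right flip, and an up-down flip."""
    return {
        "rot90": np.rot90(image),      # counter-clockwise quarter turn
        "lr_flip": np.fliplr(image),   # mirror across the vertical axis
        "ud_flip": np.flipud(image),   # mirror across the horizontal axis
    }

image = np.arange(4).reshape(2, 2)  # [[0, 1], [2, 3]]
aug = augment(image)
```

Applying these per training image multiplies the effective dataset size, which is how such geometric augmentation helps counter class imbalance in HAM10000.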
Affiliation(s)
- Mehak Arshad
- Department of Computer Science, HITEC University Taxila, Taxila, Pakistan
- Usman Tariq
- College of Computer Engineering and Science, Prince Sattam Bin Abdulaziz University, Al-Kharaj, Saudi Arabia
- Ammar Armghan
- Department of Electrical Engineering, Jouf University, Sakaka 75471, Saudi Arabia
- Fayadh Alenezi
- Department of Electrical Engineering, Jouf University, Sakaka 75471, Saudi Arabia
- Shabnam Mohamed Aslam
- Department of Information Technology, College of Computer and Information Sciences, Majmaah University, Al-Majmaah 11952, Saudi Arabia
- Seifedine Kadry
- Faculty of Applied Computing and Technology, Noroff University College, Kristiansand, Norway
17
Awan MJ, Rahim MSM, Salim N, Rehman A, Nobanee H, Shabir H. Improved Deep Convolutional Neural Network to Classify Osteoarthritis from Anterior Cruciate Ligament Tear Using Magnetic Resonance Imaging. J Pers Med 2021; 11:jpm11111163. [PMID: 34834515 PMCID: PMC8617867 DOI: 10.3390/jpm11111163] [Citation(s) in RCA: 22] [Impact Index Per Article: 7.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/13/2021] [Revised: 11/01/2021] [Accepted: 11/03/2021] [Indexed: 12/14/2022] Open
Abstract
An anterior cruciate ligament (ACL) tear is a partial or complete rupture of the ACL in the knee, and it is especially common in athletes. There is a need to classify an ACL tear before it fully ruptures in order to avoid osteoarthritis. This research aims to identify ACL tears automatically and efficiently with a deep learning approach. A dataset was gathered consisting of 917 knee magnetic resonance images (MRI) from Clinical Hospital Centre Rijeka, Croatia. The dataset consists of three classes: non-injured, partial tear, and fully ruptured knee MRI. The study compares and evaluates two variants of convolutional neural networks (CNN): a standard CNN model of five layers and a customized CNN model of eleven layers. Eight different hyper-parameters were adjusted and tested on both variants. The customized CNN model showed good results after a 25% random split using RMSprop and a learning rate of 0.001. For the standard CNN using the Adam optimizer with a learning rate of 0.001, the average accuracy, precision, sensitivity, specificity, and F1-score were 96.3%, 95%, 96%, 96.9%, and 95.6%, respectively. For the customized CNN model, using the same evaluation measures and an RMSprop optimizer with a learning rate of 0.001, the model performed at 98.6%, 98%, 98%, 98.5%, and 98%, respectively. Results on the receiver operating characteristic curve and area under the curve (ROC AUC) are also presented; the customized CNN model with the Adam optimizer and a learning rate of 0.001 achieved 0.99 over the three classes, the highest among all configurations. The model showed good results overall, and in the future it could be extended to other CNN architectures to detect and segment other knee structures such as the meniscus and cartilage.
Affiliation(s)
- Mazhar Javed Awan
- School of Computing, Faculty of Engineering, Universiti Teknologi Malaysia, Skudai 81310, Malaysia
- Department of Software Engineering, University of Management and Technology, Lahore 54770, Pakistan
- Correspondence: (M.J.A.); (H.N.)
- Mohd Shafry Mohd Rahim
- School of Computing, Faculty of Engineering, Universiti Teknologi Malaysia, Skudai 81310, Malaysia
- Naomie Salim
- School of Computing, Faculty of Engineering, Universiti Teknologi Malaysia, Skudai 81310, Malaysia
- Amjad Rehman
- Artificial Intelligence and Data Analytics Research Laboratory, CCIS, Prince Sultan University, Riyadh 11586, Saudi Arabia
- Haitham Nobanee
- College of Business, Abu Dhabi University, P.O. Box 59911, Abu Dhabi 59911, United Arab Emirates
- Oxford Centre for Islamic Studies, University of Oxford, Oxford OX1 2J, UK
- School of Histories, Languages and Cultures, The University of Liverpool, Liverpool L69 3BX, UK
- Correspondence: (M.J.A.); (H.N.)
- Hassan Shabir
- Department of Software Engineering, University of Management and Technology, Lahore 54770, Pakistan
18
Wang HJ, Lin MW, Chen YC, Chen LW, Hsieh MS, Yang SM, Chen HF, Wang CW, Chen JS, Chang YC, Chen CM. A radiomics model can distinguish solitary pulmonary capillary haemangioma from lung adenocarcinoma. Interact Cardiovasc Thorac Surg 2021; 34:369-377. [PMID: 34648631 PMCID: PMC8860424 DOI: 10.1093/icvts/ivab271] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/28/2021] [Revised: 08/22/2021] [Accepted: 08/27/2021] [Indexed: 11/13/2022] Open
Abstract
OBJECTIVES Solitary pulmonary capillary haemangioma (SPCH) is a benign lung tumour that presents as ground-glass nodules on computed tomography (CT) images and mimics lepidic-predominant adenocarcinoma. This study aimed to establish a discriminant model using a radiomic feature analysis to distinguish SPCH from lepidic-predominant adenocarcinoma. METHODS In the adenocarcinoma group, all tumours were of the lepidic-predominant subtype with high purity (>70%). A classification model was proposed based on a two-level decision tree and 26 radiomic features extracted from each segmented lesion. For comparison, a baseline model was built with the same 26 features using a support vector machine as the classifier. Both models were assessed by the leave-one-out cross-validation method. RESULTS This study included 13 and 49 patients who underwent complete resection for SPCH and adenocarcinoma, respectively. Two sets of features were identified for discrimination between the 2 different histology types. The first set included 2 principal components corresponding to the 2 largest eigenvalues for the root node of the two-level decision tree. The second set comprised 4 selected radiomic features. The area under the receiver operating characteristic curve, accuracy, sensitivity, and specificity were 0.954, 91.9%, 92.3%, and 91.8% in the proposed classification model, and 0.805, 85.5%, 61.5%, and 91.8% in the baseline model, respectively. The proposed classification model significantly outperformed the baseline model (P < 0.05). CONCLUSIONS The proposed model could differentiate the 2 different histology types on CT images, and this may help surgeons to preoperatively discriminate SPCH from adenocarcinoma.
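Both models in this study were assessed with leave-one-out cross-validation, which can be sketched as follows. The nearest-centroid classifier and the toy one-dimensional "feature" values are illustrative assumptions for the sketch, not the paper's model or data:

```python
# Sketch of leave-one-out cross-validation (LOOCV): each sample is held
# out once, the classifier is fit on the remaining samples, and the
# held-out sample is scored. A simple nearest-centroid rule stands in
# for the paper's decision-tree and SVM classifiers.

def nearest_centroid_predict(train, train_labels, x):
    """Predict the label whose class mean is closest to x."""
    centroids = {}
    for lbl in set(train_labels):
        vals = [v for v, l in zip(train, train_labels) if l == lbl]
        centroids[lbl] = sum(vals) / len(vals)
    return min(centroids, key=lambda lbl: abs(centroids[lbl] - x))

def loocv_accuracy(samples, labels):
    """Hold out each sample once; fit on the rest; report accuracy."""
    correct = 0
    for i in range(len(samples)):
        train = samples[:i] + samples[i + 1:]
        train_labels = labels[:i] + labels[i + 1:]
        if nearest_centroid_predict(train, train_labels, samples[i]) == labels[i]:
            correct += 1
    return correct / len(samples)

# Toy feature values: class 0 clusters near 1.0, class 1 near 5.0.
samples = [0.9, 1.1, 1.0, 4.9, 5.1, 5.0]
labels = [0, 0, 0, 1, 1, 1]
print(loocv_accuracy(samples, labels))  # 1.0
```

LOOCV is a natural choice here because the SPCH group is small (13 patients); every sample contributes to both training and testing without a fixed split.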
Affiliation(s)
- Hao-Jen Wang
- Institute of Biomedical Engineering, College of Medicine and College of Engineering, National Taiwan University, Taipei, Taiwan
- Mong-Wei Lin
- Department of Surgery, National Taiwan University Hospital and National Taiwan University College of Medicine, Taipei, Taiwan
- Yi-Chang Chen
- Institute of Biomedical Engineering, College of Medicine and College of Engineering, National Taiwan University, Taipei, Taiwan; Department of Medical Imaging, National Taiwan University Hospital and National Taiwan University College of Medicine, Taipei, Taiwan
- Li-Wei Chen
- Institute of Biomedical Engineering, College of Medicine and College of Engineering, National Taiwan University, Taipei, Taiwan
- Min-Shu Hsieh
- Department of Pathology, National Taiwan University Hospital and National Taiwan University College of Medicine, Taipei, Taiwan
- Shun-Mao Yang
- Institute of Biomedical Engineering, College of Medicine and College of Engineering, National Taiwan University, Taipei, Taiwan; Department of Surgery, National Taiwan University Hospital Hsin-Chu Branch, Hsin-Chu City, Taiwan
- Ho-Feng Chen
- Institute of Biomedical Engineering, College of Medicine and College of Engineering, National Taiwan University, Taipei, Taiwan
- Chuan-Wei Wang
- Institute of Biomedical Engineering, College of Medicine and College of Engineering, National Taiwan University, Taipei, Taiwan
- Jin-Shing Chen
- Department of Surgery, National Taiwan University Hospital and National Taiwan University College of Medicine, Taipei, Taiwan; Department of Surgical Oncology, National Taiwan University Cancer Center, Taipei, Taiwan
- Yeun-Chung Chang
- Department of Medical Imaging, National Taiwan University Hospital and National Taiwan University College of Medicine, Taipei, Taiwan
- Chung-Ming Chen
- Institute of Biomedical Engineering, College of Medicine and College of Engineering, National Taiwan University, Taipei, Taiwan
19
Subhalakshmi RT, Appavu Alias Balamurugan S, Sasikala S. Automatic Segmentation and Classification of COVID-19 CT Image Using Deep Learning and Multi-Scale Recurrent Neural Network Based Classifier. JOURNAL OF MEDICAL IMAGING AND HEALTH INFORMATICS 2021. [DOI: 10.1166/jmihi.2021.3850] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/23/2022]
Abstract
In recent times, the COVID-19 epidemic has grown at an extreme pace while only an inadequate number of rapid testing kits are available. Consequently, it is essential to develop automated techniques that detect COVID-19 from radiological images. The most common symptoms of COVID-19 are sore throat, fever, and dry cough, and symptoms can progress to a severe form of pneumonia with serious complications. As medical imaging is not currently recommended in Canada for primary COVID-19 diagnosis, computer-aided diagnosis systems might aid in the early detection of COVID-19 abnormalities, help monitor disease progression, and potentially reduce mortality rates. In this approach, a deep learning design for feature extraction and classification is employed for automatic COVID-19 diagnosis from computed tomography (CT) images. The proposed model comprises three main processes: pre-processing, feature extraction, and classification. The design incorporates the fusion of deep features using GoogLeNet models. Finally, a multi-scale recurrent neural network (RNN) based classifier identifies and classifies the test CT images into distinct class labels. Experimental validation uses the open-source COVID-CT dataset, which comprises a total of 760 CT images, and the outcome shows superior performance with the maximum sensitivity, specificity, and accuracy.
Affiliation(s)
- R. T. Subhalakshmi
- Department of Information Technology, Sethu Institute of Technology, Virudhunagar 626115, India
- S. Sasikala
- Department of Computer Science and Engineering, Velammal College of Engineering and Technology, Madurai 625009, Tamil Nadu, India
21
Amin J, Anjum MA, Sharif M, Rehman A, Saba T, Zahra R. Microscopic segmentation and classification of COVID-19 infection with ensemble convolutional neural network. Microsc Res Tech 2021; 85:385-397. [PMID: 34435702 PMCID: PMC8646237 DOI: 10.1002/jemt.23913] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/04/2021] [Revised: 07/10/2021] [Accepted: 08/11/2021] [Indexed: 01/19/2023]
Abstract
According to the World Health Organization, the detection of biological RNA from sputum has a comparatively poor positive rate in the initial/early stages of COVID‐19. Infected tissue has a different morphological structure compared with healthy tissue, as manifested by computed tomography (CT). COVID‐19 diagnosis at an early stage can aid in the timely cure of patients, lowering the mortality rate. In this research, a three‐phase model is proposed for COVID‐19 detection. In Phase I, noise is removed from CT images using a denoising convolutional neural network (DnCNN). In Phase II, the actual lesion region is segmented from the enhanced CT images using DeepLabv3 and ResNet‐18. In Phase III, the segmented images are passed to a stacked sparse autoencoder (SSAE) deep learning model with two stacked autoencoders (SAE) and selected hidden layers. The designed SSAE model is based on both SAE and softmax layers for COVID‐19 classification. The proposed method is evaluated on actual patient data from Pakistan Ordnance Factories and on other public benchmark datasets acquired with different scanners/mediums, achieving a global segmentation accuracy of 0.96 and a classification accuracy of 0.97.
Affiliation(s)
- Javeria Amin
- Department of Computer Science, University of Wah, Wah Cantt, Pakistan
- Muhammad Almas Anjum
- Dean of University, National University of Technology (NUTECH), Islamabad, Pakistan
- Muhammad Sharif
- Department of Computer Science, COMSATS University Islamabad Wah Campus, Wah Cantt, Pakistan
- Amjad Rehman
- Artificial Intelligence & Data Analytics Lab, CCIS Prince Sultan University, Riyadh, Saudi Arabia
- Tanzila Saba
- Artificial Intelligence & Data Analytics Lab, CCIS Prince Sultan University, Riyadh, Saudi Arabia
- Rida Zahra
- Department of Computer Science, University of Wah, Wah Cantt, Pakistan
22
MRI Image Segmentation Model with Support Vector Machine Algorithm in Diagnosis of Solitary Pulmonary Nodule. CONTRAST MEDIA & MOLECULAR IMAGING 2021; 2021:9668836. [PMID: 34377105 PMCID: PMC8318753 DOI: 10.1155/2021/9668836] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 05/24/2021] [Accepted: 07/12/2021] [Indexed: 12/02/2022]
Abstract
This study focused on the application value of MRI images processed by a Support Vector Machine (SVM) algorithm-based model in the diagnosis of benign and malignant solitary pulmonary nodules (SPN). The SVM algorithm was constrained by a self-paced regularization term and gradient value to establish the lung MRI image segmentation model (SVM-L). Its performance was compared in terms of the Dice index (DI), sensitivity (SE), specificity (SP), and mean square error (MSE). Twenty-eight SPN patients who underwent parallel MRI examination were selected as research subjects and divided into a benign group (11 patients) and a malignant group (17 patients) according to their plans for diagnosis and treatment. The apparent diffusion coefficient (ADC) at different b values was analyzed, and the steepest slope (SS) and washout ratio (WR) values in the two groups were calculated. The MSE, DI, SE, SP values, and operation time of the SVM-L model were 0.41 ± 0.02, 0.84 ± 0.13, 0.89 ± 0.04, 0.993 ± 0.004, and (30.69 ± 2.60) s, respectively, apparently superior to those of the other algorithms. There were no statistically significant differences in the WR value between the two groups of patients (P > 0.05). The SS values of the time-signal curve in the benign and malignant groups were (2.52 ± 0.69) %/s and (3.34 ± 0.41) %/s, respectively; the SS value of the benign group was significantly lower than that of the malignant group (P < 0.01), as was the ADC value at different b values (P < 0.01). These results suggest that the SVM-L model significantly improved the quality of lung MRI images and increased the accuracy of differentiating benign from malignant SPN, providing a reference for the diagnosis and treatment of SPN patients.
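The segmentation metrics reported here (Dice index, sensitivity, specificity) have standard definitions that can be computed directly from binary masks. This sketch uses hypothetical helper names and toy flattened masks for illustration:

```python
# Standard segmentation metrics computed from flattened binary masks:
# Dice index (DI), sensitivity (SE), and specificity (SP).

def confusion_counts(pred, truth):
    """True/false positive and negative counts for binary masks."""
    tp = sum(p == 1 and t == 1 for p, t in zip(pred, truth))
    fp = sum(p == 1 and t == 0 for p, t in zip(pred, truth))
    fn = sum(p == 0 and t == 1 for p, t in zip(pred, truth))
    tn = sum(p == 0 and t == 0 for p, t in zip(pred, truth))
    return tp, fp, fn, tn

def dice(pred, truth):
    tp, fp, fn, _ = confusion_counts(pred, truth)
    return 2 * tp / (2 * tp + fp + fn)

def sensitivity(pred, truth):
    tp, _, fn, _ = confusion_counts(pred, truth)
    return tp / (tp + fn)

def specificity(pred, truth):
    _, fp, _, tn = confusion_counts(pred, truth)
    return tn / (tn + fp)

truth = [1, 1, 1, 0, 0, 0, 0, 0]   # toy ground-truth mask
pred  = [1, 1, 0, 0, 0, 0, 0, 1]   # toy model output
print(dice(pred, truth))           # 0.666...
print(sensitivity(pred, truth))    # 0.666...
print(specificity(pred, truth))    # 0.8
```

Dice rewards overlap of the (usually small) foreground region, which is why segmentation papers report it alongside sensitivity and specificity rather than plain pixel accuracy.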
23
Sajjad M, Ramzan F, Khan MUG, Rehman A, Kolivand M, Fati SM, Bahaj SA. Deep convolutional generative adversarial network for Alzheimer's disease classification using positron emission tomography (PET) and synthetic data augmentation. Microsc Res Tech 2021; 84:3023-3034. [PMID: 34245203 DOI: 10.1002/jemt.23861] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/10/2020] [Revised: 05/13/2021] [Accepted: 06/15/2021] [Indexed: 11/09/2022]
Abstract
With the evolution of deep learning technologies, computer vision-related tasks have achieved tremendous success in the biomedical domain. Supervised deep learning training, however, requires a large number of labeled samples, and obtaining such labeled datasets is challenging. This limited data availability makes it difficult to build and improve an automated disease diagnosis model. To synthesize data and improve the disease diagnosis model's accuracy, we propose a novel approach for generating images of three different stages of Alzheimer's disease using deep convolutional generative adversarial networks. The proposed model performs well in synthesizing brain positron emission tomography (PET) images for all three stages: normal control (CN), mild cognitive impairment (MCI), and Alzheimer's disease (AD). Model performance is measured using a classification model that achieved an accuracy of 72% on synthetic images. We also report quantitative measures, namely the peak signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM), achieving average PSNR scores of 82 for AD, 72 for CN, and 73 for MCI, and average SSIM scores of 25.6 for AD, 22.6 for CN, and 22.8 for MCI.
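The PSNR measure used to score the synthetic PET images follows the textbook formula; the sketch below assumes 8-bit pixel values and uses illustrative toy data. (The SSIM values reported above fall outside the usual [0, 1] range, so the exact variant used there is unclear; only standard PSNR is reproduced.)

```python
import math

# Textbook peak signal-to-noise ratio (PSNR) between two images,
# computed here on flattened lists of 8-bit pixel values.

def psnr(a, b, peak=255.0):
    """PSNR in dB between two equal-length pixel sequences."""
    mse = sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)
    if mse == 0:
        return float("inf")  # identical images
    return 10 * math.log10(peak ** 2 / mse)

real      = [10, 20, 30, 40]   # toy "real" pixels
synthetic = [12, 18, 33, 39]   # toy "generated" pixels
print(round(psnr(real, synthetic), 2))  # 41.6
```

Higher PSNR means the synthetic image deviates less, pixel for pixel, from the real one; it is cheap to compute but blind to structural distortions, which is why SSIM is usually reported alongside it.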
Affiliation(s)
- Muhammad Sajjad
- National Center of Artificial Intelligence (NCAI), Al-Khawarizmi Institute of Computer Science (KICS), University of Engineering and Technology (UET), Lahore, Pakistan
- Farheen Ramzan
- Department of Computer Science, University of Engineering and Technology (UET), Lahore, Pakistan
- Muhammad Usman Ghani Khan
- National Center of Artificial Intelligence (NCAI), Al-Khawarizmi Institute of Computer Science (KICS), University of Engineering and Technology (UET), Lahore, Pakistan; Department of Computer Science, University of Engineering and Technology (UET), Lahore, Pakistan
- Amjad Rehman
- Artificial Intelligence & Data Analytics (AIDA) Lab CCIS, Prince Sultan University, Riyadh, Saudi Arabia
- Mahyar Kolivand
- Department of Medicine, University of Liverpool, Liverpool, UK
- Suliman Mohamed Fati
- Artificial Intelligence & Data Analytics (AIDA) Lab CCIS, Prince Sultan University, Riyadh, Saudi Arabia
- Saeed Ali Bahaj
- MIS Department, College of Business Administration, Prince Sattam bin Abdulaziz University, Al-Kharj, Saudi Arabia
24
Saba T, Akbar S, Kolivand H, Ali Bahaj S. Automatic detection of papilledema through fundus retinal images using deep learning. Microsc Res Tech 2021; 84:3066-3077. [PMID: 34236733 DOI: 10.1002/jemt.23865] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/01/2021] [Revised: 04/22/2021] [Accepted: 05/29/2021] [Indexed: 11/09/2022]
Abstract
Papilledema is a condition of the retina in which the optic nerve head swells because of elevated intracranial pressure. Papilledema abnormalities such as retinal nerve fiber layer (RNFL) opacification may lead to blindness. These abnormalities can be observed in retinal images captured with a fundus camera. This paper presents a deep learning-based automated system that detects and grades papilledema through U-Net and Dense-Net architectures. The proposed approach has two main stages. First, the optic disc and its surrounding area in the fundus retinal image are localized and cropped for input to Dense-Net, which classifies the optic disc as papilledema or normal. Second, the Dense-Net-classified papilledema fundus image is preprocessed with a Gabor filter and input to U-Net to obtain the segmented vascular network, from which the vessel discontinuity index (VDI) and vessel discontinuity index to disc proximity (VDIP) are calculated for grading of papilledema. The VDI and VDIP are standard parameters for assessing the severity and grade of papilledema. The proposed system is evaluated on 60 papilledema and 40 normal fundus images taken from the STARE dataset. The experimental results for classification of papilledema through Dense-Net are strong in terms of sensitivity (98.63%), specificity (97.83%), and accuracy (99.17%). Similarly, the grading results for mild versus severe papilledema classification through U-Net are also strong in terms of sensitivity (99.82%), specificity (98.65%), and accuracy (99.89%). To the best of the authors' knowledge, this is the first deep learning-based automated detection and grading of papilledema for clinical purposes.
Affiliation(s)
- Tanzila Saba
- Artificial Intelligence & Data Analytics (AIDA) Lab CCIS, Prince Sultan University, Riyadh, 11586, Saudi Arabia
- Shahzad Akbar
- Department of Computing, Riphah International University, Faisalabad Campus, Faisalabad, 38000, Pakistan
- Hoshang Kolivand
- School of Computer Science and Mathematics, Liverpool John Moores University, Liverpool, United Kingdom; School of Computing and Digital Technologies, Staffordshire University, Staffordshire, United Kingdom
- Saeed Ali Bahaj
- MIS Department, College of Business Administration, Prince Sattam bin Abdulaziz University, Al-Kharj, Saudi Arabia
25
Khan AR, Doosti F, Karimi M, Harouni M, Tariq U, Fati SM, Ali Bahaj S. Authentication through gender classification from iris images using support vector machine. Microsc Res Tech 2021; 84:2666-2676. [PMID: 33991003 DOI: 10.1002/jemt.23816] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/08/2020] [Revised: 04/03/2021] [Accepted: 04/24/2021] [Indexed: 11/07/2022]
Abstract
Soft biometric information, such as gender, iris, and voice, can be helpful in various applications, such as security, authentication, and validation. The iris is a secure biometric with low forgery and error rates, and its highly distinctive features have been used for recognition over the last few decades. Iris recognition can be used both independently and as a component of secure recognition and authentication systems. Existing iris-based gender classification techniques have low accuracy rates as well as high computational complexity. Accordingly, this paper presents an authentication approach through gender classification from iris images using a support vector machine (SVM), which responds well to sustained changes, together with Zernike and Legendre invariant moments and the histogram of oriented gradients. In this study, invariant moments are used for feature extraction from iris images. After extraction, the descriptors' attributes are combined through keycode fusion, and SVM is employed for gender classification on the fused feature vector. The proposed approach is evaluated on the CVBL dataset, and the results are compared with state-of-the-art methods based on local binary patterns and Gabor filters. The proposed approach achieved a 98% gender classification rate with low computational complexity and could serve as an authentication measure.
Affiliation(s)
- Amjad Rehman Khan
- Artificial Intelligence & Data Analytics Lab CCIS, Prince Sultan University, Riyadh, Saudi Arabia
- Fatemeh Doosti
- Department of Computer Engineering, Asharfi Isfahani University, Isfahan, Iran
- Mohsen Karimi
- Department of Computer Engineering, Dolatabad Branch, Islamic Azad University, Isfahan, Iran
- Majid Harouni
- Department of Computer Engineering, Dolatabad Branch, Islamic Azad University, Isfahan, Iran
- Usman Tariq
- College of Computer Engineering and Sciences, Prince Sattam bin Abdulaziz University, Alkharj, Saudi Arabia
- Suliman Mohamed Fati
- Artificial Intelligence & Data Analytics Lab CCIS, Prince Sultan University, Riyadh, Saudi Arabia
- Saeed Ali Bahaj
- MIS Department, College of Business Administration, Prince Sattam bin Abdulaziz University, Alkharj, Saudi Arabia
26
Karthikeyan A, Garg A, Vinod PK, Priyakumar UD. Machine Learning Based Clinical Decision Support System for Early COVID-19 Mortality Prediction. Front Public Health 2021; 9:626697. [PMID: 34055710 PMCID: PMC8149622 DOI: 10.3389/fpubh.2021.626697] [Citation(s) in RCA: 39] [Impact Index Per Article: 13.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/06/2020] [Accepted: 04/06/2021] [Indexed: 12/14/2022] Open
Abstract
The coronavirus disease 2019 (COVID-19), caused by the virus SARS-CoV-2, is an acute respiratory disease that has been classified as a pandemic by the World Health Organization (WHO). The sudden spike in the number of infections and high mortality rates have put immense pressure on public healthcare systems. Hence, it is crucial to identify the key factors for mortality prediction to optimize patient treatment strategy. Routine blood test results are widely available compared with other forms of data such as X-rays, CT scans, and ultrasounds. This study proposes machine learning (ML) methods based on blood test data to predict COVID-19 mortality risk. A powerful combination of five features, neutrophils, lymphocytes, lactate dehydrogenase (LDH), high-sensitivity C-reactive protein (hs-CRP), and age, helps to predict mortality with 96% accuracy. Various ML models (neural networks, logistic regression, XGBoost, random forests, SVM, and decision trees) were trained and their performance compared to determine the model that achieves consistently high accuracy across the days that span the disease. The best performing method, using XGBoost feature importance and neural network classification, predicts with an accuracy of 90% as early as 16 days before the outcome. Robust testing with three cases based on days to outcome confirms the strong predictive performance and practicality of the proposed model. A detailed analysis and identification of trends were performed using these key biomarkers to provide useful insights for intuitive application. This study provides solutions that help accelerate decision-making in healthcare systems for focused medical treatment in an accurate, early, and reliable manner.
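A minimal illustration of the kind of biomarker-threshold rule such mortality models learn from blood-test features. All feature values, the LDH threshold, and the helper names below are hypothetical, not the study's data or its actual models:

```python
# Illustrative decision-stump sketch: predict death (1) when a single
# biomarker, here LDH, exceeds a threshold, then measure accuracy.
# Real models in the study (XGBoost, neural networks, etc.) combine
# several such signals; this shows only the basic thresholding idea.

def stump_accuracy(ldh_values, outcomes, threshold):
    """Accuracy of the rule 'predict 1 (death) iff LDH > threshold'."""
    preds = [1 if v > threshold else 0 for v in ldh_values]
    hits = sum(p == y for p, y in zip(preds, outcomes))
    return hits / len(outcomes)

def best_threshold(ldh_values, outcomes):
    """Exhaustively pick the observed value giving the highest accuracy."""
    return max(ldh_values,
               key=lambda t: stump_accuracy(ldh_values, outcomes, t))

ldh = [210, 250, 480, 620, 300, 800]   # hypothetical LDH readings (U/L)
died = [0, 0, 1, 1, 0, 1]              # hypothetical outcomes
t = best_threshold(ldh, died)
print(t, stump_accuracy(ldh, died, t))  # 300 1.0
```

Tree ensembles such as XGBoost are, in essence, many such stumps combined, which is why per-feature thresholds on LDH, hs-CRP, and lymphocytes translate naturally into interpretable clinical cut-offs.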
Affiliation(s)
- P. K. Vinod
- Center for Computational Natural Sciences and Bioinformatics, International Institute of Information Technology, Hyderabad, India
- U. Deva Priyakumar
- Center for Computational Natural Sciences and Bioinformatics, International Institute of Information Technology, Hyderabad, India
27
Sadad T, Khan AR, Hussain A, Tariq U, Fati SM, Bahaj SA, Munir A. Internet of medical things embedding deep learning with data augmentation for mammogram density classification. Microsc Res Tech 2021; 84:2186-2194. [PMID: 33908111 DOI: 10.1002/jemt.23773] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/11/2020] [Revised: 03/14/2021] [Accepted: 03/29/2021] [Indexed: 11/09/2022]
Abstract
Females are approximately half of the total population worldwide, and many of them are affected by breast cancer (BC). Computer-aided diagnosis (CAD) frameworks can help radiologists determine breast density (BD), which in turn helps in precise BC detection. This research detects BD automatically from mammogram images using Internet of Medical Things (IoMT) supported devices. Two pretrained deep convolutional neural network models, DenseNet201 and ResNet50, were applied through a transfer learning approach. A total of 322 mammogram images, containing 106 fatty, 112 dense, and 104 glandular cases, were obtained from the Mammogram Image Analysis Society dataset. Preprocessing prunes irrelevant regions and enhances target regions. The DenseNet201 model achieved an overall classification accuracy of 90.47% on the BD task. Such a framework is beneficial in identifying BD more rapidly to assist radiologists and patients without delay.
Affiliation(s)
- Tariq Sadad
- Department of Computer Science & Software Engineering, International Islamic University, Islamabad, Pakistan
- Amjad Rehman Khan
- Artificial Intelligence & Data Analytics Lab CCIS, Prince Sultan University, Riyadh, Saudi Arabia
- Ayyaz Hussain
- Department of Computer Science, Quaid-i-Azam University, Islamabad, Pakistan
- Usman Tariq
- College of Computer Engineering and Sciences, Prince Sattam bin Abdulaziz University, Alkharj, Saudi Arabia
- Suliman Mohamed Fati
- Artificial Intelligence & Data Analytics Lab CCIS, Prince Sultan University, Riyadh, Saudi Arabia
- Saeed Ali Bahaj
- MIS Department, College of Business Administration, Prince Sattam bin Abdulaziz University, Alkharj, Saudi Arabia
- Asim Munir
- Department of Computer Science & Software Engineering, International Islamic University, Islamabad, Pakistan
28
Khan MA, Kadry S, Zhang YD, Akram T, Sharif M, Rehman A, Saba T. Prediction of COVID-19 - Pneumonia based on Selected Deep Features and One Class Kernel Extreme Learning Machine. COMPUTERS & ELECTRICAL ENGINEERING : AN INTERNATIONAL JOURNAL 2021; 90:106960. [PMID: 33518824 PMCID: PMC7832028 DOI: 10.1016/j.compeleceng.2020.106960] [Citation(s) in RCA: 37] [Impact Index Per Article: 12.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/30/2020] [Revised: 06/22/2020] [Accepted: 12/28/2020] [Indexed: 05/21/2023]
Abstract
In this work, we propose a deep learning framework for the classification of COVID-19 pneumonia infection from normal chest CT scans. A 15-layered convolutional neural network architecture is developed to extract deep features from the selected image samples, collected from Radiopaedia. Deep features are collected from two different layers, the global average pooling and fully connected layers, and are later combined using the max-layer detail (MLD) approach. Subsequently, a correntropy technique is embedded in the main design to select the most discriminant features from the pool of features. A one-class kernel extreme learning machine classifier is utilized for the final classification, achieving an average accuracy of 95.1%, with sensitivity, specificity, and precision of 95.1%, 95%, and 94%, respectively. To further support these claims, detailed statistical analyses based on the standard error of the mean (SEM) are also provided, demonstrating the effectiveness of the proposed prediction design.
Affiliation(s)
- Seifedine Kadry
- Department of Mathematics and Computer Science, Faculty of Science, Beirut Arab University, Lebanon
- Yu-Dong Zhang
- Department of Informatics, University of Leicester, Leicester, UK
- Tallha Akram
- Department of Electrical & Computer Engineering, COMSATS University Islamabad, Wah Campus, Pakistan
- Muhammad Sharif
- Department of Computer Science, COMSATS University Islamabad, Wah Campus, Pakistan
- Amjad Rehman
- College of Computer and Information Sciences, Prince Sultan University, Saudi Arabia
- Tanzila Saba
- College of Computer and Information Sciences, Prince Sultan University, Saudi Arabia
29
Khan AR, Khan S, Harouni M, Abbasi R, Iqbal S, Mehmood Z. Brain tumor segmentation using K-means clustering and deep learning with synthetic data augmentation for classification. Microsc Res Tech 2021; 84:1389-1399. [PMID: 33524220 DOI: 10.1002/jemt.23694] [Citation(s) in RCA: 39] [Impact Index Per Article: 13.0] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/29/2020] [Revised: 11/11/2020] [Accepted: 11/27/2020] [Indexed: 12/19/2022]
Abstract
Image processing plays a major role in neurologists' clinical diagnosis. Several types of imagery are used for diagnostics, tumor segmentation, and classification. Magnetic resonance imaging (MRI) is favored among all modalities due to its noninvasive nature and better representation of internal tumor information. Indeed, early diagnosis may increase the chances of survival. However, manual dissection and classification of brain tumors based on MRI is an error-prone, time-consuming, and formidable task. Consequently, this article presents a deep learning approach to classify brain tumors using MRI data analysis to assist practitioners. The recommended method comprises three main phases: preprocessing, brain tumor segmentation using k-means clustering, and finally classification of tumors into their respective categories (benign/malignant) through a fine-tuned VGG19 (19-layered Visual Geometry Group) model. Moreover, for better classification accuracy, synthetic data augmentation is introduced to increase the available data size for classifier training. The proposed approach was evaluated on the BraTS 2015 benchmark data sets through rigorous experiments. The results endorse the effectiveness of the proposed strategy, which achieved better accuracy than previously reported state-of-the-art techniques.
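The k-means segmentation phase described above can be sketched minimally; this 1-D version clusters pixel intensities only (fixed iteration count, evenly spaced initial centroids) and is an illustration of the idea, not the paper's implementation:

```python
# Hedged sketch of k-means segmentation: cluster pixel intensities into k
# groups so that bright tumor-like regions separate from dark background.

def kmeans_1d(values, k, iters=20):
    """Cluster scalar values into k groups; returns (centroids, labels)."""
    # Initialize centroids evenly across the value range.
    lo, hi = min(values), max(values)
    centroids = [lo + (hi - lo) * i / (k - 1) for i in range(k)]
    labels = [0] * len(values)
    for _ in range(iters):
        # Assignment step: each value joins its nearest centroid.
        labels = [min(range(k), key=lambda c: abs(v - centroids[c]))
                  for v in values]
        # Update step: move each centroid to its cluster mean (skip if empty).
        for c in range(k):
            members = [v for v, lab in zip(values, labels) if lab == c]
            if members:
                centroids[c] = sum(members) / len(members)
    return centroids, labels

# Toy "image": dark background pixels and a bright lesion region.
pixels = [10, 12, 11, 200, 205, 198, 13, 202]
centroids, labels = kmeans_1d(pixels, k=2)  # labels separate dark vs. bright
```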
Affiliation(s)
- Amjad Rehman Khan
- Artificial Intelligence and Data Analytics Lab, CCIS Prince Sultan University, Riyadh, Saudi Arabia
- Siraj Khan
- Department of Computer Science, Islamia College University, Peshawar, Pakistan
- Majid Harouni
- Department of Computer Engineering, Dolatabad Branch, Islamic Azad University, Isfahan, Iran
- Rashid Abbasi
- School of Information and Communication Engineering, University of Electronic Science and Technology of China, Sichuan, China
- Sajid Iqbal
- Department of Computer Science, Bahauddin Zakariya University, Multan, Pakistan
- Zahid Mehmood
- Department of Computer Engineering, University of Engineering and Technology, Taxila, Pakistan
30
Saba T, Abunadi I, Shahzad MN, Khan AR. Machine learning techniques to detect and forecast the daily total COVID-19 infected and deaths cases under different lockdown types. Microsc Res Tech 2021; 84:1462-1474. [PMID: 33522669 PMCID: PMC8014446 DOI: 10.1002/jemt.23702] [Citation(s) in RCA: 29] [Impact Index Per Article: 9.7] [Reference Citation Analysis] [Abstract] [Key Words] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/20/2020] [Revised: 11/27/2020] [Accepted: 12/27/2020] [Indexed: 12/13/2022]
Abstract
COVID-19 has impacted the world in many ways, including loss of lives, economic downturn, and social isolation. COVID-19 emerged due to SARS-CoV-2, a highly infectious virus. Every country tried to control the spread of COVID-19 by imposing different types of lockdowns. Therefore, there is an urgent need to forecast the daily confirmed infected cases and deaths under different types of lockdown, in order to select the most appropriate lockdown strategies to control the intensity of the pandemic and reduce the burden on hospitals. Currently, three types of lockdown (partial, herd, complete) are imposed in different countries. In this study, three countries from every type of lockdown were studied by applying time-series and machine learning models, namely random forests, K-nearest neighbors, SVM, decision trees (DTs), polynomial regression, Holt-Winters, ARIMA, and SARIMA, to forecast daily confirmed infected cases and deaths due to COVID-19. The models' accuracy and effectiveness were evaluated using three error-based performance criteria. A single forecasting model could not capture all data sets' trends due to the varying nature of the data sets and lockdown types. The three top-ranked models were used to predict the confirmed infected cases and deaths; the best-performing models were also adopted for out-of-sample prediction and obtained results very close to the actual cumulative infected cases and deaths due to COVID-19. This study proposes promising models for forecasting and the best lockdown strategy to mitigate the casualties of COVID-19.
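As a hedged illustration of the trend-forecasting family the study draws on, the sketch below implements Holt's linear method (level plus trend, the non-seasonal member of the Holt-Winters family); the smoothing parameters and this minimal implementation are illustrative assumptions, not the paper's configuration:

```python
# Holt's linear trend: level l_t = a*y_t + (1-a)*(l_{t-1} + b_{t-1}),
# trend b_t = b*(l_t - l_{t-1}) + (1-b)*b_{t-1}, forecast = l_t + h*b_t.

def holt_linear_forecast(series, alpha=0.5, beta=0.5, horizon=3):
    """Fit Holt's linear trend to `series`; forecast `horizon` steps ahead."""
    level = series[0]
    trend = series[1] - series[0]
    for y in series[1:]:
        prev_level = level
        level = alpha * y + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
    return [level + (h + 1) * trend for h in range(horizon)]

# Daily case counts growing roughly linearly: forecasts continue the trend.
cases = [100, 110, 120, 130, 140]
forecasts = holt_linear_forecast(cases)
```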
Affiliation(s)
- Tanzila Saba
- Artificial Intelligence and Data Analytics Lab, CCIS Prince Sultan University, Riyadh, Saudi Arabia
- Ibrahim Abunadi
- Artificial Intelligence and Data Analytics Lab, CCIS Prince Sultan University, Riyadh, Saudi Arabia
- Amjad Rehman Khan
- Artificial Intelligence and Data Analytics Lab, CCIS Prince Sultan University, Riyadh, Saudi Arabia
31
Rehman A. Light microscopic iris classification using ensemble multi-class support vector machine. Microsc Res Tech 2021; 84:982-991. [PMID: 33438285 DOI: 10.1002/jemt.23659] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/15/2020] [Revised: 10/24/2020] [Accepted: 11/06/2020] [Indexed: 02/04/2023]
Abstract
Similar to other biometric systems based on fingerprint, face, or DNA, iris classification could assist law enforcement agencies in identifying humans. Iris classification technology helps law-enforcement agencies recognize humans by matching their iris against iris data sets. However, iris classification is challenging in real environments due to the intricate and complex texture variations of the human iris. Accordingly, this article presents an improved Oriented FAST and Rotated BRIEF (ORB) with a Bag-of-Words model to extract distinct and robust features from the iris image, followed by an ensemble multi-class SVM to classify the iris. The proposed methodology consists of four main steps: first, iris image normalization and enhancement; second, localizing the iris region; third, iris feature extraction; and finally, iris classification using an ensemble multi-class support vector machine. For preprocessing of input images, histogram equalization, a Gaussian mask, and median filters are applied. The proposed technique is tested on two benchmark databases, CASIA-v1 and an iris image database, and achieved higher accuracy than other existing techniques reported in the state of the art.
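The histogram-equalization preprocessing step mentioned above can be sketched in a few lines: remap grayscale intensities through the cumulative distribution so contrast spreads across the full range. This minimal pure-Python version for 8-bit pixels is an illustration, not the paper's implementation:

```python
# Standard histogram equalization: map each intensity p to
# round((cdf(p) - cdf_min) / (n - cdf_min) * (levels - 1)).

def equalize_histogram(pixels, levels=256):
    """Histogram-equalize a flat list of grayscale pixel values."""
    n = len(pixels)
    # Histogram of intensity counts.
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    # Cumulative distribution function.
    cdf, total = [], 0
    for count in hist:
        total += count
        cdf.append(total)
    cdf_min = next(c for c in cdf if c > 0)
    if n == cdf_min:  # constant image: nothing to equalize
        return pixels[:]
    return [round((cdf[p] - cdf_min) / (n - cdf_min) * (levels - 1))
            for p in pixels]

# A low-contrast patch gets stretched toward the full 0-255 range.
patch = [50, 51, 52, 52, 53, 54]
equalized = equalize_histogram(patch)
```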
Affiliation(s)
- Amjad Rehman
- Artificial Intelligence and Data Analytics (AIDA) Lab, CCIS Prince Sultan University, Riyadh, Saudi Arabia
32
Liaqat A, Khan MA, Sharif M, Mittal M, Saba T, Manic KS, Al Attar FNH. Gastric Tract Infections Detection and Classification from Wireless Capsule Endoscopy using Computer Vision Techniques: A Review. Curr Med Imaging 2021; 16:1229-1242. [PMID: 32334504 DOI: 10.2174/1573405616666200425220513] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/30/2019] [Revised: 01/14/2020] [Accepted: 01/30/2020] [Indexed: 11/22/2022]
Abstract
Recent facts and figures published in various studies in the US show that approximately 27,510 new cases of gastric infections are diagnosed. Furthermore, it has also been reported that the mortality rate is quite high among diagnosed cases. Early detection of these infections can save precious human lives. As the manual diagnosis of these infections is time-consuming and expensive, automated Computer-Aided Diagnosis (CAD) systems are required to help endoscopy specialists in their clinics. Generally, an automated method for gastric infection detection using Wireless Capsule Endoscopy (WCE) comprises the following steps: contrast preprocessing, feature extraction, segmentation of infected regions, and classification into the relevant categories. Each step presents various challenges that reduce detection and recognition accuracy and increase computation time. In this review, the authors focus on the importance of WCE in medical imaging, the role of endoscopy for bleeding-related infections, and the scope of endoscopy. Further, the general steps are presented, highlighting the importance of each. A detailed discussion and future directions are provided at the end.
Affiliation(s)
- Amna Liaqat
- Department of Computer Science, COMSATS University Islamabad, Wah Cantt, Pakistan
- Muhammad Sharif
- Department of Computer Science, COMSATS University Islamabad, Wah Cantt, Pakistan
- Mamta Mittal
- Department of Computer Science & Engineering, G.B. Pant Govt. Engineering College, New Delhi, India
- Tanzila Saba
- Department of Computer and Information Sciences, Prince Sultan University, Riyadh, Saudi Arabia
- K. Suresh Manic
- Department of Electrical & Computer Engineering, National University of Science & Technology, Muscat, Oman
33
Zahoor S, Lali IU, Khan MA, Javed K, Mehmood W. Breast Cancer Detection and Classification using Traditional Computer Vision Techniques: A Comprehensive Review. Curr Med Imaging 2021; 16:1187-1200. [PMID: 32250226 DOI: 10.2174/1573405616666200406110547] [Citation(s) in RCA: 16] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/16/2019] [Revised: 12/25/2019] [Accepted: 01/03/2020] [Indexed: 11/22/2022]
Abstract
Breast cancer is a common and dangerous disease in women, and many women around the world have died from it. However, diagnosis at an initial stage can save lives. Several techniques and methods exist to diagnose cancer in breast tissues. Image processing, machine learning, and deep learning methods and techniques for diagnosing breast cancer are presented in this paper. This work should help in adopting better choices and more reliable methods to diagnose breast cancer at an initial stage. To detect breast masses, microcalcifications, and malignant cells, different techniques are used in the phases of Computer-Aided Diagnosis (CAD) systems: preprocessing, segmentation, feature extraction, and classification. We report a detailed analysis of different techniques and methods with their usage and performance measurement. From the reported results, it is concluded that improving breast cancer survival requires improving the methods and techniques used to diagnose it at an initial stage, by improving the results of CAD systems. The segmentation and classification phases also remain challenging for researchers aiming to diagnose breast cancer accurately. Therefore, more advanced tools and techniques are still essential for the accurate diagnosis and classification of breast cancer.
Affiliation(s)
- Saliha Zahoor
- Department of Computer Science, University of Gujrat, Gujrat, Pakistan
- Ikram Ullah Lali
- Department of Information Technology, University of Education, Lahore, Pakistan
- Muhammad Attique Khan
- Department of Computer Science, HITEC University, Museum Road Taxila, Rawalpindi, Pakistan
- Kashif Javed
- Department of Robotics, SMME NUST, Islamabad, Pakistan
- Waqar Mehmood
- Department of Computer Science, COMSATS University Islamabad, Wah Cantt, Pakistan
34
Sadad T, Rehman A, Munir A, Saba T, Tariq U, Ayesha N, Abbasi R. Brain tumor detection and multi-classification using advanced deep learning techniques. Microsc Res Tech 2021; 84:1296-1308. [PMID: 33400339 DOI: 10.1002/jemt.23688] [Citation(s) in RCA: 38] [Impact Index Per Article: 12.7] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/25/2020] [Revised: 10/14/2020] [Accepted: 12/06/2020] [Indexed: 11/11/2022]
Abstract
A brain tumor is an uncontrolled growth of brain cells that can develop into brain cancer if not detected at an early stage. Early brain tumor diagnosis plays a crucial role in treatment planning and patients' survival rate. Brain tumors have distinct forms, properties, and therapies; therefore, manual brain tumor detection is complicated, time-consuming, and vulnerable to error. Hence, automated computer-assisted diagnosis at high precision is currently in demand. This article presents segmentation through a Unet architecture with ResNet50 as a backbone on the Figshare data set, achieving an intersection over union (IoU) of 0.9504. Preprocessing and data augmentation were introduced to enhance the classification rate. Multi-classification of brain tumors is performed using evolutionary algorithms and reinforcement learning through transfer learning. Other deep learning methods, such as ResNet50, DenseNet201, MobileNet V2, and InceptionV3, are also applied. The results obtained show that the proposed research framework performed better than previously reported state-of-the-art methods. The CNN models applied for tumor classification, MobileNet V2, Inception V3, ResNet50, DenseNet201, and NASNet, attained accuracies of 91.8%, 92.8%, 92.9%, 93.1%, and 99.6%, respectively, with NASNet exhibiting the highest accuracy.
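The IoU figure reported above is the standard overlap metric for segmentation masks; a minimal sketch (toy flat binary masks, not the paper's pipeline) computes it as intersection over union:

```python
# Intersection over union (IoU) for two equal-length binary masks:
# IoU = |pred AND true| / |pred OR true|.

def iou(pred_mask, true_mask):
    """IoU for two equal-length binary masks (lists of 0/1)."""
    intersection = sum(p & t for p, t in zip(pred_mask, true_mask))
    union = sum(p | t for p, t in zip(pred_mask, true_mask))
    return intersection / union if union else 1.0  # both empty: perfect match

pred = [0, 1, 1, 1, 0, 0]
true = [0, 1, 1, 0, 1, 0]
score = iou(pred, true)  # 2 overlapping pixels of 4 in the union -> 0.5
```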
Affiliation(s)
- Tariq Sadad
- Department of Computer Science, University of Central Punjab, Lahore, Pakistan
- Amjad Rehman
- Artificial Intelligence & Data Analytics Lab, CCIS Prince Sultan University, Riyadh, Saudi Arabia
- Asim Munir
- Department of Computer Science and Software Engineering, International Islamic University, Islamabad, Pakistan
- Tanzila Saba
- Artificial Intelligence & Data Analytics Lab, CCIS Prince Sultan University, Riyadh, Saudi Arabia
- Usman Tariq
- College of Computer Engineering and Science, Prince Sattam bin Abdulaziz University, Alkharj, Saudi Arabia
- Noor Ayesha
- School of Clinical Medicine, Zhengzhou University, Zhengzhou, Henan, China
- Rashid Abbasi
- School of Information and Communication Engineering, University of Electronic Science and Technology of China, Chengdu, Sichuan, China
35
Saba T. Computer vision for microscopic skin cancer diagnosis using handcrafted and non-handcrafted features. Microsc Res Tech 2021; 84:1272-1283. [PMID: 33399251 DOI: 10.1002/jemt.23686] [Citation(s) in RCA: 24] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/09/2020] [Revised: 11/15/2020] [Accepted: 11/30/2020] [Indexed: 12/31/2022]
Abstract
Skin covers the entire body and is its largest organ. Skin cancer is one of the most dreadful cancers and is primarily triggered by sensitivity to ultraviolet rays from the sun. The riskiest form is melanoma, although it can start in a few different ways. Patients are often unable to recognize malignant skin growth at the initial stage. The literature shows that various handcrafted and automatic deep learning features are employed to diagnose skin cancer using traditional machine learning and deep learning techniques. The current research presents a comparison of skin cancer diagnosis techniques using handcrafted and non-handcrafted features. Additionally, clinical features such as the Menzies method, seven-point detection, asymmetry, border, color, and diameter, visual textures (GRC), local binary patterns, Gabor filters, Markov random fields, fractal dimension, and oriented histograms are also explored in the process of skin cancer detection. Several parameters, such as the Jaccard index, accuracy, Dice coefficient, precision, sensitivity, and specificity, are compared on benchmark data sets to assess the reported techniques. Finally, publicly available skin cancer data sets are described and the remaining issues are highlighted.
Affiliation(s)
- Tanzila Saba
- Artificial Intelligence & Data Analytics Lab, CCIS Prince Sultan University, Riyadh, Saudi Arabia
36
Bhargava A, Bansal A. Novel coronavirus (COVID-19) diagnosis using computer vision and artificial intelligence techniques: a review. MULTIMEDIA TOOLS AND APPLICATIONS 2021; 80:19931-19946. [PMID: 33686333 PMCID: PMC7928188 DOI: 10.1007/s11042-021-10714-5] [Citation(s) in RCA: 21] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/21/2020] [Revised: 10/23/2020] [Accepted: 02/10/2021] [Indexed: 05/07/2023]
Abstract
The universal transmission of the COVID-19 (coronavirus) pandemic creates an immediate need to commit to the fight across the whole human population. Emergency health care capacity is limited in this abrupt outbreak. In this situation, inventive automation such as computer vision (machine learning, deep learning, artificial intelligence) and medical imaging (computed tomography, X-ray) has developed into an encouraging solution against COVID-19. In recent months, various researchers have applied different image processing techniques. In this paper, a major review of image acquisition, segmentation, diagnosis, avoidance, and management is presented. An analytical comparison of the various algorithms proposed by researchers for coronavirus is carried out. Challenges and motivation for future research to deal with coronavirus are also indicated. The clinical impact and use of computer vision and deep learning are discussed, and we hope that dermatologists may gain a better understanding of these areas from this study.
37
Sadad T, Rehman A, Hussain A, Abbasi AA, Khan MQ. A Review on Multi-organ Cancer Detection Using Advanced Machine Learning Techniques. Curr Med Imaging 2020; 17:686-694. [PMID: 33334293 DOI: 10.2174/1573405616666201217112521] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/28/2020] [Revised: 07/07/2020] [Accepted: 07/23/2020] [Indexed: 12/24/2022]
Abstract
Abnormal behaviors of tumors pose a risk to human survival. Thus, the detection of cancers at their initial stage is beneficial for patients and lowers the mortality rate. However, this can be difficult due to various factors related to imaging modalities, such as complex background, low contrast, brightness issues, poorly defined borders and the shape of the affected area. Recently, computer-aided diagnosis (CAD) models have been used to accurately diagnose tumors in different parts of the human body, especially breast, brain, lung, liver, skin and colon cancers. These cancers are diagnosed using various modalities, including computed tomography (CT), magnetic resonance imaging (MRI), colonoscopy, mammography, dermoscopy and histopathology. The aim of this review was to investigate existing approaches for the diagnosis of breast, brain, lung, liver, skin and colon tumors. The review focuses on decision-making systems, including handcrafted features and deep learning architectures for tumor detection.
Affiliation(s)
- Tariq Sadad
- Department of Computer Science and Software Engineering, International Islamic University, Islamabad, Pakistan
- Amjad Rehman
- Artificial Intelligence & Data Analytics Lab, CCIS Prince Sultan University, Riyadh 11586, Saudi Arabia
- Ayyaz Hussain
- Department of Computer Science, Quaid-i-Azam University, Islamabad, Pakistan
- Aaqif Afzaal Abbasi
- Department of Software Engineering, Foundation University, Islamabad, Pakistan
- Muhammad Qasim Khan
- Department of Computer Science, COMSATS University Islamabad (Attock Campus), Pakistan
38
Lu SY, Wang SH, Zhang YD. A classification method for brain MRI via MobileNet and feedforward network with random weights. Pattern Recognit Lett 2020. [DOI: 10.1016/j.patrec.2020.10.017] [Citation(s) in RCA: 13] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/20/2022]
39
Wang L, Liu Z, Xie J, Chen Y, Zhao X, You Z, Yang M, Qian W, Tian J, Yeom K, Song J. Decoding and Systematization of Medical Imaging Features of Multiple Human Malignancies. Radiol Imaging Cancer 2020; 2:e190079. [PMID: 33778732 PMCID: PMC7983692 DOI: 10.1148/rycan.2020190079] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/01/2019] [Revised: 03/18/2020] [Accepted: 04/21/2020] [Indexed: 12/12/2022]
Abstract
Purpose To summarize the data of previously reported medical imaging features on human malignancies to provide a scientific basis for more credible imaging feature selection for future studies. Materials and Methods A search was performed in PubMed from database inception through March 23, 2018, for studies clearly stating the decoding of medical imaging features for malignancy-related objectives and/or hypotheses. The Newcastle-Ottawa scale was used for quality assessment of the included studies. Unsupervised hierarchical clustering was performed on the manually extracted features from each included study to identify the application rules of medical imaging features across human malignancies. CT images of 1000 retrospective patients with non–small cell lung cancer were used to reveal a pattern for the value distribution of complex texture features. Results A total of 5026 imaging features of malignancies affecting 20 parts of the human body from 930 original articles were collated and assessed in this study. A meta-feature construct was proposed to facilitate the investigation of details of any high-dimensional complex imaging features of malignancy. A correlation atlas was constructed to clarify the general rules of applying medical imaging features to the analysis of human malignancy. Assessment of these data revealed a pattern of value distributions of the most commonly reported texture features across human malignancies. Furthermore, the significant expression of the gene mutational signature 1B across human cancer was highly consistent with the presence of the run length imaging feature across different human malignancy types. Conclusion The results of this study may facilitate more credible imaging feature selection in all oncology tasks across a wide spectrum of human malignancies and help to reduce bias and redundancies in future medical imaging studies.
Keywords: Computer Aided Diagnosis (CAD), Computer Applications-General (Informatics), Evidence Based Medicine, Informatics, Research Design, Statistics, Technology Assessment. Supplemental material is available for this article. Published under a CC BY 4.0 license.
Affiliation(s)
- Lu Wang, Zhaoyu Liu, Jiayi Xie, Yuheng Chen, Xiaoqi Zhao, Zifan You, Mingshu Yang, Wei Qian, Jie Tian, Kristen Yeom, Jiangdian Song
- School of Medical Informatics, China Medical University, Shenyang, Liaoning, China (L.W., M.Y., J.S.); Department of Radiology, Shengjing Hospital of China Medical University, Shenyang, Liaoning, China (Z.L.); Department of Radiology, China Medical University, Shenyang, Liaoning, China (J.X., Y.C., X.Z., Z.Y.); Department of Electric and Computer Engineering, University of Texas-El Paso, El Paso, Tex (W.Q.); CAS Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, Beijing, China (J.T.); and Department of Radiology, Stanford University School of Medicine, 1201 Welch Rd Lucas Center PS055, Palo Alto, CA 94305 (K.Y., J.S.)
40
Khan MA, Qasim M, Lodhi HMJ, Nazir M, Javed K, Rubab S, Din A, Habib U. Automated design for recognition of blood cells diseases from hematopathology using classical features selection and ELM. Microsc Res Tech 2020; 84:202-216. [PMID: 32893918 DOI: 10.1002/jemt.23578] [Citation(s) in RCA: 21] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/20/2020] [Revised: 07/31/2020] [Accepted: 08/09/2020] [Indexed: 12/18/2022]
Abstract
In the human immune system, white blood cells (WBCs) are created in the bone marrow and lymphoid masses. These cells defend the human body against several infections, such as fungi and bacteria. The popular WBC types are eosinophils, lymphocytes, neutrophils, and monocytes, which are manually diagnosed by experts. The manual diagnosis process is complicated and time-consuming; therefore, an automated system is required to classify these WBCs. In this article, a new method is presented for WBC classification using feature selection and an extreme learning machine (ELM). At the very first step, data augmentation is performed to increase the number of images, and a new contrast stretching technique named pixel stretch (PS) is implemented. In the next step, color and gray level size zone matrix (GLSZM) features are calculated from PS images and fused into one vector based on their high similarity. However, a few redundant features are also included, which affect the classification performance. To handle this problem, a maximum relevance probability (MRP) based feature selection technique is implemented, with ELM serving as the fitness function for selecting the best features. All maximum relevance features are fed to the ELM, and this process continues until the error rate is minimized. In the end, the final selected features are classified through a cubic SVM. For validation of the proposed method, the LISC and Dhruv data sets are used, and the method achieved the highest accuracy of 96.60%. The results clearly show that the proposed method improves on other implemented techniques.
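The abstract does not give the exact pixel stretch (PS) formula; as a hedged illustration of a contrast-stretching step in that spirit, a standard min-max stretch to the full 8-bit range (the function name and the min-max rule are assumptions) might look like this:

```python
# Min-max contrast stretch: linearly rescale intensities so the darkest
# pixel maps to out_min and the brightest to out_max.

def min_max_stretch(pixels, out_min=0, out_max=255):
    """Linearly rescale pixel intensities to [out_min, out_max]."""
    lo, hi = min(pixels), max(pixels)
    if hi == lo:  # constant image: nothing to stretch
        return [out_min] * len(pixels)
    scale = (out_max - out_min) / (hi - lo)
    return [round(out_min + (p - lo) * scale) for p in pixels]

# Low-contrast microscopy patch spread across the full intensity range.
patch = [100, 110, 120, 130]
stretched = min_max_stretch(patch)  # [0, 85, 170, 255]
```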
Affiliation(s)
- Muhammad Qasim
- Department of Computer Science, HITEC University, Museum Road, Taxila, Pakistan
- Muhammad Nazir
- Department of Computer Science, HITEC University, Museum Road, Taxila, Pakistan
- Kashif Javed
- Department of Robotics, SMME NUST, Islamabad, Pakistan
- Saddaf Rubab
- Military College of Signals, NUST, Islamabad, Pakistan
- Ahmad Din
- Department of CS, COMSATS University Islamabad, Abbottabad, Pakistan
- Usman Habib
- Department of Computer Science, FAST-National University of Computer & Emerging Sciences (NUCES), Chiniot-Faisalabad Campus, Faisalabad-Chiniot Road, Faisalabad, Punjab, Pakistan
|
41
|
Saba T. Recent advancement in cancer detection using machine learning: Systematic survey of decades, comparisons and challenges. J Infect Public Health 2020; 13:1274-1289. [DOI: 10.1016/j.jiph.2020.06.033] [Citation(s) in RCA: 73] [Impact Index Per Article: 18.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/11/2020] [Revised: 06/21/2020] [Accepted: 06/28/2020] [Indexed: 12/24/2022] Open
|
42
|
Khan MA, Kadry S, Alhaisoni M, Nam Y, Zhang Y, Rajinikanth V, Sarfraz MS. Computer-Aided Gastrointestinal Diseases Analysis From Wireless Capsule Endoscopy: A Framework of Best Features Selection. IEEE ACCESS 2020; 8:132850-132859. [DOI: 10.1109/access.2020.3010448] [Citation(s) in RCA: 61] [Impact Index Per Article: 15.3] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 08/25/2024]
|
43
|
Sharif MI, Li JP, Khan MA, Saleem MA. Active deep neural network features selection for segmentation and recognition of brain tumors using MRI images. Pattern Recognit Lett 2020. [DOI: 10.1016/j.patrec.2019.11.019] [Citation(s) in RCA: 108] [Impact Index Per Article: 27.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022]
|
44
|
Khan MA, Rubab S, Kashif A, Sharif MI, Muhammad N, Shah JH, Zhang YD, Satapathy SC. Lungs cancer classification from CT images: An integrated design of contrast based classical features fusion and selection. Pattern Recognit Lett 2020. [DOI: 10.1016/j.patrec.2019.11.014] [Citation(s) in RCA: 60] [Impact Index Per Article: 15.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/16/2023]
|
45
|
A comparative study of features selection for skin lesion detection from dermoscopic images. ACTA ACUST UNITED AC 2019. [DOI: 10.1007/s13721-019-0209-1] [Citation(s) in RCA: 18] [Impact Index Per Article: 3.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/14/2022]
|
46
|
Adeel A, Khan MA, Sharif M, Azam F, Shah JH, Umer T, Wan S. Diagnosis and recognition of grape leaf diseases: An automated system based on a novel saliency approach and canonical correlation analysis based multiple features fusion. SUSTAINABLE COMPUTING: INFORMATICS AND SYSTEMS 2019; 24:100349. [DOI: 10.1016/j.suscom.2019.08.002] [Citation(s) in RCA: 16] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 08/25/2024]
|
47
|
Khan MA, Sharif M, Akram T, Yasmin M, Nayak RS. Stomach Deformities Recognition Using Rank-Based Deep Features Selection. J Med Syst 2019; 43:329. [PMID: 31676931 DOI: 10.1007/s10916-019-1466-3] [Citation(s) in RCA: 20] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/17/2019] [Accepted: 09/26/2019] [Indexed: 12/22/2022]
Abstract
Doctors utilize various clinical technologies, such as MRI, endoscopy, and CT scans, to identify a patient's deformity during review. Among these, wireless capsule endoscopy (WCE) is an advanced procedure used for digestive tract malformations. A complete procedure captures more than 57,000 frames, and doctors must examine the video frame by frame, which is a tedious task even for an experienced gastroenterologist. In this article, a novel computerized method is proposed for the classification of abdominal infections of the gastrointestinal tract from WCE images. The three core steps of the suggested system are segmentation, deep feature extraction and fusion, and robust feature selection. Ulcer abnormalities are initially extracted from the WCE videos through a proposed color features based low-level and high-level saliency (CFbLHS) estimation method. A DenseNet CNN model is then utilized, and features are computed through transfer learning (TL) prior to feature optimization using Kapur's entropy. A parallel fusion methodology is adopted for the selection of maximum feature values (PMFV). For feature selection, Tsallis entropy is calculated and the features are sorted in descending order; finally, the top 50% highest-ranked features are selected and classified using a multilayered feedforward neural network. Simulation on the collected WCE dataset achieved a maximum accuracy of 99.5% in 21.15 s.
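The Tsallis-entropy ranking step described above (score each feature, sort in descending order, keep the top 50%) can be illustrated with a small sketch. The histogram binning and the value of q are assumptions for illustration, not taken from the paper:

```python
import numpy as np

def tsallis_entropy(p, q=2.0):
    """Tsallis entropy of a probability distribution (q != 1)."""
    p = p[p > 0]
    return (1.0 - np.sum(p ** q)) / (q - 1.0)

def rank_top_half(features):
    """Score each feature column by the Tsallis entropy of its normalized
    histogram, sort descending, and keep the top 50% of columns."""
    scores = []
    for col in features.T:
        hist, _ = np.histogram(col, bins=16)
        scores.append(tsallis_entropy(hist / hist.sum()))
    order = np.argsort(scores)[::-1]          # descending, as in the paper
    return order[: features.shape[1] // 2]    # top 50% highest-ranked
```

For q = 2 this reduces to 1 minus the sum of squared probabilities, so a uniform distribution scores highest and a degenerate (constant) feature scores zero and is ranked last.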
Affiliation(s)
- Muhammad Sharif
- Department of E&CE, COMSATS University Islamabad, Wah Campus, Islamabad, Pakistan
- Tallha Akram
- Information Science, Canara Engineering College, Mangaluru, Karnataka, India
- Mussarat Yasmin
- Department of E&CE, COMSATS University Islamabad, Wah Campus, Islamabad, Pakistan
- Ramesh Sunder Nayak
- Department of CS, COMSATS University Islamabad, Wah Campus, Islamabad, Pakistan
|
48
|
Khan MA, Rashid M, Sharif M, Javed K, Akram T. Classification of gastrointestinal diseases of stomach from WCE using improved saliency-based method and discriminant features selection. MULTIMEDIA TOOLS AND APPLICATIONS 2019; 78:27743-27770. [DOI: 10.1007/s11042-019-07875-9] [Citation(s) in RCA: 29] [Impact Index Per Article: 5.8] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/30/2018] [Revised: 04/22/2019] [Accepted: 06/10/2019] [Indexed: 08/25/2024]
|
49
|
Saba T, Khan MA, Rehman A, Marie-Sainte SL. Region Extraction and Classification of Skin Cancer: A Heterogeneous framework of Deep CNN Features Fusion and Reduction. J Med Syst 2019; 43:289. [PMID: 31327058 DOI: 10.1007/s10916-019-1413-3] [Citation(s) in RCA: 90] [Impact Index Per Article: 18.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/05/2019] [Accepted: 07/03/2019] [Indexed: 01/12/2023]
Abstract
Cancer has been one of the leading causes of death over the last two decades. A lesion is diagnosed as either malignant or benign, depending upon the severity of the infection and its current stage. Conventional methods require a detailed physical inspection by an expert dermatologist, which is time-consuming and imprecise. Therefore, several computer vision methods have been introduced lately that are cost-effective and reasonably accurate. In this work, we propose a new automated approach for skin lesion detection and recognition using a deep convolutional neural network (DCNN). The proposed cascaded design incorporates three fundamental steps: (a) contrast enhancement through fast local Laplacian filtering (FlLpF) along with HSV color transformation; (b) lesion boundary extraction using a color CNN approach followed by an XOR operation; (c) in-depth feature extraction by applying transfer learning with the Inception V3 model, prior to feature fusion using a Hamming distance (HD) approach. An entropy-controlled feature selection method is also introduced for the selection of the most discriminant features. The proposed method is tested on the PH2 and ISIC 2017 datasets, whereas the recognition phase is validated on the PH2, ISBI 2016, and ISBI 2017 datasets. The results show that the proposed method outperforms several existing methods, attaining accuracies of 98.4% on the PH2 dataset, 95.1% on the ISBI 2016 dataset, and 94.8% on the ISBI 2017 dataset.
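The entropy-controlled selection step mentioned above can be sketched as follows. Treating "entropy-controlled" as thresholding each feature's normalized Shannon entropy is an assumption made for illustration, not the paper's exact criterion; the bin count and threshold are likewise placeholders:

```python
import numpy as np

def entropy_controlled_select(features, threshold=0.5, bins=16):
    """Keep feature columns whose normalized Shannon entropy exceeds a
    threshold -- a simple stand-in for entropy-controlled selection."""
    keep = []
    for j, col in enumerate(features.T):
        hist, _ = np.histogram(col, bins=bins)
        p = hist / hist.sum()
        p = p[p > 0]
        h = -np.sum(p * np.log2(p)) / np.log2(bins)  # normalized to [0, 1]
        if h > threshold:
            keep.append(j)
    return keep
```

A near-constant feature carries little discriminative spread and scores near zero, so it is dropped, while features whose values spread across many bins are retained.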
Affiliation(s)
- Tanzila Saba
- College of Computer and Information Sciences, Prince Sultan University, Riyadh, 11586, Saudi Arabia
- Muhammad Attique Khan
- Department of Computer Science and Engineering, HITEC University, Museum Road, Taxila, Pakistan
- Amjad Rehman
- College of Computer and Information Sciences, Imam Mohammad Ibn Saud Islamic University, Riyadh, Saudi Arabia
|
50
|
Safdar A, Khan MA, Shah JH, Sharif M, Saba T, Rehman A, Javed K, Khan JA. Intelligent microscopic approach for identification and recognition of citrus deformities. Microsc Res Tech 2019; 82:1542-1556. [PMID: 31209970 DOI: 10.1002/jemt.23320] [Citation(s) in RCA: 20] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/28/2019] [Revised: 04/24/2019] [Accepted: 05/13/2019] [Indexed: 11/08/2022]
Affiliation(s)
- Muhammad A. Khan
- Department of Computer Science and Engineering, HITEC University, Taxila, Pakistan
- Tanzila Saba
- College of Computer and Information Sciences, Prince Sultan University, Riyadh, Saudi Arabia
- Amjad Rehman
- Faculty of Computing, Universiti Teknologi Malaysia, Malaysia
- Kashif Javed
- Department of Robotics, SMME NUST, Islamabad, Pakistan
- Junaid A. Khan
- Department of Computer Science and Engineering, HITEC University, Taxila, Pakistan
|