1
Demirbaş AA, Üzen H, Fırat H. Spatial-attention ConvMixer architecture for classification and detection of gastrointestinal diseases using the Kvasir dataset. Health Inf Sci Syst 2024;12:32. PMID: 38685985; PMCID: PMC11056348; DOI: 10.1007/s13755-024-00290-x.
Abstract
Gastrointestinal (GI) disorders, encompassing conditions such as cancer and Crohn's disease, pose a significant threat to public health. Endoscopic examinations have become crucial for diagnosing and treating these disorders efficiently. However, the subjective nature of manual evaluation by gastroenterologists can lead to errors in disease classification. In addition, the difficulty of identifying diseased tissue in the GI tract and the high visual similarity between classes make this a challenging domain. Automated classification systems that use artificial intelligence to address these problems have gained traction: automatic detection of disease in medical images aids diagnosis and reduces detection time. In this study, we propose a new architecture to support computer-assisted diagnosis and automated disease detection for GI diseases. This architecture, called Spatial-Attention ConvMixer (SAC), extends the patch-extraction technique at the core of the ConvMixer architecture with a spatial attention mechanism (SAM). The SAM enables the network to concentrate selectively on the most informative areas, assigning an importance to each spatial location within the feature maps. We employ the Kvasir dataset to assess the accuracy of GI disease classification with the SAC architecture, comparing its results against Vanilla ViT, Swin Transformer, ConvMixer, MLPMixer, ResNet50, and SqueezeNet. The SAC method achieves 93.37% accuracy, while the other architectures achieve 79.52%, 74.52%, 92.48%, 63.04%, 87.44%, and 85.59%, respectively. The proposed spatial attention block thus improves the accuracy of the ConvMixer architecture on Kvasir, outperforming the state-of-the-art methods.
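The spatial attention idea can be illustrated in a few lines of numpy. This is a hypothetical sketch, not the paper's SAC block: the learned convolution that would produce the attention map is replaced by a fixed mix of channel-wise average and max pooling, followed by a sigmoid and a broadcast multiplication over the channels.

```python
import numpy as np

def spatial_attention(fmap):
    """Reweight an (H, W, C) feature map by a spatial attention mask.

    Hypothetical stand-in for a learned SAM block: channel-wise average
    and max pooling are mixed (in place of a learned convolution),
    squashed by a sigmoid, and broadcast-multiplied over all channels.
    """
    avg_pool = fmap.mean(axis=-1)            # (H, W)
    max_pool = fmap.max(axis=-1)             # (H, W)
    mixed = 0.5 * (avg_pool + max_pool)      # fixed mix, not learned weights
    mask = 1.0 / (1.0 + np.exp(-mixed))      # sigmoid -> importance in (0, 1)
    return fmap * mask[..., None]            # emphasise informative locations

x = np.random.default_rng(0).normal(size=(8, 8, 32))
y = spatial_attention(x)
print(y.shape)
```

Because the mask lies in (0, 1), each spatial location is attenuated in proportion to its (here, pooled) salience; in the real architecture the pooling-to-mask mapping is learned end to end.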
Affiliation(s)
- Hüseyin Üzen
- Department of Computer Engineering, Faculty of Engineering, Bingol University, Bingol, Turkey
- Hüseyin Fırat
- Department of Computer Engineering, Faculty of Engineering, Dicle University, Diyarbakır, Turkey
2
Hossain T, Shamrat FMJM, Zhou X, Mahmud I, Mazumder MSA, Sharmin S, Gururajan R. Development of a multi-fusion convolutional neural network (MF-CNN) for enhanced gastrointestinal disease diagnosis in endoscopy image analysis. PeerJ Comput Sci 2024;10:e1950. PMID: 38660192; PMCID: PMC11041948; DOI: 10.7717/peerj-cs.1950.
Abstract
Gastrointestinal (GI) diseases are prevalent medical conditions that require accurate and timely diagnosis for effective treatment. To address this, we developed the Multi-Fusion Convolutional Neural Network (MF-CNN), a deep learning framework that strategically integrates and adapts elements from six deep learning models, enhancing feature extraction and classification of GI diseases from endoscopic images. The MF-CNN architecture leverages truncated and partially frozen layers from existing models, augmented with novel components such as Auxiliary Fusing Layers (AuxFL), Fusion Residual Block (FuRB), and Alpha Dropouts (αDO) to improve precision and robustness. This design facilitates the precise identification of conditions such as ulcerative colitis, polyps, esophagitis, and healthy colons. Our methodology involved preprocessing endoscopic images sourced from open databases, including KVASIR and ETIS-Larib Polyp DB, using adaptive histogram equalization (AHE) to enhance their quality. The MF-CNN framework supports detailed feature mapping for improved interpretability of the model's internal workings. An ablation study was conducted to validate the contribution of each component, demonstrating that the integration of AuxFL, αDO, and FuRB played a crucial part in reducing overfitting and efficiency saturation and enhancing overall model performance. The MF-CNN demonstrated outstanding performance in terms of efficacy, achieving an accuracy rate of 99.25%. It also excelled in other key performance metrics with a precision of 99.27%, a recall of 99.25%, and an F1-score of 99.25%. These metrics confirmed the model's proficiency in accurate classification and its capability to minimize false positives and negatives across all tested GI disease categories. Furthermore, the AUC values were exceptional, averaging 1.00 for both test and validation sets, indicating perfect discriminative ability. The findings of the P-R curve analysis and confusion matrix further confirmed the robust classification performance of the MF-CNN. This research introduces a technique for medical imaging that can potentially transform diagnostics in gastrointestinal healthcare facilities worldwide.
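Of the components named above, alpha dropout has a compact standard formulation (from the self-normalizing-networks literature) that is easy to sketch in numpy. The constants and the affine correction below follow that standard form and are an assumption about the paper's αDO layer, not taken from it: dropped units are set to the SELU saturation value rather than zero, and an affine step restores zero mean and unit variance.

```python
import numpy as np

# SELU constants; alpha dropout is designed so activations stay
# self-normalising (zero mean, unit variance) after dropping units.
_LAMBDA, _ALPHA = 1.0507009873554805, 1.6732632423543772
_ALPHA_P = -_LAMBDA * _ALPHA  # saturation value dropped units are set to

def alpha_dropout(x, p, rng):
    """Standard alpha dropout: drop-to-saturation plus affine correction."""
    q = 1.0 - p                               # keep probability
    mask = rng.random(x.shape) < q
    a = (q + _ALPHA_P ** 2 * q * p) ** -0.5   # rescale to unit variance
    b = -a * p * _ALPHA_P                     # shift back to zero mean
    return a * np.where(mask, x, _ALPHA_P) + b

rng = np.random.default_rng(0)
x = rng.standard_normal(1_000_000)
y = alpha_dropout(x, p=0.1, rng=rng)
print(round(y.mean(), 4), round(y.var(), 4))
```

On standardised input, the output mean stays near 0 and the variance near 1, which is what makes this dropout variant compatible with self-normalizing blocks such as the FuRB described above.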
Affiliation(s)
- Tanzim Hossain
- Department of Software Engineering, Daffodil International University, Dhaka, Bangladesh
- Xujuan Zhou
- School of Business, University of Southern Queensland, Springfield, Australia
- Imran Mahmud
- Department of Software Engineering, Daffodil International University, Dhaka, Bangladesh
- Md. Sakib Ali Mazumder
- Department of Software Engineering, Daffodil International University, Dhaka, Bangladesh
- Sharmin Sharmin
- Department of Computer System and Technology, University of Malaya, Kuala Lumpur, Malaysia
- Raj Gururajan
- School of Business, University of Southern Queensland, Springfield, Australia
3
Jiang B, Dorosan M, Leong JWH, Ong MEH, Lam SSW, Ang TL. Development and validation of a deep learning system for detection of small bowel pathologies in capsule endoscopy: a pilot study in a Singapore institution. Singapore Med J 2024;65:133-140. PMID: 38527297; PMCID: PMC11060635; DOI: 10.4103/singaporemedj.smj-2023-187.
Abstract
INTRODUCTION: Deep learning models can assess the quality of images and discriminate among abnormalities in small bowel capsule endoscopy (CE), reducing fatigue and the time needed for diagnosis. They serve as a decision support system, partially automating the diagnosis process by providing probability predictions for abnormalities.
METHODS: We demonstrated the use of deep learning models in CE image analysis, specifically by piloting a bowel preparation model (BPM) and an abnormality detection model (ADM) to determine frame-level view quality and the presence of abnormal findings, respectively. We used convolutional neural network-based models pretrained on large-scale open-domain data to extract spatial features of CE images, which were then fed to a dense feed-forward neural network classifier. We combined the open-source Kvasir-Capsule dataset (n = 43) with locally collected CE data (n = 29).
RESULTS: Model performance was compared using averaged five-fold and two-fold cross-validation for BPMs and ADMs, respectively. The best BPM, based on a pretrained ResNet50 architecture, had areas under the receiver operating characteristic and precision-recall curves of 0.969±0.008 and 0.843±0.041, respectively. The best ADM, also based on ResNet50, had top-1 and top-2 accuracies of 84.03±0.051 and 94.78±0.028, respectively. The models could process approximately 200-250 images per second and showed good discrimination on time-critical abnormalities such as bleeding.
CONCLUSION: Our pilot models showed the potential to improve time to diagnosis in CE workflows. To our knowledge, our approach is unique to the Singapore context. The value of our work can be further evaluated in a pragmatic manner that is sensitive to existing clinician workflow and resource constraints.
Affiliation(s)
- Bochao Jiang
- Department of Gastroenterology and Hepatology, Changi General Hospital, Singapore
- Michael Dorosan
- Health Services Research Centre, Singapore Health Services Pte Ltd, Singapore
- Justin Wen Hao Leong
- Department of Gastroenterology and Hepatology, Changi General Hospital, Singapore
- Marcus Eng Hock Ong
- Health Services and Systems Research, Duke-NUS Medical School, Singapore
- Department of Emergency Medicine, Singapore General Hospital, Singapore
- Sean Shao Wei Lam
- Health Services Research Centre, Singapore Health Services Pte Ltd, Singapore
- Tiing Leong Ang
- Department of Gastroenterology and Hepatology, Changi General Hospital, Singapore
4
Naz J, Sharif MI, Sharif MI, Kadry S, Rauf HT, Ragab AE. A Comparative Analysis of Optimization Algorithms for Gastrointestinal Abnormalities Recognition and Classification Based on Ensemble XcepNet23 and ResNet18 Features. Biomedicines 2023;11:1723. PMID: 37371819; DOI: 10.3390/biomedicines11061723.
Abstract
Esophagitis, cancerous growths, bleeding, and ulcers are typical manifestations of gastrointestinal disorders, which account for a significant portion of human mortality. Traditional diagnostic methods can be exhausting for both patients and doctors. The major aim of this research is to propose a hybrid method that accurately diagnoses abnormalities of the gastrointestinal tract and promotes early treatment to help reduce deaths. The major phases of the proposed method are dataset augmentation, preprocessing, feature engineering (feature extraction, fusion, and optimization), and classification. Image enhancement is performed using hybrid contrast-stretching algorithms. Deep features are extracted through transfer learning from the ResNet18 model and the proposed XcepNet23 model, and the obtained deep features are ensembled with texture features. The ensemble feature vector is optimized using the Binary Dragonfly Algorithm (BDA), the Moth-Flame Optimization (MFO) algorithm, and the Particle Swarm Optimization (PSO) algorithm. Two datasets are utilized: a Hybrid dataset with five classes and the Kvasir-V1 dataset with eight classes. Compared with the most recent methods, the accuracy achieved by the proposed method on both datasets was superior: the Q-SVM classifier reached a promising 100% on the Hybrid dataset and 99.24% on Kvasir-V1.
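The "optimize an ensembled feature vector, score subsets with an SVM" loop common to BDA, MFO, and PSO can be sketched generically. The toy single-bit-flip optimizer below is a deliberate stand-in for those metaheuristics (it only shows the wrapper-selection structure: a binary mask scored by cross-validated SVM accuracy), and the data and label rule are synthetic.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(1)
# Stand-in ensemble vector: imagine deep (ResNet18/XcepNet23) + texture features.
X = rng.normal(size=(100, 20))
y = (X[:, 3] - X[:, 7] > 0).astype(int)   # synthetic label for the demo

def fitness(mask):
    """Cross-validated SVM accuracy on the selected feature subset."""
    if mask.sum() == 0:
        return 0.0
    cols = mask.astype(bool)
    return cross_val_score(SVC(kernel="linear"), X[:, cols], y, cv=3).mean()

# Toy single-bit-flip search standing in for BDA / MFO / PSO.
mask = np.ones(X.shape[1], dtype=int)      # start from the full ensemble vector
best = fitness(mask)
for _ in range(40):
    j = rng.integers(X.shape[1])
    trial = mask.copy()
    trial[j] ^= 1                           # flip one feature in or out
    f = fitness(trial)
    if f >= best:                           # keep equal-or-better subsets
        mask, best = trial, f
print(round(best, 3), int(mask.sum()))
```

Real swarm optimizers explore many candidate masks per iteration and share information between them, but the fitness function, a classifier scored on the selected columns, is the same.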
Affiliation(s)
- Javeria Naz
- Department of Computer Science, COMSATS University Islamabad, Wah Campus, Wah 47040, Pakistan
- Muhammad Imran Sharif
- Department of Computer Science, COMSATS University Islamabad, Wah Campus, Wah 47040, Pakistan
- Muhammad Irfan Sharif
- Department of Computer Science, University of Education Lahore, Jauharabad Campus, Lahore 54770, Pakistan
- Seifedine Kadry
- Department of Applied Data Science, Noroff University College, 4612 Kristiansand, Norway
- Department of Electrical and Computer Engineering, Lebanese American University, Byblos P.O. Box 13-5053, Lebanon
- Artificial Intelligence Research Center (AIRC), College of Engineering and Information Technology, Ajman University, Ajman 346, United Arab Emirates
- MEU Research Unit, Middle East University, Amman 11831, Jordan
- Hafiz Tayyab Rauf
- Centre for Smart Systems, AI and Cybersecurity, Staffordshire University, Stoke-on-Trent ST4 2DE, UK
- Adham E Ragab
- Industrial Engineering Department, College of Engineering, King Saud University, P.O. Box 800, Riyadh 11421, Saudi Arabia
5
Ahmad N, Shah JH, Khan MA, Baili J, Ansari GJ, Tariq U, Kim YJ, Cha JH. A novel framework of multiclass skin lesion recognition from dermoscopic images using deep learning and explainable AI. Front Oncol 2023;13:1151257. PMID: 37346069; PMCID: PMC10281646; DOI: 10.3389/fonc.2023.1151257.
Abstract
Skin cancer is a serious disease that affects people all over the world. Melanoma is an aggressive form of skin cancer, and early detection can significantly reduce human mortality. In the United States, approximately 97,610 new cases of melanoma will be diagnosed in 2023. However, challenges such as lesion irregularities, low-contrast lesions, intraclass color similarity, redundant features, and imbalanced datasets make improving recognition accuracy with computerized techniques extremely difficult. This work presents a new framework for skin lesion recognition using data augmentation, deep learning, and explainable artificial intelligence. In the proposed framework, data augmentation is first performed to increase the dataset size, and two pretrained deep learning models (Xception and ShuffleNet) are then fine-tuned and trained using deep transfer learning; both utilize the global average pooling layer for deep feature extraction. Because analysis of this step showed that some important information was missing from either model alone, the features were fused. Since fusion increased the computational time, we developed an improved Butterfly Optimization Algorithm to select only the best features, which are classified using machine learning classifiers. In addition, GradCAM-based visualization is performed to analyze the important regions of the image. Two publicly available datasets, ISIC2018 and HAM10000, were utilized, obtaining improved accuracies of 99.3% and 91.5%, respectively. Compared with state-of-the-art methods, the proposed framework yields improved accuracy at lower computational time.
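The two mechanical steps named here, global average pooling for deep feature extraction and fusion of the two backbones' descriptors, reduce to two numpy operations. The channel counts below are illustrative guesses for the two backbones' final conv outputs, not values taken from the paper.

```python
import numpy as np

def gap(conv_out):
    """Global average pooling: (H, W, C) feature maps -> length-C descriptor."""
    return conv_out.mean(axis=(0, 1))

rng = np.random.default_rng(2)
# Illustrative final conv outputs from two hypothetical backbones.
xception_like = rng.normal(size=(7, 7, 2048))
shufflenet_like = rng.normal(size=(7, 7, 1024))

# Serial fusion: concatenate the two pooled descriptors into one vector.
fused = np.concatenate([gap(xception_like), gap(shufflenet_like)])
print(fused.shape)
```

The fused vector is what a downstream selector (here, the improved Butterfly Optimization Algorithm) would prune before classification.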
Affiliation(s)
- Naveed Ahmad
- Department of Computer Science, COMSATS University Islamabad, Wah Cantt, Pakistan
- Jamal Hussain Shah
- Department of Computer Science, COMSATS University Islamabad, Wah Cantt, Pakistan
- Muhammad Attique Khan
- Department of Computer Science, HITEC University, Taxila, Pakistan
- Department of Informatics, University of Leicester, Leicester, United Kingdom
- Jamel Baili
- College of Computer Science, King Khalid University, Abha, Saudi Arabia
- Usman Tariq
- Department of Management Information Systems, CoBA, Prince Sattam Bin Abdulaziz University, Al-Kharj, Saudi Arabia
- Ye Jin Kim
- Department of Computer Science, Hanyang University, Seoul, Republic of Korea
- Jae-Hyuk Cha
- Department of Computer Science, Hanyang University, Seoul, Republic of Korea
6
Ghaleb Al-Mekhlafi Z, Mohammed Senan E, Sulaiman Alshudukhi J, Abdulkarem Mohammed B. Hybrid Techniques for Diagnosing Endoscopy Images for Early Detection of Gastrointestinal Disease Based on Fusion Features. Int J Intell Syst 2023. DOI: 10.1155/2023/8616939.
Abstract
Gastrointestinal (GI) diseases, particularly tumours, are among the most widespread and dangerous diseases and thus require timely health care for early detection to reduce deaths. Endoscopy is an effective technique for diagnosing GI diseases, but it produces videos containing thousands of frames, which are difficult and time-consuming for a gastroenterologist to analyse in full. Artificial intelligence systems address this challenge by analysing thousands of images with high speed and effective accuracy. Hence, systems with different methodologies are developed in this work. The first methodology diagnoses endoscopy images of GI diseases using VGG-16 + SVM and DenseNet-121 + SVM. The second uses an artificial neural network (ANN) based on features fused from VGG-16 and DenseNet-121, before and after dimensionality reduction by principal component analysis (PCA). The third uses an ANN based on features fused between VGG-16 and handcrafted features, and between DenseNet-121 and handcrafted features; herein, the handcrafted features combine the gray-level co-occurrence matrix (GLCM), discrete wavelet transform (DWT), fuzzy colour histogram (FCH), and local binary pattern (LBP) descriptors. All systems achieved promising results for diagnosing endoscopy images of the gastroenterology dataset. Based on the fused VGG-16 and handcrafted features, the ANN reached an accuracy, sensitivity, precision, specificity, and AUC of 98.9%, 98.70%, 98.94%, 99.69%, and 99.51%, respectively.
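Of the handcrafted descriptors listed (GLCM, DWT, FCH, LBP), the local binary pattern is compact enough to sketch directly in numpy. This is a minimal 8-neighbour, radius-1 variant; production code would normally use skimage.feature.local_binary_pattern, and the paper does not specify its exact LBP configuration.

```python
import numpy as np

def lbp_histogram(img):
    """Basic 8-neighbour local binary pattern histogram for a 2-D image.

    Each interior pixel gets an 8-bit code: one bit per neighbour,
    set when the neighbour is >= the centre. The normalised histogram
    of codes is the texture descriptor.
    """
    c = img[1:-1, 1:-1]                       # interior (centre) pixels
    codes = np.zeros_like(c, dtype=int)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = img.shape
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]   # shifted neighbours
        codes |= (nb >= c).astype(int) << bit
    hist = np.bincount(codes.ravel(), minlength=256)
    return hist / hist.sum()

gray = np.random.default_rng(4).integers(0, 256, size=(32, 32))
h = lbp_histogram(gray)
print(h.shape)
```

The resulting 256-bin histogram would be concatenated with the GLCM, DWT, and FCH descriptors to form the handcrafted part of the fused feature vector.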
Affiliation(s)
- Zeyad Ghaleb Al-Mekhlafi
- Department of Information and Computer Science, College of Computer Science and Engineering, University of Ha’il, Ha’il 81481, Saudi Arabia
- Ebrahim Mohammed Senan
- Department of Artificial Intelligence, Faculty of Computer Science and Information Technology, Alrazi University, Sana’a, Yemen
- Jalawi Sulaiman Alshudukhi
- Department of Information and Computer Science, College of Computer Science and Engineering, University of Ha’il, Ha’il 81481, Saudi Arabia
- Badiea Abdulkarem Mohammed
- Department of Computer Engineering, College of Computer Science and Engineering, University of Ha’il, Ha’il 81481, Saudi Arabia
7
Atasever S, Azginoglu N, Terzi DS, Terzi R. A comprehensive survey of deep learning research on medical image analysis with focus on transfer learning. Clin Imaging 2023;94:18-41. PMID: 36462229; DOI: 10.1016/j.clinimag.2022.11.003.
Abstract
This survey aims to identify commonly used methods, datasets, future trends, knowledge gaps, constraints, and limitations in the field, providing an overview of current solutions in medical image analysis alongside the rapid developments in transfer learning (TL). Unlike previous studies, this survey groups studies from the last five years (January 2017 to February 2021) according to anatomical region and details the modality, medical task, TL method, source data, target data, and public or private datasets used in medical imaging. It also provides readers with detailed information on technical challenges, opportunities, and future research trends. In this way, an overview of recent developments is provided to help researchers select the most effective and efficient methods and access widely used, publicly available medical datasets, as well as the research gaps and limitations of the available literature.
Affiliation(s)
- Sema Atasever
- Computer Engineering Department, Nevsehir Hacı Bektas Veli University, Nevsehir, Turkey.
- Nuh Azginoglu
- Computer Engineering Department, Kayseri University, Kayseri, Turkey.
- Ramazan Terzi
- Computer Engineering Department, Amasya University, Amasya, Turkey.
8
Zahid M, Khan MA, Azam F, Sharif M, Kadry S, Mohanty JR. Pedestrian identification using motion-controlled deep neural network in real-time visual surveillance. Soft Comput 2023;27:453-469. DOI: 10.1007/s00500-021-05701-9.
9
Ahsan T, Khalid S, Najam S, Attique Khan M, Jin Kim Y, Chang B. HRNetO: Human Action Recognition Using Unified Deep Features Optimization Framework. Comput Mater Contin 2023;75:1089-1105. DOI: 10.32604/cmc.2023.034563.
10
Manic KS, Rajinikanth V, Al-Bimani AS, Taniar D, Kadry S. Framework to Detect Schizophrenia in Brain MRI Slices with Mayfly Algorithm-Selected Deep and Handcrafted Features. Sensors (Basel) 2022;23:280. PMID: 36616876; PMCID: PMC9823879; DOI: 10.3390/s23010280.
Abstract
Brain abnormality causes severe human problems, and thorough screening is necessary to identify the disease. In clinics, bio-image-supported brain abnormality screening is employed mainly because of its investigative accuracy compared with bio-signal (EEG)-based practice. This research aims to develop a reliable disease screening framework for the automatic identification of schizophrenia (SCZ) conditions from brain MRI slices. This scheme consists of the following phases: (i) MRI slice collection and pre-processing, (ii) implementation of VGG16 to extract deep features (DF), (iii) collection of handcrafted features (HF), (iv) mayfly algorithm-supported optimal feature selection, (v) serial feature concatenation, and (vi) binary classifier execution and validation. The performance of the proposed scheme was independently tested with DF, HF, and concatenated features (DF+HF), and the outcome of this study verifies that schizophrenia screening accuracy with DF+HF is superior to the other methods. In this work, brain MRI images from 40 patients (20 control and 20 SCZ class) were considered, and the following accuracies were achieved: DF provided >91%, HF obtained >85%, and DF+HF achieved >95%. Therefore, this framework is clinically significant, and in the future it can be used to inspect actual patients' brain MRI slices.
Affiliation(s)
- K. Suresh Manic
- National University of Science and Technology, Muscat P.O. Box 112, Oman
- Venkatesan Rajinikanth
- Department of Computer Science and Engineering, Division of Research and Innovation, Saveetha School of Engineering, Saveetha Institute of Medical and Technical Sciences, Chennai 602105, India
- Ali Saud Al-Bimani
- National University of Science and Technology, Muscat P.O. Box 112, Oman
- David Taniar
- Faculty of Information Technology, Monash University, Wellington Rd, Clayton, VIC 3800, Australia
- Seifedine Kadry
- Department of Applied Data Science, Noroff University College, 4612 Kristiansand, Norway
- Artificial Intelligence Research Center (AIRC), Ajman University, Ajman P.O. Box 346, United Arab Emirates
- Department of Electrical and Computer Engineering, Lebanese American University, Byblos P.O. Box 36, Lebanon
11
Mohan R, Kadry S, Rajinikanth V, Majumdar A, Thinnukool O. Automatic Detection of Tuberculosis Using VGG19 with Seagull-Algorithm. Life (Basel) 2022;12:1848. PMID: 36430983; PMCID: PMC9692667; DOI: 10.3390/life12111848.
Abstract
The incidence of communicable diseases in humans is steadily rising for various reasons, and timely detection and handling can slow their spread. Tuberculosis (TB) is a severe communicable illness caused by the bacterium Mycobacterium tuberculosis, which predominantly affects the lungs and causes severe respiratory problems. Owing to its significance, several clinical-level detection approaches for TB are suggested, including lung diagnosis with chest X-ray images. The proposed work aims to develop an automatic TB detection system to assist the pulmonologist in confirming disease severity, decision-making, and treatment execution. The proposed system employs a pre-trained VGG19 with the following phases: (i) image pre-processing, (ii) mining of deep features, (iii) enhancing the X-ray images with chosen procedures and mining of handcrafted features, (iv) feature optimization using the Seagull Algorithm and serial concatenation, and (v) binary classification and validation. Classification is executed with 10-fold cross-validation, and the work is investigated using MATLAB® software. Using the concatenated deep and handcrafted features, the proposed approach provided a classification accuracy of 98.6190% with the SVM-Medium Gaussian (SVM-MG) classifier.
Affiliation(s)
- Ramya Mohan
- Department of Computer Science and Engineering, Division of Research and Innovation, Saveetha School of Engineering, SIMATS, Chennai 602105, India
- Seifedine Kadry
- Department of Applied Data Science, Noroff University College, 4612 Kristiansand, Norway
- Artificial Intelligence Research Center (AIRC), College of Engineering and Information Technology, Ajman University, Ajman 346, United Arab Emirates
- Department of Electrical and Computer Engineering, Lebanese American University, Byblos 1401, Lebanon
- Venkatesan Rajinikanth
- Department of Computer Science and Engineering, Division of Research and Innovation, Saveetha School of Engineering, SIMATS, Chennai 602105, India
- Arnab Majumdar
- Faculty of Engineering, Imperial College London, London SW7 2AZ, UK
- Orawit Thinnukool
- Faculty of Engineering, Imperial College London, London SW7 2AZ, UK
- College of Arts, Media, and Technology, Chiang Mai University, Chiang Mai 50200, Thailand
12
Montalbo FJP. Fusing Compressed Deep ConvNets with a Self-Normalizing Residual Block and Alpha Dropout for a Cost-Efficient Classification and Diagnosis of Gastrointestinal Tract Diseases. MethodsX 2022;9:101925. DOI: 10.1016/j.mex.2022.101925.
13
Narasimha Raju AS, Jayavel K, Rajalakshmi T. Dexterous Identification of Carcinoma through ColoRectalCADx with Dichotomous Fusion CNN and UNet Semantic Segmentation. Comput Intell Neurosci 2022;2022:4325412. PMID: 36262620; PMCID: PMC9576362; DOI: 10.1155/2022/4325412.
Abstract
Human colorectal disorders of the digestive tract are typically identified by colonoscopy. The current system recognizes cancer through a three-stage pipeline that utilizes two sets of colonoscopy data; however, identification of polyps by visualization has not been addressed. The proposed system, ColoRectalCADx, is a five-stage system that takes three publicly accessible datasets as input for cancer detection: CVC Clinic DB, Kvasir2, and Hyper Kvasir. After the image preprocessing stages, experiments were performed with seven prominent convolutional neural networks (CNNs) (end-to-end) and nine fusion CNN models to extract spatial features. The end-to-end CNN and fusion features are then passed through a Discrete Wavelet Transform (DWT) to retrieve time and spatial-frequency features, and classified with a Support Vector Machine (SVM). Experimentally, results were obtained for five stages. For the three datasets, from stage 1 to stage 3, the end-to-end CNN DenseNet-201 obtained the best testing accuracy (98%, 87%, 84%), ((98%, 97%), (87%, 87%), (84%, 84%)), ((99.03%, 99%), (88.45%, 88%), (83.61%, 84%)). From stage 2, the CNN fusion DaRD-22 obtained the optimal test accuracy ((93%, 97%), (82%, 84%), (69%, 57%)), and for stage 4, the ADaRDEV2-22 fusion achieved the best test accuracy ((95.73%, 94%), (81.20%, 81%), (72.56%, 58%)). For the input segmentation datasets CVC Clinic-Seg, Kvasir-Seg, and Hyper Kvasir, malignant polyps were identified with the UNet CNN model, with loss scores of 0.7842 (CVC Clinic DB), 0.6977 (Kvasir2), and 0.6910 (Hyper Kvasir).
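The DWT step can be illustrated with a one-level 2-D Haar transform in numpy. This is a minimal stand-in (the abstract does not specify the wavelet, and libraries such as PyWavelets are the usual tool): rows are split into averages and differences, then the same is done column-wise, yielding the four familiar sub-bands.

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2-D Haar DWT returning (LL, LH, HL, HH) sub-bands."""
    a = (img[:, 0::2] + img[:, 1::2]) / 2.0   # row-wise averages
    d = (img[:, 0::2] - img[:, 1::2]) / 2.0   # row-wise details
    LL = (a[0::2] + a[1::2]) / 2.0            # low-low: coarse approximation
    LH = (a[0::2] - a[1::2]) / 2.0            # horizontal detail
    HL = (d[0::2] + d[1::2]) / 2.0            # vertical detail
    HH = (d[0::2] - d[1::2]) / 2.0            # diagonal detail
    return LL, LH, HL, HH

x = np.arange(64, dtype=float).reshape(8, 8)
LL, LH, HL, HH = haar_dwt2(x)
print(LL.shape)
```

The flattened sub-band coefficients are the kind of time/spatial-frequency features that would then be fed to the SVM stage described above.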
Affiliation(s)
- Akella S. Narasimha Raju
- Department of Networking and Communications, School of Computing, SRM Institute of Science and Technology, Kattankulathur, Chennai 603203, India
- Kayalvizhi Jayavel
- Department of Networking and Communications, School of Computing, SRM Institute of Science and Technology, Kattankulathur, Chennai 603203, India
- Thulasi Rajalakshmi
- Department of Electronics and Communication Engineering, School of Electrical and Electronics Engineering, SRM Institute of Science and Technology, Kattankulathur, Chennai 603203, India
14
Yin TK, Huang KL, Chiu SR, Yang YQ, Chang BR. Endoscopy Artefact Detection by Deep Transfer Learning of Baseline Models. J Digit Imaging 2022;35:1101-1110. PMID: 35478060; PMCID: PMC9582060; DOI: 10.1007/s10278-022-00627-6.
Abstract
In endoscopy, a long, thin tube with a light source and a camera at its tip is inserted into the body to obtain video frames from inside organs and visualise tumours on a screen. However, multiple artefacts exist in these video frames that cause difficulty during the diagnosis of cancers. In this research, deep learning was applied to detect eight kinds of artefacts: specularity, bubbles, saturation, contrast, blood, instrument, blur, and imaging artefacts. Based on transfer learning with pre-trained parameters and fine-tuning, two state-of-the-art methods were applied for detection: faster region-based convolutional neural networks (Faster R-CNN) and EfficientDet. Experiments were implemented on the grand-challenge dataset Endoscopy Artefact Detection and Segmentation (EAD2020). To validate our approach, we used phase I (2,200 frames) and phase II (331 frames) of the original training dataset with ground-truth annotations as the training and testing datasets, respectively. Among the tested methods, EfficientDet-D2 achieves a score of 0.2008 (mAPd × 0.6 + mIoUd × 0.4) on the dataset, better than three other baselines (Faster R-CNN, YOLOv3, and RetinaNet) and competitive with the best non-baseline result of 0.25123 on the leaderboard, although our testing was on phase II of 331 frames instead of the original 200 testing frames. Without extra improvement techniques beyond basic neural networks, such as test-time augmentation, we showed that a simple baseline can achieve state-of-the-art performance in detecting artefacts in endoscopy. In conclusion, we proposed combining EfficientDet-D2 with suitable data augmentation and pre-trained parameters during fine-tuning to detect artefacts in endoscopy.
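The mIoU term in the challenge score above is built from per-box intersection-over-union between predicted and ground-truth boxes; a minimal reference implementation:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned (x1, y1, x2, y2) boxes."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)   # overlap area (0 if disjoint)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# Two unit-overlap 2x2 boxes: intersection 1, union 7.
print(iou((0, 0, 2, 2), (1, 1, 3, 3)))
```

mAP additionally thresholds this quantity (a prediction counts as a true positive only above a chosen IoU) before averaging precision over classes and thresholds.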
Affiliation(s)
- Tang-Kai Yin
- Department of Computer Science and Information Engineering, National University of Kaohsiung, No. 700, Kaohsiung University Rd., Nan-Tzu Dist., 811, Kaohsiung, Taiwan.
- Kai-Lun Huang
- Department of Computer Science and Information Engineering, National University of Kaohsiung, No. 700, Kaohsiung University Rd., Nan-Tzu Dist., 811, Kaohsiung, Taiwan
- Si-Rong Chiu
- Department of Computer Science and Information Engineering, National University of Kaohsiung, No. 700, Kaohsiung University Rd., Nan-Tzu Dist., 811, Kaohsiung, Taiwan
- Yu-Qi Yang
- Department of Computer Science and Information Engineering, National University of Kaohsiung, No. 700, Kaohsiung University Rd., Nan-Tzu Dist., 811, Kaohsiung, Taiwan
- Bao-Rong Chang
- Department of Computer Science and Information Engineering, National University of Kaohsiung, No. 700, Kaohsiung University Rd., Nan-Tzu Dist., 811, Kaohsiung, Taiwan
15
Vajravelu A, Selvan KT, Jamil MMBA, Anitha J, Diez IDLT. Machine learning techniques to detect bleeding frame and area in wireless capsule endoscopy video. JOURNAL OF INTELLIGENT & FUZZY SYSTEMS 2022. [DOI: 10.3233/jifs-213099] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Abstract
Wireless Capsule Endoscopy (WCE) allows direct visual inspection of the patient's full digestive tract without invasion or pain, at the price of a long examination by physicians of a large number of images. This research presents a new colour-extraction approach to differentiate bleeding frames from normal ones and to locate bleeding areas. We make a two-part proposal. First, we use the full colour information of the WCE images and a pixel-represented clustering approach to obtain clustered centres that characterise WCE images as words; we then evaluate the status of a WCE frame using support vector machine (SVM) and K-nearest neighbour (KNN) classifiers. The classification performance of 95.75% accuracy with an AUC of 0.9771 validates the promising performance of the suggested approach for bleeding classification. Second, we present a two-step approach for extracting saliency maps to emphasise bleeding locations: a distinct colour channel mixer builds the first-stage saliency map, and the second-stage saliency map is obtained from optical contrast. We locate bleeding spots following a suitable fusion approach and thresholding. Quantitative and qualitative studies demonstrate that our approaches can correctly distinguish bleeding sites from their neighbourhoods.
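The frame-level idea above, representing each frame by colour features and labelling it from its nearest labelled neighbours, can be sketched with a minimal KNN vote. The toy mean-RGB features and `knn_predict` helper are illustrative only, not the authors' pipeline (which clusters pixels into visual words and also uses an SVM):

```python
import math

def knn_predict(train, query, k=3):
    """train: list of (feature_vector, label); returns majority label of k nearest."""
    dists = sorted((math.dist(vec, query), label) for vec, label in train)
    votes = [label for _, label in dists[:k]]
    return max(set(votes), key=votes.count)

# Toy mean-RGB features: bleeding frames skew strongly towards the red channel.
train = [
    ((0.9, 0.1, 0.1), "bleeding"),
    ((0.8, 0.2, 0.1), "bleeding"),
    ((0.4, 0.5, 0.4), "normal"),
    ((0.5, 0.5, 0.5), "normal"),
    ((0.45, 0.55, 0.45), "normal"),
]
```

A real system would derive far richer colour descriptors, but the voting step is the same shape.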
Affiliation(s)
- Ashok Vajravelu
- Faculty of Electrical and Electronic Engineering, Universiti Tun Hussein Onn, Malaysia
- K.S. Tamil Selvan
- Department of Electronics and Communication Engineering, KPR Institute of Engineering and Technology, Coimbatore, India
- Jude Anitha
- Department of ECE, Karunya Institute of Technology and Sciences, Coimbatore, India
- Isabel de la Torre Diez
- Department of Signal Theory and Communications and Telematics Engineering, University of Valladolid, Spain
16
Raut V, Gunjan R, Shete VV, Eknath UD. Gastrointestinal tract disease segmentation and classification in wireless capsule endoscopy using intelligent deep learning model. COMPUTER METHODS IN BIOMECHANICS AND BIOMEDICAL ENGINEERING: IMAGING & VISUALIZATION 2022. [DOI: 10.1080/21681163.2022.2099298] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/16/2022]
Affiliation(s)
- Vrushali Raut
- Electronics & Communication Engineering, MIT School of Engineering, MIT Art, Design and Technology University, Pune, India
- Reena Gunjan
- Electronics & Communication Engineering, MIT School of Engineering, MIT Art, Design and Technology University, Pune, India
- Virendra V. Shete
- Electronics & Communication Engineering, MIT School of Engineering, MIT Art, Design and Technology University, Pune, India
- Upasani Dhananjay Eknath
- Electronics & Communication Engineering, MIT School of Engineering, MIT Art, Design and Technology University, Pune, India
17
Time-based self-supervised learning for Wireless Capsule Endoscopy. Comput Biol Med 2022; 146:105631. [DOI: 10.1016/j.compbiomed.2022.105631] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/15/2021] [Revised: 04/17/2022] [Accepted: 04/17/2022] [Indexed: 11/18/2022]
18
Diagnosing gastrointestinal diseases from endoscopy images through a multi-fused CNN with auxiliary layers, alpha dropouts, and a fusion residual block. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2022.103683] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022]
19
Feature Selection to Predict LED Light Energy Consumption with Specific Light Recipes in Closed Plant Production Systems. APPLIED SCIENCES-BASEL 2022. [DOI: 10.3390/app12125901] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/09/2023]
Abstract
The use of closed growth environments, such as greenhouses, plant factories, and vertical farms, represents a sustainable alternative for fresh food production. Closed plant production systems (CPPSs) allow any plant variety to be grown regardless of the season. Artificial lighting plays an essential role in CPPSs, as it promotes growth by providing optimal conditions for plant development. Nevertheless, it is a model with a high demand for electricity, required by the artificial radiation systems that support the developing plants; artificial lighting accounts for a high percentage (40% to 50%) of the costs in CPPSs. Due to this, lighting strategies are essential to improve the sustainability and profitability of closed plant production systems. However, no tools have been applied in the literature to contribute to energy savings in LED-type artificial radiation systems through the configuration of light recipes (wavelength combinations). For a CPPS to be cost-effective and sustainable, a pre-evaluation of the energy consumption of plant cultivation must be considered. Artificial intelligence (AI) methods can be integrated into the prediction of crucial variables such as each input light colour or specific wavelength (red, green, blue, and white), along with light intensity (quantity), frequency (pulsed light), and duty cycle. This paper focuses on the feature-selection stage, in which a regression model is trained to predict the energy consumption of LED lights with specific light recipes in CPPSs. This stage is critical because it identifies the most representative features for training the model, and the other stages depend on it. These tools can enable further in-depth analysis of the energy savings obtainable with light recipes and with pulsed and continuous light operating modes in artificial LED lighting systems.
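A filter-style feature-selection stage of the kind described above can be sketched by ranking candidate inputs by absolute Pearson correlation with the measured energy consumption. The variable names (`duty`, `red`, `energy`) and toy data are illustrative assumptions, not the paper's dataset or its chosen selection method:

```python
import math
import statistics

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length numeric sequences."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy) if sx and sy else 0.0

def rank_features(features, target):
    """features: {name: [values]}; names ordered by |correlation| with target."""
    return sorted(features, key=lambda n: abs(pearson(features[n], target)), reverse=True)

# Toy light-recipe samples: energy tracks duty cycle almost perfectly,
# while the red-channel setting here is essentially noise.
duty = [10, 20, 30, 40, 50]
red = [5, 3, 6, 4, 5]
energy = [21, 39, 62, 78, 102]
```

Wrapper methods (training the regressor on candidate subsets) are a common next step once a cheap filter like this has pruned the inputs.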
20
Sharif MI, Li JP, Khan MA, Kadry S, Tariq U. M3BTCNet: multi model brain tumor classification using metaheuristic deep neural network features optimization. Neural Comput Appl 2022. [DOI: 10.1007/s00521-022-07204-6] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/01/2022]
21
Mohammad F, Al-Razgan M. Deep Feature Fusion and Optimization-Based Approach for Stomach Disease Classification. SENSORS (BASEL, SWITZERLAND) 2022; 22:2801. [PMID: 35408415 PMCID: PMC9003289 DOI: 10.3390/s22072801] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 03/03/2022] [Revised: 03/26/2022] [Accepted: 04/02/2022] [Indexed: 01/10/2023]
Abstract
Cancer is the deadliest of all diseases and a main cause of human mortality. Several types of cancer attack the human body and affect its organs. Among all the types of cancer, stomach cancer is the most dangerous, spreading rapidly and needing to be diagnosed at an early stage. Early diagnosis of stomach cancer is essential to reduce the mortality rate. The manual diagnosis process is time-consuming and requires many tests and the availability of an expert doctor. Therefore, automated techniques are required to diagnose stomach infections from endoscopic images. Many computerized techniques have been introduced in the literature, but due to a few challenges (i.e., high similarity between healthy and infected regions, irrelevant feature extraction, and so on), there is much room to improve accuracy and reduce computational time. In this paper, a deep-learning-based stomach disease classification method employing deep feature extraction, fusion, and optimization using WCE images is proposed. The proposed method comprises several phases: data augmentation performed to increase the dataset images, deep transfer learning adopted for deep feature extraction, feature fusion performed on the deep extracted features, optimization of the fused feature matrix with a modified dragonfly optimization method, and final classification of the stomach disease. The feature-extraction phase employed two pre-trained deep CNN models (Inception v3 and DenseNet-201), performing activation on feature-derivation layers. Later, parallel concatenation was performed on the deep-derived features, which were optimized using the meta-heuristic method named the dragonfly algorithm. The optimized feature matrix was classified by employing machine-learning algorithms and achieved an accuracy of 99.8% on the combined stomach disease dataset. A comparison conducted with state-of-the-art techniques shows improved accuracy.
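The fusion step described above, combining per-image features from two backbones into one matrix before selection and classification, reduces to concatenation. A minimal stand-in (toy vectors in place of Inception v3 / DenseNet-201 activations; `concat_features` is our naming, not the authors' code):

```python
def concat_features(features_a, features_b):
    """Concatenate per-image feature vectors from two extractors, row by row."""
    assert len(features_a) == len(features_b), "one row per image"
    return [list(fa) + list(fb) for fa, fb in zip(features_a, features_b)]

# Toy activations: 2 images, 2 dims from one backbone and 3 from the other.
inception_feats = [[0.1, 0.7], [0.3, 0.2]]
densenet_feats = [[0.9, 0.4, 0.5], [0.6, 0.1, 0.8]]
fused = concat_features(inception_feats, densenet_feats)
```

In the paper this fused matrix is then pruned by the dragonfly optimizer before classification; the concatenation itself is the simple part shown here.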
Affiliation(s)
- Farah Mohammad
- Department of Computer Science, College of Computer and Information Sciences, King Saud University, Riyadh 11451, Saudi Arabia
- Muna Al-Razgan
- Department of Software Engineering, College of Computer and Information Sciences, King Saud University, Riyadh 11451, Saudi Arabia
22
Dey N, V. R. Image processing methods to enhance disease information in MRI slices. Magn Reson Imaging 2022. [DOI: 10.1016/b978-0-12-823401-3.00002-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/27/2022]
23
Zia Ur Rehman M, Ahmed F, Attique Khan M, Tariq U, Shaukat Jamal S, Ahmad J, Hussain I. Classification of Citrus Plant Diseases Using Deep Transfer Learning. COMPUTERS, MATERIALS & CONTINUA 2022; 70:1401-1417. [DOI: 10.32604/cmc.2022.019046] [Citation(s) in RCA: 12] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/31/2021] [Accepted: 05/05/2021] [Indexed: 08/25/2024]
24
Ali Shah F, Attique Khan M, Sharif M, Tariq U, Khan A, Kadry S, Thinnukool O. A Cascaded Design of Best Features Selection for Fruit Diseases Recognition. COMPUTERS, MATERIALS & CONTINUA 2022; 70:1491-1507. [DOI: 10.32604/cmc.2022.019490] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/15/2021] [Accepted: 06/05/2021] [Indexed: 08/25/2024]
25
Rajesh Kannan S, Sivakumar J, Ezhilarasi P. Automatic detection of COVID-19 in chest radiographs using serially concatenated deep and handcrafted features. JOURNAL OF X-RAY SCIENCE AND TECHNOLOGY 2022; 30:231-244. [PMID: 34924434 DOI: 10.3233/xst-211050] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
Since the infectious disease occurrence rate in the human community is gradually rising due to varied reasons, appropriate diagnosis and treatment are essential to control its spread. The recently discovered COVID-19 is one such contagious disease, which has infected numerous people globally. This contagious disease is controlled through several diagnostic and handling actions. Medical image-supported diagnosis of COVID-19 infection is an approved clinical practice. This research aims to develop a new Deep Learning Method (DLM) to detect COVID-19 infection using chest X-rays. The proposed work implemented two methods for detecting COVID-19 infection: (i) using Firefly Algorithm (FA)-optimized deep features and (ii) using combined deep and machine features optimized with the FA. In this work, a 5-fold cross-validation method is engaged to train and test the detection methods. The performance of this system is analyzed individually, confirming that the deep-feature-based technique achieves a detection accuracy of > 92% with an SVM-RBF classifier, while combining deep and machine features achieves > 96% accuracy with a Fine KNN classifier. In the future, this technique may play a vital role in testing and validating X-ray images collected from patients suffering from infectious diseases.
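The 5-fold cross-validation protocol mentioned above partitions the samples so each fold serves once as the test set. A minimal index-splitting sketch (assumed interleaved assignment; real pipelines usually also shuffle or stratify by class):

```python
def k_fold_indices(n_samples, k=5):
    """Yield (train_idx, test_idx) pairs for k-fold cross-validation."""
    # Interleaved folds: sample i goes to fold i % k.
    folds = [list(range(i, n_samples, k)) for i in range(k)]
    for i, test in enumerate(folds):
        train = [j for fold in folds[:i] + folds[i + 1:] for j in fold]
        yield sorted(train), sorted(test)

# Toy usage on 10 samples with k = 5.
splits = list(k_fold_indices(10, k=5))
```

Each sample appears in exactly one test fold, so every image contributes to both training and evaluation across the 5 rounds.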
Affiliation(s)
- J Sivakumar
- St. Joseph's College of Engineering, OMR, Chennai, India
- P Ezhilarasi
- St. Joseph's College of Engineering, OMR, Chennai, India
26
Saeed T, Kiong Loo C, Shahreeza Safiruz Kassim M. Ensembles of Deep Learning Framework for Stomach Abnormalities Classification. COMPUTERS, MATERIALS & CONTINUA 2022; 70:4357-4372. [DOI: 10.32604/cmc.2022.019076] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/01/2021] [Accepted: 06/18/2021] [Indexed: 09/01/2023]
27
Hussain N, Attique Khan M, Tariq U, Kadry S, E. Yar M, M. Mostafa A, Ali Alnuaim A, Ahmad S. Multiclass Cucumber Leaf Diseases Recognition Using Best Feature Selection. COMPUTERS, MATERIALS & CONTINUA 2022; 70:3281-3294. [DOI: 10.32604/cmc.2022.019036] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/30/2021] [Accepted: 07/03/2021] [Indexed: 08/25/2024]
28
M. Alajlan A. Automatic Heart Disease Detection by Classification of Ventricular Arrhythmias on ECG Using Machine Learning. COMPUTERS, MATERIALS & CONTINUA 2022; 71:17-33. [DOI: 10.32604/cmc.2022.018613] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/14/2021] [Accepted: 04/18/2021] [Indexed: 08/25/2024]
29
Imran T, Attique Khan M, Sharif M, Tariq U, Zhang YD, Nam Y, Nam Y, Kang BG. Malaria Blood Smear Classification Using Deep Learning and Best Features Selection. COMPUTERS, MATERIALS & CONTINUA 2022; 70:1875-1891. [DOI: 10.32604/cmc.2022.018946] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/27/2021] [Accepted: 05/18/2021] [Indexed: 08/25/2024]
30
Zia F, Irum I, Nawaz Qadri N, Nam Y, Khurshid K, Ali M, Ashraf I, Attique Khan M. A Multilevel Deep Feature Selection Framework for Diabetic Retinopathy Image Classification. COMPUTERS, MATERIALS & CONTINUA 2022; 70:2261-2276. [DOI: 10.32604/cmc.2022.017820] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/12/2021] [Accepted: 04/19/2021] [Indexed: 08/25/2024]
31
Syed HH, Khan MA, Tariq U, Armghan A, Alenezi F, Khan JA, Rho S, Kadry S, Rajinikanth V. A Rapid Artificial Intelligence-Based Computer-Aided Diagnosis System for COVID-19 Classification from CT Images. Behav Neurol 2021; 2021:2560388. [PMID: 34966463 PMCID: PMC8712188 DOI: 10.1155/2021/2560388] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/06/2021] [Revised: 09/16/2021] [Accepted: 11/17/2021] [Indexed: 12/23/2022] Open
Abstract
The excessive number of COVID-19 cases reported worldwide so far, supplemented by a high rate of false alarms in its diagnosis using the conventional polymerase chain reaction method, has led to an increased number of high-resolution computed tomography (CT) examinations being conducted. Manual inspection of the latter, besides being slow, is susceptible to human error, especially because of an uncanny resemblance between the CT scans of COVID-19 and those of pneumonia, and therefore demands a proportional increase in the number of expert radiologists. Artificial intelligence-based computer-aided diagnosis of COVID-19 using CT scans has recently been introduced and has proven its effectiveness in terms of accuracy and computation time. In this work, a similar framework for classification of COVID-19 using CT scans is proposed. The proposed method includes four core steps: (i) preparing a database of three classes, namely COVID-19, pneumonia, and normal; (ii) modifying three pretrained deep learning models (VGG16, ResNet50, and ResNet101) for the classification of COVID-19-positive scans; (iii) proposing an activation function and improving the firefly algorithm for feature selection; and (iv) fusing the optimal selected features using a descending-order serial approach and classifying with multiclass supervised learning algorithms. We demonstrate that, on a publicly available dataset, this system attains an improved accuracy of 97.9% with a computational time of approximately 34 seconds.
Affiliation(s)
- Hassaan Haider Syed
- Department of Computer Science, HITEC University Taxila, Museum Road, Taxila, Pakistan
- Muhammad Attique Khan
- Department of Computer Science, HITEC University Taxila, Museum Road, Taxila, Pakistan
- Usman Tariq
- College of Computer Engineering and Sciences, Prince Sattam Bin Abdulaziz University, Al-Kharj, Saudi Arabia
- Ammar Armghan
- Department of Electrical Engineering, Jouf University, Sakaka 75471, Saudi Arabia
- Fayadh Alenezi
- Department of Electrical Engineering, Jouf University, Sakaka 75471, Saudi Arabia
- Junaid Ali Khan
- Department of Computer Science, HITEC University Taxila, Museum Road, Taxila, Pakistan
- Seungmin Rho
- Department of Industrial Security, Chung-Ang University, Seoul, Republic of Korea (06974)
- Seifedine Kadry
- Faculty of Applied Computing and Technology, Noroff University College, Kristiansand, Norway
- Venkatesan Rajinikanth
- Department of Electronics and Instrumentation, St. Joseph's College of Engineering, Chennai 600119, India
32
Ayyaz MS, Lali MIU, Hussain M, Rauf HT, Alouffi B, Alyami H, Wasti S. Hybrid Deep Learning Model for Endoscopic Lesion Detection and Classification Using Endoscopy Videos. Diagnostics (Basel) 2021; 12:diagnostics12010043. [PMID: 35054210 PMCID: PMC8775223 DOI: 10.3390/diagnostics12010043] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/28/2021] [Revised: 12/22/2021] [Accepted: 12/23/2021] [Indexed: 02/06/2023] Open
Abstract
In medical imaging, the detection and classification of stomach diseases are challenging due to the resemblance of different symptoms, image contrast, and complex backgrounds. Computer-aided diagnosis (CAD) plays a vital role in the medical imaging field, allowing accurate results to be obtained in minimal time. This article proposes a new hybrid method to detect and classify stomach diseases using endoscopy videos. The proposed methodology comprises seven significant steps: data acquisition, data preprocessing, transfer learning of deep models, feature extraction, feature selection, hybridization, and classification. We selected two different CNN models (VGG19 and AlexNet) to extract features, applying transfer learning techniques before using them as feature extractors. We used a genetic algorithm (GA) for feature selection, due to its adaptive nature. We fused the selected features of both models using a serial-based approach. Finally, the best features were provided to multiple machine learning classifiers for detection and classification. The proposed approach was evaluated on a personally collected dataset of five classes: gastritis, ulcer, esophagitis, bleeding, and healthy. We observed that the proposed technique performed best with a Cubic SVM, achieving 99.8% accuracy. To validate the proposed technique, we considered the following statistical measures: classification accuracy, recall, precision, False Negative Rate (FNR), Area Under the Curve (AUC), and time. In addition, we provide a fair state-of-the-art comparison of our proposed technique with existing techniques, which demonstrates its merit.
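Genetic-algorithm feature selection, as used in the pipeline above, typically evolves bitmasks over the feature set. The toy GA below (our own simplified sketch, with a stand-in fitness of summed per-feature "usefulness" scores rather than actual classifier accuracy) shows the crossover-plus-repair shape of such a selector:

```python
import random

def ga_select(scores, n_keep, generations=30, pop=20, seed=0):
    """Evolve a fixed-size 0/1 feature mask maximising summed stand-in scores."""
    rng = random.Random(seed)
    n = len(scores)

    def fitness(mask):
        return sum(s for s, m in zip(scores, mask) if m)

    def random_mask():
        idx = rng.sample(range(n), n_keep)
        return tuple(1 if i in idx else 0 for i in range(n))

    population = [random_mask() for _ in range(pop)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop // 2]          # elitist selection
        children = []
        while len(survivors) + len(children) < pop:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n)               # one-point crossover
            child = list(a[:cut] + b[cut:])
            # Repair so exactly n_keep features stay selected.
            ones = [i for i, v in enumerate(child) if v]
            zeros = [i for i, v in enumerate(child) if not v]
            while len(ones) > n_keep:
                child[ones.pop(rng.randrange(len(ones)))] = 0
            while len(ones) < n_keep:
                j = zeros.pop(rng.randrange(len(zeros)))
                child[j] = 1
                ones.append(j)
            children.append(tuple(child))
        population = survivors + children
    return max(population, key=fitness)

# Toy run: keep 3 of 5 features given stand-in usefulness scores.
best_mask = ga_select([0.9, 0.1, 0.8, 0.2, 0.7], n_keep=3)
```

In the paper, fitness would be the cross-validated accuracy of a classifier trained on the masked features, which is far more expensive than this stand-in.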
Affiliation(s)
- M Shahbaz Ayyaz
- Department of Computer Science, University of Gujrat, Gujrat 50700, Pakistan; (M.S.A.); (M.H.)
- Muhammad Ikram Ullah Lali
- Department of Information Sciences, University of Education Lahore, Lahore 41000, Pakistan
- Mubbashar Hussain
- Department of Computer Science, University of Gujrat, Gujrat 50700, Pakistan
- Hafiz Tayyab Rauf
- Centre for Smart Systems, AI and Cybersecurity, Staffordshire University, Stoke-on-Trent ST4 2DE, UK
- Correspondence:
- Bader Alouffi
- Department of Computer Science, College of Computers and Information Technology, Taif University, P. O. Box 11099, Taif 21944, Saudi Arabia
- Hashem Alyami
- Department of Computer Science, College of Computers and Information Technology, Taif University, P. O. Box 11099, Taif 21944, Saudi Arabia
- Shahbaz Wasti
- Department of Information Sciences, University of Education Lahore, Lahore 41000, Pakistan
33
Khan MA, Rajinikanth V, Satapathy SC, Taniar D, Mohanty JR, Tariq U, Damaševičius R. VGG19 Network Assisted Joint Segmentation and Classification of Lung Nodules in CT Images. Diagnostics (Basel) 2021; 11:2208. [PMID: 34943443 PMCID: PMC8699868 DOI: 10.3390/diagnostics11122208] [Citation(s) in RCA: 16] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/28/2021] [Revised: 11/17/2021] [Accepted: 11/24/2021] [Indexed: 12/27/2022] Open
Abstract
The pulmonary nodule is a lung condition whose early diagnosis and treatment are essential to cure the patient. This paper introduces a deep learning framework to support the automated detection of lung nodules in computed tomography (CT) images. The proposed framework employs VGG-SegNet-supported nodule mining and pre-trained DL-based classification to support automated lung nodule detection. The classification of lung CT images is implemented using the attained deep features, which are then serially concatenated with handcrafted features, such as the Grey Level Co-occurrence Matrix (GLCM), Local Binary Pattern (LBP), and Pyramid Histogram of Oriented Gradients (PHOG), to enhance disease detection accuracy. The images used for experiments were collected from the LIDC-IDRI and Lung-PET-CT-Dx datasets. The experimental results show that the VGG19 architecture with concatenated deep and handcrafted features can achieve an accuracy of 97.83% with the SVM-RBF classifier.
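Among the handcrafted descriptors named above, LBP has a particularly compact definition: each pixel is encoded by thresholding its 8 neighbours against its own value. A minimal sketch (nested lists standing in for a grayscale image; the bit ordering is one common convention, not necessarily the paper's):

```python
def lbp_codes(img):
    """8-neighbour local binary pattern code for each interior pixel of a 2-D grid."""
    # Clockwise neighbour offsets starting from the top-left.
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = len(img), len(img[0])
    codes = []
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            center = img[r][c]
            code = 0
            for bit, (dr, dc) in enumerate(offs):
                if img[r + dr][c + dc] >= center:  # neighbour at least as bright
                    code |= 1 << bit
            codes.append(code)
    return codes
```

A histogram of these codes over an image (or region) is the LBP feature vector that would be concatenated with the deep features.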
Affiliation(s)
- Venkatesan Rajinikanth
- Department of Electronics and Instrumentation Engineering, St. Joseph’s College of Engineering, Chennai, Tamilnadu 600119, India
- Suresh Chandra Satapathy
- School of Computer Engineering, Kalinga Institute of Industrial Technology (Deemed to Be University), Bhubaneswar, Odisha 751024, India
- David Taniar
- Faculty of Information Technology, Monash University, Clayton, VIC 3800, Australia
- Jnyana Ranjan Mohanty
- School of Computer Applications, Kalinga Institute of Industrial Technology (Deemed to Be University), Bhubaneswar, Odisha 751024, India
- Usman Tariq
- College of Computer Engineering and Sciences, Prince Sattam Bin Abdulaziz University, Al-Kharj 11942, Saudi Arabia
- Robertas Damaševičius
- Faculty of Applied Mathematics, Silesian University of Technology, 44-100 Gliwice, Poland
34
Miao F, Yao L, Zhao X. Evolving convolutional neural networks by symbiotic organisms search algorithm for image classification. Appl Soft Comput 2021. [DOI: 10.1016/j.asoc.2021.107537] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/28/2022]
35
Jain S, Seal A, Ojha A, Yazidi A, Bures J, Tacheci I, Krejcar O. A deep CNN model for anomaly detection and localization in wireless capsule endoscopy images. Comput Biol Med 2021; 137:104789. [PMID: 34455302 DOI: 10.1016/j.compbiomed.2021.104789] [Citation(s) in RCA: 17] [Impact Index Per Article: 5.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/11/2021] [Revised: 08/18/2021] [Accepted: 08/18/2021] [Indexed: 12/22/2022]
Abstract
Wireless capsule endoscopy (WCE) is one of the most efficient methods for the examination of gastrointestinal tracts. Computer-aided intelligent diagnostic tools alleviate the challenges faced during manual inspection of long WCE videos. Several approaches have been proposed in the literature for the automatic detection and localization of anomalies in WCE images. Some of them focus on specific anomalies such as bleeding, polyp, lesion, etc. However, relatively fewer generic methods have been proposed to detect all those common anomalies simultaneously. In this paper, a deep convolutional neural network (CNN) based model 'WCENet' is proposed for anomaly detection and localization in WCE images. The model works in two phases. In the first phase, a simple and efficient attention-based CNN classifies an image into one of the four categories: polyp, vascular, inflammatory, or normal. If the image is classified in one of the abnormal categories, it is processed in the second phase for the anomaly localization. Fusion of Grad-CAM++ and a custom SegNet is used for anomalous region segmentation in the abnormal image. WCENet classifier attains accuracy and area under receiver operating characteristic of 98% and 99%. The WCENet segmentation model obtains a frequency weighted intersection over union of 81%, and an average dice score of 56% on the KID dataset. WCENet outperforms nine different state-of-the-art conventional machine learning and deep learning models on the KID dataset. The proposed model demonstrates potential for clinical applications.
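The two-phase WCENet idea above reduces to a simple control flow: classify first, and run localisation only for abnormal categories. The sketch below uses injected callables as stand-ins for the attention-based CNN classifier and the Grad-CAM++/SegNet fusion; all names are illustrative, not the authors' code:

```python
# Categories after which the (stubbed) localisation phase should run.
ABNORMAL = {"polyp", "vascular", "inflammatory"}

def analyse(image, classify, localise):
    """classify/localise are injected callables standing in for the two CNNs."""
    label = classify(image)                                   # phase 1
    region = localise(image) if label in ABNORMAL else None   # phase 2
    return label, region

# Toy usage with stub models returning a fixed label and bounding region.
label, region = analyse("frame-001", lambda _: "polyp", lambda _: (4, 4, 10, 10))
```

Skipping the segmentation phase for normal frames is what makes this design cheap on the long, mostly normal WCE videos the abstract mentions.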
Affiliation(s)
- Samir Jain
- PDPM Indian Institute of Information Technology, Design and Manufacturing, Jabalpur, 482005, India
- Ayan Seal
- PDPM Indian Institute of Information Technology, Design and Manufacturing, Jabalpur, 482005, India
- Aparajita Ojha
- PDPM Indian Institute of Information Technology, Design and Manufacturing, Jabalpur, 482005, India
- Anis Yazidi
- Department of Computer Science, OsloMet - Oslo Metropolitan University, Oslo, Norway; Department of Plastic and Reconstructive Surgery, Oslo University Hospital, Oslo, Norway; Department of Computer Science, Norwegian University of Science and Technology, Trondheim, Norway
- Jan Bures
- Second Department of Internal Medicine-Gastroenterology, Charles University, Faculty of Medicine in Hradec Kralove and University Hospital Hradec Kralove, Sokolska 581, Hradec Kralove, 50005, Czech Republic
- Ilja Tacheci
- Second Department of Internal Medicine-Gastroenterology, Charles University, Faculty of Medicine in Hradec Kralove and University Hospital Hradec Kralove, Sokolska 581, Hradec Kralove, 50005, Czech Republic
- Ondrej Krejcar
- Center for Basic and Applied Research, Faculty of Informatics and Management, University of Hradec Kralove, Hradecka 1249, Hradec Kralove, 50003, Czech Republic; Malaysia Japan International Institute of Technology, Universiti Teknologi Malaysia, Jalan Sultan Yahya Petra, 54100, Kuala Lumpur, Malaysia
36
Irshad M, Sharif M, Yasmin M, Rehman A, Khan MA. Discrete light sheet microscopic segmentation of left ventricle using morphological tuning and active contours. Microsc Res Tech 2021; 85:308-323. [PMID: 34418197 DOI: 10.1002/jemt.23906] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/27/2021] [Revised: 06/06/2021] [Accepted: 08/02/2021] [Indexed: 11/06/2022]
Abstract
Left ventricle segmentation of cardiovascular MR scans is required for the diagnosis and subsequent treatment of cardiac diseases. Automatic systems for left ventricle segmentation are being studied to attain more accurate results in a shorter period of time. A novel algorithm introducing discrete segmentation of the left ventricle achieves independent processing of images swiftly. The workflow consists of four segments. First, automated localization is performed on the MR image. Second, preprocessing improves and enhances image quality using mean contrast adjustment. Central segmentation of the endocardium and epicardium layers uses the novel MTAC (morphological tuning using active contours) segmentation algorithm, which combines active contours with morphological tuning to produce an adequate and desirable segmentation. The snake model progresses in a restrained fashion, iterating over the left ventricle contours. Finally, contrast-based refining overcomes minor edge problems for both the outer and inner boundaries. The proposed algorithm is evaluated on Sunnybrook cardiac MR images, producing an overall average perpendicular distance of 2.45 mm, average Dice metrics of 91.3% (endocardium) and 92.16% (epicardium), and an overall Dice metric of 91.7% for endocardium and epicardium contours against ground-truth contours.
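The "morphological tuning" step above applies morphological operators to clean up a contour's binary mask. As a minimal stand-in (not the paper's MTAC algorithm), 4-neighbour binary dilation on a 0/1 grid shows the elementary operation such tuning is built from:

```python
def dilate(mask):
    """4-neighbour binary dilation of a 2-D 0/1 mask (lists of lists)."""
    h, w = len(mask), len(mask[0])
    out = [[0] * w for _ in range(h)]
    for r in range(h):
        for c in range(w):
            # A pixel turns on if it, or any 4-neighbour inside the grid, is on.
            if mask[r][c] or any(
                0 <= r + dr < h and 0 <= c + dc < w and mask[r + dr][c + dc]
                for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1))
            ):
                out[r][c] = 1
    return out
```

Erosion is the dual (a pixel survives only if all neighbours are on); composing the two gives the opening/closing operations commonly used to smooth segmentation boundaries.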
Affiliation(s)
- Mehreen Irshad
- Department of Computer Science, COMSATS University Islamabad, Wah Cantt, Pakistan
- Muhammad Sharif
- Department of Computer Science, COMSATS University Islamabad, Wah Cantt, Pakistan
- Mussarat Yasmin
- Department of Computer Science, COMSATS University Islamabad, Wah Cantt, Pakistan
- Amjad Rehman
- College of Computer and Information Sciences, Prince Sultan University, Riyadh, Saudi Arabia
37
Dong N, Zhang Y, Ding M, Xu S, Bai Y. One-stage object detection knowledge distillation via adversarial learning. APPL INTELL 2021. [DOI: 10.1007/s10489-021-02634-6] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/20/2022]
38
Masood H, Zafar A, Ali MU, Khan MA, Iqbal K, Tariq U, Kadry S. Optimization of Correlation Filters Using Extended Particle Swarm Optimization Technique. COMPUTATIONAL AND MATHEMATICAL METHODS IN MEDICINE 2021; 2021:6321860. [PMID: 34306177 PMCID: PMC8279855 DOI: 10.1155/2021/6321860] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 05/05/2021] [Revised: 05/16/2021] [Accepted: 06/23/2021] [Indexed: 11/17/2022]
Abstract
In the past few decades, the field of image processing has seen rapid advances in correlation filters, which serve as a very promising tool for object detection and recognition. Mostly, complex filter equations are used to derive the correlation filters, leading to a closed-form filter solution. Selection of optimal tradeoff (OT) parameters is crucial for the effectiveness of correlation filters. This paper proposes an extended particle swarm optimization (EPSO) technique for the optimal selection of OT parameters. The optimal solution is based on two cost functions, and the best result for each target is obtained by applying the optimization technique separately. The obtained results are compared with the conventional particle swarm optimization method on various test images belonging to different state-of-the-art datasets, and they show that the performance of the filters improves significantly with the proposed optimization method.
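The extended PSO variant itself is not detailed in this entry; as a point of reference, the conventional particle swarm optimization that the paper compares against can be sketched as follows (the inertia and acceleration weights are hypothetical choices, not the paper's settings):

```python
import random

def pso(cost, dim, n_particles=30, iters=200, bounds=(-5.0, 5.0), seed=0):
    """Minimal particle swarm optimization: minimize `cost` over R^dim."""
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                      # per-particle best positions
    pbest_val = [cost(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]     # swarm-wide best
    w, c1, c2 = 0.7, 1.5, 1.5                        # inertia, cognitive, social weights
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            v = cost(pos[i])
            if v < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], v
                if v < gbest_val:
                    gbest, gbest_val = pos[i][:], v
    return gbest, gbest_val

# Minimize a shifted sphere function; the optimum is at (1, 2).
best, val = pso(lambda x: (x[0] - 1) ** 2 + (x[1] - 2) ** 2, dim=2)
```

In the paper's setting, `cost` would instead evaluate a correlation-filter quality criterion as a function of the OT parameters.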
Affiliation(s)
- Haris Masood
- Wah Engineering College, University of Wah, Wah Cantt, Pakistan
- Amad Zafar
- Department of Electrical Engineering, University of Lahore, Islamabad Campus, Pakistan
- Muhammad Umair Ali
- Department of Unmanned Vehicle Engineering, Sejong University, Seoul 05006, Republic of Korea
- Kashif Iqbal
- Wah Engineering College, University of Wah, Wah Cantt, Pakistan
- Usman Tariq
- College of Computer Engineering and Sciences, Prince Sattam Bin Abdulaziz University, Al-Khraj, Saudi Arabia
- Seifedine Kadry
- Faculty of Applied Computing and Technology, Noroff University College, Kristiansand, Norway
39
Imani M. Automatic diagnosis of coronavirus (COVID-19) using shape and texture characteristics extracted from X-Ray and CT-Scan images. Biomed Signal Process Control 2021; 68:102602. [PMID: 33824681 PMCID: PMC8017558 DOI: 10.1016/j.bspc.2021.102602] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/06/2020] [Revised: 02/28/2021] [Accepted: 03/26/2021] [Indexed: 12/24/2022]
Abstract
Automatic diagnosis of coronavirus (COVID-19) is studied in this research. Deep learning methods, especially convolutional neural networks (CNNs), have shown great success in COVID-19 diagnosis in recent works, but they are efficient only when the network is deep enough. However, the use of a deep network requires a sufficiently large training set, which is not available in practice. On the other hand, a shallow CNN may not provide superior results because, lacking enough convolutional layers, it is not capable of rich feature extraction. To deal with this difficulty, contextual features reduced by convolutional filters (CFRCF) is proposed in this work. CFRCF extracts shape and textural features as contextual feature maps from chest X-ray radiographs and abdominal computed tomography (CT) images. Morphological operators, Gabor filter banks, and attribute filters are used for contextual feature extraction. Two convolutional filters are then applied to the contextual feature cube to extract nonlinear sub-features and hidden relationships among the contextual features. Finally, a fully connected layer produces a reduced feature vector that is fed to a classifier; support vector machine and random forest are used as classifiers. The experimental results show the superior performance of the proposed method in terms of recognition accuracy and running time using limited training samples. Overall classification accuracies of more than 76% and 94% are obtained by the proposed method on the CT-scan and X-ray image datasets, respectively.
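One ingredient of the contextual feature extraction named above, a Gabor filter bank, can be sketched as follows; the wavelength, bandwidth, kernel size, and number of orientations here are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def gabor_kernel(theta, lam=8.0, sigma=3.0, size=15):
    """Real Gabor kernel at orientation theta (radians)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr**2 + yr**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam)

def gabor_features(img, n_orient=4):
    """Stack of filter responses: one contextual feature map per orientation."""
    maps = []
    for k in range(n_orient):
        kern = gabor_kernel(np.pi * k / n_orient)
        # Same-size filtering via FFT (circular convolution is fine for a sketch).
        f = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(kern, img.shape)))
        maps.append(f)
    return np.stack(maps, axis=-1)   # H x W x n_orient contextual feature cube

img = np.random.default_rng(0).random((64, 64))
cube = gabor_features(img)
print(cube.shape)  # (64, 64, 4)
```

In CFRCF, a cube like this (together with morphological and attribute-filter maps) would then be reduced by the two convolutional filters before classification.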
Affiliation(s)
- Maryam Imani
- Faculty of Electrical and Computer Engineering, Tarbiat Modares University, Tehran, Iran
40
Khan MA, Zhang YD, Allison M, Kadry S, Wang SH, Saba T, Iqbal T. A Fused Heterogeneous Deep Neural Network and Robust Feature Selection Framework for Human Actions Recognition. ARABIAN JOURNAL FOR SCIENCE AND ENGINEERING 2021. [DOI: 10.1007/s13369-021-05881-4] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/21/2022]
41
Khan MA, Mittal M, Goyal LM, Roy S. A deep survey on supervised learning based human detection and activity classification methods. MULTIMEDIA TOOLS AND APPLICATIONS 2021; 80:27867-27923. [DOI: 10.1007/s11042-021-10811-5] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/02/2020] [Revised: 03/03/2021] [Accepted: 03/10/2021] [Indexed: 08/25/2024]
42
Lang Q, Zhong C, Liang Z, Zhang Y, Wu B, Xu F, Cong L, Wu S, Tian Y. Six application scenarios of artificial intelligence in the precise diagnosis and treatment of liver cancer. Artif Intell Rev 2021. [DOI: 10.1007/s10462-021-10023-1] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/11/2022]
43
Attallah O, Sharkas M. GASTRO-CADx: a three stages framework for diagnosing gastrointestinal diseases. PeerJ Comput Sci 2021; 7:e423. [PMID: 33817058 PMCID: PMC7959662 DOI: 10.7717/peerj-cs.423] [Citation(s) in RCA: 16] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/23/2020] [Accepted: 02/11/2021] [Indexed: 05/04/2023]
Abstract
Gastrointestinal (GI) diseases are common illnesses that affect the GI tract. Diagnosing them is expensive, complicated, and challenging. A computer-aided diagnosis (CADx) system based on deep learning (DL) techniques could considerably lower examination costs and increase the speed and quality of diagnosis. Therefore, this article proposes a CADx system called Gastro-CADx to classify several GI diseases using DL techniques. Gastro-CADx involves three progressive stages. Initially, four different CNNs are used as feature extractors to extract spatial features; most related DL-based work extracts spatial features only. In the following stage of Gastro-CADx, however, the features extracted in the first stage are passed to the discrete wavelet transform (DWT) and the discrete cosine transform (DCT), which extract temporal-frequency and spatial-frequency features; a feature-reduction procedure is also performed in this stage. Finally, in the third stage, several combinations of features are fused by concatenation to inspect the effect of feature combination on the output of the CADx and to select the best fused feature set. Two datasets, referred to as Dataset I and Dataset II, are used to evaluate the performance of Gastro-CADx. Results indicate that Gastro-CADx achieves accuracies of 97.3% and 99.7% on Dataset I and II, respectively. Compared with recent related works, the proposed approach classifies GI diseases with higher accuracy. Thus, it can be used to reduce medical complications and death rates, as well as the cost of treatment, and can help gastroenterologists produce more accurate diagnoses while lowering inspection time.
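The second-stage idea of passing CNN features through the DWT and DCT can be illustrated on a toy feature vector; a one-level Haar DWT and a naive DCT-II stand in here for whatever wavelet family and coefficient truncation the authors actually used:

```python
import numpy as np

def haar_dwt(x):
    """One-level Haar DWT: approximation and detail coefficients."""
    x = np.asarray(x, dtype=float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2)   # low-pass (approximation)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # high-pass (detail)
    return a, d

def dct2_type2(x):
    """Naive O(n^2) DCT-II of a 1-D feature vector."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    k = np.arange(n)[:, None]
    m = np.arange(n)[None, :]
    return np.cos(np.pi * k * (2 * m + 1) / (2 * n)) @ x

feats = np.array([3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0, 6.0])  # toy CNN features
approx, detail = haar_dwt(feats)
reduced = np.concatenate([approx, dct2_type2(feats)[:4]])   # keep low frequencies
print(reduced.shape)  # (8,)
```

Keeping only approximation and low-frequency coefficients halves the dimensionality per transform, which is the feature-reduction effect the abstract mentions.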
Affiliation(s)
- Omneya Attallah
- Department of Electronics and Communication Engineering, College of Engineering and Technology, Arab Academy for Science, Technology and Maritime Transport, Alexandria, Egypt
- Maha Sharkas
- Department of Electronics and Communication Engineering, College of Engineering and Technology, Arab Academy for Science, Technology and Maritime Transport, Alexandria, Egypt
44
Rajinikanth V, Sivakumar R, Hemanth DJ, Kadry S, Mohanty JR, Arunmozhi S, Raja NSM, Nhu NG. Automated classification of retinal images into AMD/non-AMD Class—a study using multi-threshold and Gassian-filter enhanced images. EVOLUTIONARY INTELLIGENCE 2021. [DOI: 10.1007/s12065-021-00581-2] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022]
45

46
Oyelade ON, Ezugwu AE. A deep learning model using data augmentation for detection of architectural distortion in whole and patches of images. Biomed Signal Process Control 2021. [DOI: 10.1016/j.bspc.2020.102366] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/22/2022]
47
Khan MA, Akram T, Zhang YD, Sharif M. Attributes based skin lesion detection and recognition: A mask RCNN and transfer learning-based deep learning framework. Pattern Recognit Lett 2021. [DOI: 10.1016/j.patrec.2020.12.015] [Citation(s) in RCA: 58] [Impact Index Per Article: 19.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/11/2022]
48
Bora K, Bhuyan MK, Kasugai K, Mallik S, Zhao Z. Computational learning of features for automated colonic polyp classification. Sci Rep 2021; 11:4347. [PMID: 33623086 PMCID: PMC7902635 DOI: 10.1038/s41598-021-83788-8] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/23/2020] [Accepted: 02/04/2021] [Indexed: 12/24/2022] Open
Abstract
Shape, texture, and color are critical features for assessing the degree of dysplasia in colonic polyps. A comprehensive analysis of these features is presented in this paper. Shape features are extracted using the generic Fourier descriptor. The nonsubsampled contourlet transform, with different combinations of filters, is used as the texture and color feature descriptor. Analysis of variance (ANOVA) is applied to measure the statistical significance of each descriptor's contribution to distinguishing the two classes of colonic polyps: non-neoplastic and neoplastic. The descriptors selected after ANOVA are optimized using a fuzzy entropy-based feature-ranking algorithm. Finally, classification is performed using a least-squares support vector machine and a multilayer perceptron with five-fold cross-validation to avoid overfitting. Evaluation of our analytical approach on two datasets suggests that the feature descriptors can efficiently characterize a colonic polyp, which can subsequently help the early detection of colorectal carcinoma. Based on the comparison with four deep learning models, we demonstrate that the proposed approach outperforms existing feature-based methods of colonic polyp identification.
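A classical one-dimensional Fourier descriptor (a simpler relative of the generic Fourier descriptor named above, used here purely for illustration) shows how ordered boundary points yield a translation-, scale-, and rotation-invariant shape signature:

```python
import numpy as np

def fourier_descriptor(contour, n_coeff=8):
    """Invariant Fourier shape descriptor of an (N, 2) ordered boundary."""
    z = contour[:, 0] + 1j * contour[:, 1]   # complex boundary signal
    z = z - z.mean()                         # translation invariance
    mag = np.abs(np.fft.fft(z))              # drop phase: rotation/start-point invariance
    mag = mag / mag[1]                       # scale invariance (normalize by 1st harmonic)
    return mag[1:1 + n_coeff]

# Toy contour: points on a circle; rotating the shape leaves the descriptor unchanged.
t = np.linspace(0, 2 * np.pi, 64, endpoint=False)
circle = np.stack([np.cos(t), np.sin(t)], axis=1)
rot = np.pi / 5
R = np.array([[np.cos(rot), -np.sin(rot)],
              [np.sin(rot),  np.cos(rot)]])
d1 = fourier_descriptor(circle)
d2 = fourier_descriptor(circle @ R.T)
print(np.allclose(d1, d2))  # True
```

Descriptors like `d1` would then feed the ANOVA selection and fuzzy entropy ranking steps described in the abstract.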
Affiliation(s)
- Kangkana Bora
- Department of Computer Science and IT, Cotton University, Pan Bazar, Guwahati, Assam, 781001, India
- M K Bhuyan
- Department of Electrical and Electronics Engineering, Indian Institute of Technology Guwahati (IITG), Guwahati, Assam, 781039, India
- Kunio Kasugai
- Department of Gastroenterology, Aichi Medical University, Nagakute, 480-1195, Japan
- Saurav Mallik
- Center for Precision Health, School of Biomedical Informatics, The University of Texas Health Science Center at Houston, Houston, TX, 77030, USA
- Zhongming Zhao
- Center for Precision Health, School of Biomedical Informatics, The University of Texas Health Science Center at Houston, Houston, TX, 77030, USA; Human Genetics Center, School of Public Health, The University of Texas Health Science Center at Houston, Houston, TX, USA; Department of Pathology and Laboratory Medicine, McGovern Medical School, The University of Texas Health Science Center at Houston, Houston, TX, USA
49
Veena H, Muruganandham A, Senthil Kumaran T. A novel optic disc and optic cup segmentation technique to diagnose glaucoma using deep learning convolutional neural network over retinal fundus images. JOURNAL OF KING SAUD UNIVERSITY - COMPUTER AND INFORMATION SCIENCES 2021. [DOI: 10.1016/j.jksuci.2021.02.003] [Citation(s) in RCA: 14] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/08/2023]
50
Sub-classification of invasive and non-invasive cancer from magnification independent histopathological images using hybrid neural networks. EVOLUTIONARY INTELLIGENCE 2021. [DOI: 10.1007/s12065-021-00564-3] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]