1. Suganyadevi S, Seethalakshmi V. FACNN: fuzzy-based adaptive convolution neural network for classifying COVID-19 in noisy CXR images. Med Biol Eng Comput 2024. PMID: 38710960. DOI: 10.1007/s11517-024-03107-x. Received 07/19/2023; accepted 04/22/2024.
Abstract
COVID-19 detection using chest X-rays (CXR) has evolved into a significant method for early diagnosis of the pandemic disease. Clinical methods combine X-ray images with intelligent computational algorithms to improve detection and classification precision. This article therefore proposes a fuzzy-based adaptive convolution neural network (FACNN) model that improves detection precision by limiting false rates. The feature extraction process between successive regions is validated using a fuzzy process that classifies labeled and unknown pixels. The membership functions are derived from high-precision features for detection and false-rate suppression. The convolution neural network is responsible for increasing detection precision through recurrent training based on feature availability, and this availability analysis is verified using fuzzy derivatives under local variances. Based on variance-reduced features, the appropriate regions with labeled and unknown features are used for normal or infected classification. The proposed FACNN improves accuracy, precision, and feature extraction by 14.36%, 8.74%, and 12.35%, respectively, and reduces the false rate and extraction time by 10.35% and 10.66%, respectively.
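The abstract does not give FACNN's actual membership functions, so the following is only a minimal numpy sketch of the general idea it describes: pixels whose best fuzzy membership is high are treated as confidently labeled, the rest as unknown. The triangular membership shapes, intensity ranges, and threshold here are all invented for illustration.

```python
import numpy as np

def triangular_membership(x, a, b, c):
    """Triangular fuzzy membership: 0 at a, rising to 1 at the peak b,
    falling back to 0 at c; clipped to [0, 1] outside the support."""
    x = np.asarray(x, dtype=float)
    left = (x - a) / (b - a) if b != a else np.ones_like(x)
    right = (c - x) / (c - b) if c != b else np.ones_like(x)
    return np.clip(np.minimum(left, right), 0.0, 1.0)

def split_labeled_unknown(pixels, threshold=0.6):
    """A pixel is 'labeled' when its best class membership is confident;
    otherwise it stays 'unknown' for the network to resolve later."""
    mu_normal = triangular_membership(pixels, 0.0, 0.1, 0.5)    # darker lung field
    mu_infected = triangular_membership(pixels, 0.5, 0.9, 1.0)  # brighter opacities
    best = np.maximum(mu_normal, mu_infected)
    return best > threshold

pixels = np.array([0.1, 0.45, 0.8])        # normalized CXR intensities
mask = split_labeled_unknown(pixels)       # -> labeled, unknown, labeled
```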
Affiliation(s)
- Suganyadevi S: Department of ECE, KPR Institute of Engineering and Technology, Coimbatore, 641 407, India
- Seethalakshmi V: Department of ECE, KPR Institute of Engineering and Technology, Coimbatore, 641 407, India
2. Abubakar H, Al-Turjman F, Ameen ZS, Mubarak AS, Altrjman C. A hybridized feature extraction for COVID-19 multi-class classification on computed tomography images. Heliyon 2024; 10:e26939. PMID: 38463848. PMCID: PMC10920381. DOI: 10.1016/j.heliyon.2024.e26939. Received 11/12/2023; revised 02/20/2024; accepted 02/21/2024.
Abstract
COVID-19 has killed more than 5 million individuals worldwide within a short time. It is caused by SARS-CoV-2, which continuously mutates and produces new, more transmissible strains. It is therefore of great significance to diagnose COVID-19 early to curb its spread and reduce the death rate. Owing to the scale of the COVID-19 pandemic, traditional diagnostic methods such as reverse-transcription polymerase chain reaction (RT-PCR) are ineffective for diagnosis. Medical imaging is among the most effective techniques for detecting respiratory disorders through machine learning and deep learning. However, conventional machine learning methods depend on extracted and engineered features, and the choice of features determines the classifier's performance. In this study, Histogram of Oriented Gradients (HOG) and eight deep learning models were utilized for feature extraction, while K-Nearest Neighbour (KNN) and Support Vector Machines (SVM) were used for classification. A combined HOG and deep learning feature was proposed to improve the performance of the classifiers. VGG-16 + HOG achieved 99.4% overall accuracy with SVM. This indicates that the proposed concatenated feature can enhance the SVM classifier's performance in COVID-19 detection.
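As a toy illustration of the paper's hybrid representation (a handcrafted HOG descriptor concatenated with a deep feature vector before SVM classification), here is a hedged numpy sketch. The single whole-image histogram below is a drastic simplification of real HOG, which uses per-cell histograms with block normalization, and the "deep feature" is just a stand-in vector.

```python
import numpy as np

def hog_like_histogram(image, n_bins=9):
    """Very coarse HOG-style descriptor: one gradient-orientation
    histogram over the whole image, weighted by gradient magnitude.
    (Real HOG uses per-cell histograms with block normalization.)"""
    gy, gx = np.gradient(image.astype(float))
    magnitude = np.hypot(gx, gy)
    angle = np.rad2deg(np.arctan2(gy, gx)) % 180.0   # unsigned orientation
    hist, _ = np.histogram(angle, bins=n_bins, range=(0, 180), weights=magnitude)
    norm = np.linalg.norm(hist)
    return hist / norm if norm > 0 else hist

def fuse_features(image, deep_feature):
    """Concatenate the handcrafted descriptor with a deep feature vector,
    mirroring the paper's hybrid HOG + deep representation."""
    return np.concatenate([hog_like_histogram(image), deep_feature])

image = np.arange(64, dtype=float).reshape(8, 8)  # toy CT slice
deep_feature = np.ones(16)                        # stand-in for a VGG-16 embedding
fused = fuse_features(image, deep_feature)        # this vector would feed an SVM
```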
Affiliation(s)
- Hassana Abubakar: Biomedical Engineering Department, Faculty of Engineering, Near East University, Mersin 10, Turkey
- Fadi Al-Turjman: Artificial Intelligence Engineering Department, AI and Robotics Institute, Near East University, Mersin 10, Turkey; Research Center for AI and IoT, Faculty of Engineering, University of Kyrenia, Mersin 10, Turkey
- Zubaida S. Ameen: Operational Research Center in Healthcare, Near East University, Mersin 10, Turkey
- Auwalu S. Mubarak: Operational Research Center in Healthcare, Near East University, Mersin 10, Turkey
- Chadi Altrjman: Waterloo University, 200 University Avenue West, Waterloo, ON, Canada
3. Tan Z, Yu Y, Meng J, Liu S, Li W. Self-supervised learning with self-distillation on COVID-19 medical image classification. Comput Methods Programs Biomed 2024; 243:107876. PMID: 37875036. DOI: 10.1016/j.cmpb.2023.107876. Received 01/08/2023; revised 10/11/2023; accepted 10/17/2023.
Abstract
BACKGROUND AND OBJECTIVE: Currently, COVID-19 is a highly infectious disease that can be clinically diagnosed based on diagnostic radiology. Deep learning is capable of mining the rich information implied in inpatient imaging data and accomplishing the classification of different stages of the disease process. However, a large amount of training data is essential to train an excellent deep-learning model. Unfortunately, due to factors such as privacy and labeling difficulties, annotated data for COVID-19 is extremely scarce, which motivates us to propose a more effective deep learning model that can assist specialist physicians in COVID-19 diagnosis. METHODS: In this study, we introduce the Masked Autoencoder (MAE) for pre-training and fine-tuning directly on small-scale target datasets. Based on this, we propose Self-Supervised Learning with Self-Distillation on COVID-19 medical image classification (SSSD-COVID). In addition to the reconstruction loss computed on the masked image patches, SSSD-COVID performs a self-distillation loss calculation on the latent representations of the encoder and decoder outputs. This additional loss transfers knowledge from the global attention of the decoder to the encoder, which acquires only local attention. RESULTS: Our model achieves 97.78% recognition accuracy on the SARS-COV-CT dataset containing 2481 images and is further validated on the COVID-CT dataset containing 746 images, where it achieves 81.76% recognition accuracy. Further introduction of external knowledge resulted in experimental accuracies of 99.6% and 95.27% on these two datasets, respectively.
CONCLUSIONS: SSSD-COVID obtains good results on the target dataset alone, and when external information is introduced, its performance improves further to significantly outperform other models. Overall, the experimental results show that our method can effectively mine COVID-19 features from scarce data and can assist professional physicians in decision-making to improve the efficiency of COVID-19 disease detection.
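The exact SSSD-COVID losses are not specified in the abstract; a minimal numpy sketch of the described two-term objective, reconstruction on masked patches plus self-distillation between encoder and decoder latents, might look as follows. The weighting `alpha` and all shapes are invented for illustration.

```python
import numpy as np

def sssd_style_loss(pred_patches, true_patches, mask,
                    enc_latent, dec_latent, alpha=0.5):
    """Two-term objective: MAE reconstruction error on the masked patches
    plus a self-distillation term pulling the encoder's latent toward the
    decoder's latent (which sees global attention). In a real framework a
    stop-gradient would be applied on the decoder side."""
    recon = np.mean(((pred_patches - true_patches) ** 2)[mask])
    distill = np.mean((enc_latent - dec_latent) ** 2)
    return recon + alpha * distill

rng = np.random.default_rng(0)
true_p = rng.normal(size=(16, 32))   # 16 patches, 32-dim each
pred_p = true_p + 0.1                # imperfect reconstruction
mask = np.zeros(16, dtype=bool)
mask[:12] = True                     # 75% of patches masked, as in MAE
loss = sssd_style_loss(pred_p, true_p, mask, np.zeros(8), np.zeros(8))
```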
Affiliation(s)
- Zhiyong Tan: School of Computer Science and Engineering, Dalian Minzu University, Dalian, Liaoning 116600, China
- Yuhai Yu: School of Computer Science and Engineering, Dalian Minzu University, Dalian, Liaoning 116600, China
- Jiana Meng: School of Computer Science and Engineering, Dalian Minzu University, Dalian, Liaoning 116600, China
- Shuang Liu: School of Computer Science and Engineering, Dalian Minzu University, Dalian, Liaoning 116600, China
- Wei Li: School of Computer Science and Engineering, Dalian Minzu University, Dalian, Liaoning 116600, China
4. Ahoor A, Arif F, Sajid MZ, Qureshi I, Abbas F, Jabbar S, Abbas Q. MixNet-LD: An Automated Classification System for Multiple Lung Diseases Using Modified MixNet Model. Diagnostics (Basel) 2023; 13:3195. PMID: 37892016. PMCID: PMC10606171. DOI: 10.3390/diagnostics13203195. Received 08/31/2023; revised 10/03/2023; accepted 10/04/2023.
Abstract
The lungs are critical components of the respiratory system because they allow for the exchange of oxygen and carbon dioxide within our bodies. However, a variety of conditions can affect the lungs, resulting in serious health consequences. Lung disease treatment aims to control its severity, since the damage is usually irreversible. The fundamental objective of this work is to build a consistent and automated approach for establishing the severity of lung illness. This paper describes MixNet-LD, an automated approach for identifying and categorizing the severity of lung illnesses using an upgraded pre-trained MixNet model. One of the first steps in developing the MixNet-LD system was a pre-processing strategy that uses Grad-CAM to decrease noise, highlight irregularities, and ultimately improve classification performance. Data augmentation strategies were used to rectify the dataset's unbalanced class distribution and prevent overfitting. Furthermore, dense blocks were used to improve classification outcomes across the four severity categories of lung disorders. In practice, the MixNet-LD model achieves state-of-the-art performance while maintaining a manageable model size and complexity. The proposed approach was tested on a variety of datasets gathered from credible internet sources as well as a novel private dataset known as Pak-Lungs. A pre-trained model was used to obtain important characteristics from lung disease images, and the images were then categorized into classes such as normal, COVID-19, pneumonia, tuberculosis, and lung cancer using an SVM classifier with a linear activation function. The MixNet-LD system underwent four distinct tests and achieved a remarkable accuracy of 98.5% on the difficult lung disease dataset. The findings and comparisons demonstrate the MixNet-LD system's improved performance and learning capabilities. These findings show that the proposed approach can effectively increase the accuracy of classification models in medical image investigations, and this research helps to develop new strategies for effective medical image processing in clinical settings.
Affiliation(s)
- Ayesha Ahoor: Department of Computer Software Engineering, MCS, National University of Science and Technology, Islamabad 44000, Pakistan
- Fahim Arif: Department of Computer Software Engineering, MCS, National University of Science and Technology, Islamabad 44000, Pakistan
- Muhammad Zaheer Sajid: Department of Computer Software Engineering, MCS, National University of Science and Technology, Islamabad 44000, Pakistan
- Imran Qureshi: College of Computer and Information Sciences, Imam Mohammad Ibn Saud Islamic University (IMSIU), Riyadh 11432, Saudi Arabia
- Fakhar Abbas: Centre for Trusted Internet and Community, National University of Singapore (NUS), Singapore 119228, Singapore
- Sohail Jabbar: College of Computer and Information Sciences, Imam Mohammad Ibn Saud Islamic University (IMSIU), Riyadh 11432, Saudi Arabia
- Qaisar Abbas: College of Computer and Information Sciences, Imam Mohammad Ibn Saud Islamic University (IMSIU), Riyadh 11432, Saudi Arabia
5. Santosh KC, GhoshRoy D, Nakarmi S. A Systematic Review on Deep Structured Learning for COVID-19 Screening Using Chest CT from 2020 to 2022. Healthcare (Basel) 2023; 11:2388. PMID: 37685422. PMCID: PMC10486542. DOI: 10.3390/healthcare11172388. Received 06/12/2023; revised 08/16/2023; accepted 08/22/2023.
Abstract
The emergence of the COVID-19 pandemic in Wuhan in 2019 led to the discovery of a novel coronavirus. The World Health Organization (WHO) designated it as a global pandemic on 11 March 2020 due to its rapid and widespread transmission. Its impact has had profound implications, particularly in the realm of public health. Extensive scientific endeavors have been directed towards devising effective treatment strategies and vaccines. Within the healthcare and medical imaging domain, the application of artificial intelligence (AI) has brought significant advantages. This study delves into peer-reviewed research articles spanning the years 2020 to 2022, focusing on AI-driven methodologies for the analysis and screening of COVID-19 through chest CT scan data. We assess the efficacy of deep learning algorithms in facilitating decision making processes. Our exploration encompasses various facets, including data collection, systematic contributions, emerging techniques, and encountered challenges. However, the comparison of outcomes between 2020 and 2022 proves intricate due to shifts in dataset magnitudes over time. The initiatives aimed at developing AI-powered tools for the detection, localization, and segmentation of COVID-19 cases are primarily centered on educational and training contexts. We deliberate on their merits and constraints, particularly in the context of necessitating cross-population train/test models. Our analysis encompassed a review of 231 research publications, bolstered by a meta-analysis employing search keywords (COVID-19 OR Coronavirus) AND chest CT AND (deep learning OR artificial intelligence OR medical imaging) on both the PubMed Central Repository and Web of Science platforms.
Affiliation(s)
- KC Santosh: 2AI: Applied Artificial Intelligence Research Lab, Vermillion, SD 57069, USA
- Debasmita GhoshRoy: School of Automation, Banasthali Vidyapith, Tonk 304022, Rajasthan, India
- Suprim Nakarmi: Department of Computer Science, University of South Dakota, Vermillion, SD 57069, USA
6. Zhang N, Liu J, Jin Y, Duan W, Wu Z, Cai Z, Wu M. An adaptive multi-modal hybrid model for classifying thyroid nodules by combining ultrasound and infrared thermal images. BMC Bioinformatics 2023; 24:315. PMID: 37598159. PMCID: PMC10440038. DOI: 10.1186/s12859-023-05446-2. Received 11/18/2022; accepted 08/15/2023.
Abstract
BACKGROUND: Ultrasound (US) and infrared thermography (IRT) are two non-invasive, radiation-free, and inexpensive imaging technologies widely employed in medical applications. An ultrasound image primarily expresses the size, shape, contour boundary, echo, and other morphological information of a lesion, while an infrared thermal image primarily describes its thermodynamic function information. Although distinguishing between benign and malignant thyroid nodules requires both morphological and functional information, present deep learning models are based only on US images, so some malignant nodules with insignificant morphological changes but significant functional changes may go undetected. RESULTS: Given that US and IRT images present thyroid nodules through distinct modalities, we propose an Adaptive multi-modal Hybrid (AmmH) classification model that leverages the combination of these two image types to achieve superior classification performance. The AmmH approach constructs a hybrid single-modal encoder module for each modality, which extracts both local and global features by integrating a CNN module and a Transformer module. The features extracted from the two modalities are then weighted adaptively by an adaptive modality-weight generation network and fused by an adaptive cross-modal encoder module. The fused features are finally classified with an MLP. On the collected dataset, our AmmH model achieved F1 and F2 scores of 97.17% and 97.38%, respectively, significantly outperforming the single-modal models. Four ablation experiments further show the superiority of the proposed method.
CONCLUSIONS: The proposed multi-modal model extracts features from different modal images, enhancing the comprehensiveness of thyroid nodule descriptions. The adaptive modality-weight generation network enables adaptive attention to different modalities, and the adaptive cross-modal encoder fuses the features using these adaptive weights. The model demonstrates promising classification performance, indicating its potential as a non-invasive, radiation-free, and cost-effective screening tool for distinguishing between benign and malignant thyroid nodules. The source code is available at https://github.com/wuliZN2020/AmmH.
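A rough numpy sketch of the adaptive fusion idea described above: a softmax weight per modality (standing in for the adaptive modality-weight generation network) scales each feature vector before fusion. The fusion-by-weighted-concatenation choice here is an assumption for illustration, not the paper's actual cross-modal encoder.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - np.max(z))
    return e / e.sum()

def adaptive_fusion(us_feat, irt_feat, weight_logits):
    """Scale each modality's feature vector by a softmax weight (the
    logits stand in for the weight-generation network's output), then
    fuse by weighted concatenation."""
    w = softmax(weight_logits)          # w[0] for US, w[1] for IRT
    fused = np.concatenate([w[0] * us_feat, w[1] * irt_feat])
    return fused, w

us = np.ones(4)                         # toy ultrasound feature
irt = np.full(4, 2.0)                   # toy infrared-thermal feature
fused, w = adaptive_fusion(us, irt, np.array([0.0, 0.0]))  # equal logits
```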
Affiliation(s)
- Na Zhang: Institute of Artificial Intelligence, School of Computer Science, Wuhan University, Wuhan, 430072 China
- Juan Liu: Institute of Artificial Intelligence, School of Computer Science, Wuhan University, Wuhan, 430072 China
- Yu Jin: Institute of Artificial Intelligence, School of Computer Science, Wuhan University, Wuhan, 430072 China
- Wensi Duan: Institute of Artificial Intelligence, School of Computer Science, Wuhan University, Wuhan, 430072 China
- Ziling Wu: Department of Ultrasound, Zhongnan Hospital, Wuhan University, Wuhan, 430072 China
- Zhaohui Cai: Institute of Artificial Intelligence, School of Computer Science, Wuhan University, Wuhan, 430072 China
- Meng Wu: Department of Ultrasound, Zhongnan Hospital, Wuhan University, Wuhan, 430072 China
7. Joloudari JH, Azizi F, Nodehi I, Nematollahi MA, Kamrannejhad F, Hassannatajjeloudari E, Alizadehsani R, Islam SMS. Developing a Deep Neural Network model for COVID-19 diagnosis based on CT scan images. Math Biosci Eng 2023; 20:16236-16258. PMID: 37920011. DOI: 10.3934/mbe.2023725.
Abstract
COVID-19 is most commonly diagnosed using a testing kit, but chest X-ray and computed tomography (CT) scan images have a potential role in COVID-19 diagnosis. Currently, CT diagnosis systems based on artificial intelligence (AI) models are used in some countries. Previous research studies used complex neural networks, which led to difficulty in network training and high computation costs. Hence, in this study, we developed a 6-layer Deep Neural Network (DNN) model for COVID-19 diagnosis based on CT scan images. The proposed DNN model is designed to improve diagnostic accuracy in classifying sick and healthy persons. Other classification models, such as decision trees, random forests and standard neural networks, were also investigated. One of the main contributions of this study is the use of a global feature extractor operator for feature extraction from the images. Furthermore, the 10-fold cross-validation technique is utilized for partitioning the data into training, testing and validation sets. During training, the DNN model is trained without dropout in its layers. The experimental results demonstrated that this lightweight DNN model achieves the best accuracy, 96.71%, compared with previous classification models for COVID-19 diagnosis.
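The 10-fold cross-validation partitioning mentioned above can be sketched as follows; this is generic k-fold index generation, not the authors' exact split.

```python
import numpy as np

def k_fold_indices(n_samples, k=10, seed=0):
    """Shuffle sample indices and split them into k near-equal folds;
    each fold serves once as the held-out set while the remaining
    folds form the training set."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    folds = np.array_split(idx, k)
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        yield train, test

splits = list(k_fold_indices(100, k=10))   # 10 (train, test) index pairs
```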
Affiliation(s)
- Faezeh Azizi: Department of Computer Engineering, Faculty of Engineering, University of Birjand, Birjand, Iran
- Issa Nodehi: Department of Computer Engineering, University of Qom, Qom, Iran
- Fateme Kamrannejhad: Department of Computer Engineering, Faculty of Engineering, University of Birjand, Birjand, Iran
- Edris Hassannatajjeloudari: Department of Nursing, School of Nursing and Allied Medical Sciences, Maragheh Faculty of Medical Sciences, Maragheh, Iran
- Roohallah Alizadehsani: Institute for Intelligent Systems Research and Innovation, Deakin University, Geelong, VIC 3216, Australia
- Sheikh Mohammed Shariful Islam: Institute for Physical Activity and Nutrition, School of Exercise and Nutrition Sciences, Deakin University, Geelong, VIC, Australia
8. Kaur J, Mittal D, Malebary S, Nayak SR, Kumar D, Kumar M, Gagandeep, Singh S. Automated Detection and Segmentation of Exudates for the Screening of Background Retinopathy. J Healthc Eng 2023; 2023:4537253. PMID: 37483301. PMCID: PMC10361834. DOI: 10.1155/2023/4537253. Received 02/16/2022; accepted 04/15/2022.
Abstract
Exudate, an asymptomatic yellow deposit on the retina, is among the primary characteristics of background diabetic retinopathy, a retinopathy related to high blood sugar levels that slowly affects all the organs of the body. The early detection of exudates aids doctors in screening patients suffering from background diabetic retinopathy. The computer-aided method proposed in the present work detects and then segments exudates in retinal images acquired using a digital fundus camera by (i) a gradient method to trace the contour of exudates, (ii) marking connected candidate pixels to remove false exudate pixels, and (iii) linking edge pixels to extract exudate boundaries. The method is tested on 1307 retinal fundus images with varying characteristics: 649 images were acquired from a hospital and the remaining 658 from open-source benchmark databases, namely STARE, DRIVE, MESSIDOR, DiaretDB1, and e-Ophtha. The proposed exudate segmentation method achieves an image-based (i) accuracy of 98.04%, (ii) sensitivity of 95.345%, and (iii) specificity of 98.63%. Exudate-based evaluations show an average (i) accuracy of 95.68%, (ii) sensitivity of 93.44%, and (iii) specificity of 97.22%. The substantial combined performance at image- and exudate-based evaluations demonstrates the method's suitability for mass screening as well as the treatment process of background diabetic retinopathy.
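The accuracy, sensitivity, and specificity figures reported above follow the standard confusion-matrix definitions, which can be computed as below; the counts in the example are invented for illustration.

```python
def screening_metrics(tp, fp, tn, fn):
    """Confusion-matrix metrics for an exudate screening test:
    tp/fn count diseased images, tn/fp count healthy ones."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)   # diseased images correctly flagged
    specificity = tn / (tn + fp)   # healthy images correctly cleared
    return accuracy, sensitivity, specificity

# Invented counts, for illustration only:
acc, sen, spe = screening_metrics(tp=90, fp=2, tn=98, fn=10)
```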
Affiliation(s)
- Jaskirat Kaur: Department of Electronics and Communication Engineering, Punjab Engineering College (Deemed to be University), Sector 12, Chandigarh 160012, India
- Deepti Mittal: Electrical and Instrumentation Engineering Department, Thapar Institute of Engineering and Technology, Patiala 147004, India
- Sharaf Malebary: Department of Information Technology, Faculty of Computing and Information Technology in Rabigh, King Abdulaziz University, Jeddah 21911, Saudi Arabia
- Soumya Ranjan Nayak: School of Computer Engineering, KIIT Deemed to be University, Bhubaneswar 751024, Odisha, India
- Devendra Kumar: Department of Computer Science, Wachemo University, Hosaena, Ethiopia
- Manoj Kumar: Faculty of Engineering and Information Sciences, University of Wollongong in Dubai, Dubai Knowledge Park, UAE; MEU Research Unit, Middle East University, Amman 11831, Jordan
- Gagandeep: Computer Science Engineering Department, Chandigarh Engineering College, Mohali, India
- Simrandeep Singh: Electronics and Communication Engineering Department, UCRD, Chandigarh University, Mohali, India
9. Liu Y, Chen B, Zhang Z, Yu H, Ru S, Chen X, Lu G. Self-paced Multi-view Learning for CT-based severity assessment of COVID-19. Biomed Signal Process Control 2023; 83:104672. PMID: 36777556. PMCID: PMC9905104. DOI: 10.1016/j.bspc.2023.104672. Received 09/13/2022; revised 01/30/2023; accepted 02/04/2023.
Abstract
Prior studies for the task of severity assessment of COVID-19 (SA-COVID) usually suffer from domain-specific cognitive deficits. They mainly focus on visual cues based on single cognitive functions but fail to reconcile the valuable information from other alternative views. Inspired by the cognitive process of radiologists, this paper shifts naturally from single-symptom measurements to a multi-view analysis, and proposes a novel Self-paced Multi-view Learning (SPML) framework for automated SA-COVID. Specifically, the proposed SPML framework first comprehensively aggregates multi-view contexts in lung infection with different measure paradigms, i.e., Global Feature Branch, Texture Feature Branch, and Volume Feature Branch. In this way, multiple-perspective clues are taken into account to reflect the most essential pathological manifestation on CT images. To alleviate small-sample learning problems, we also introduce an optimization with self-paced learning strategy to cognitively increase the characterization capabilities of training samples by learning from simple to complex. In contrast to traditional batch-wise learning, a pure self-paced way can further guarantee the efficiency and accuracy of SPML when dealing with small and biased samples. Furthermore, we construct a well-established SA-COVID dataset that contains 300 CT images with fine annotations. Extensive experiments on this dataset demonstrate that SPML consistently outperforms the state-of-the-art baselines. The SA-COVID dataset is publicly released at https://github.com/YishuLiu/SA-COVID.
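The self-paced "simple to complex" schedule can be illustrated with the classic hard self-paced regularizer: a sample participates in training only once its loss falls below an age parameter that grows over rounds. This is a textbook sketch, not SPML's exact optimization.

```python
import numpy as np

def self_paced_weights(losses, age):
    """Hard self-paced regularizer: a sample joins training only once its
    current loss is below the 'age' threshold, which grows every round so
    learning proceeds from easy samples to hard ones."""
    return (np.asarray(losses) <= age).astype(float)

losses = np.array([0.2, 0.9, 0.5, 1.4])       # per-sample training losses
early = self_paced_weights(losses, age=0.5)   # early round: easy samples only
late = self_paced_weights(losses, age=1.5)    # late round: everything included
```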
Affiliation(s)
- Yishu Liu: Harbin Institute of Technology, Shenzhen, 518055, China
- Bingzhi Chen: South China Normal University, Guangzhou, 510631, China
- Zheng Zhang: Harbin Institute of Technology, Shenzhen, 518055, China
- Hongbing Yu: Nanshan District Chronic Disease Prevention and Control Hospital, Shenzhen, 518055, China
- Shouhang Ru: Shenzhen Second People's Hospital, Shenzhen, 518000, China
- Xiaosheng Chen: Shenzhen Second People's Hospital, Shenzhen, 518000, China
- Guangming Lu: Harbin Institute of Technology, Shenzhen, 518055, China
10. Wang H, Yao Z, Luo R, Liu J, Wang Z, Zhang G. LaCOme: Learning the latent convolutional patterns among transcriptomic features to improve classifications. Gene 2023; 862:147246. PMID: 36736509. DOI: 10.1016/j.gene.2023.147246. Received 09/23/2022; revised 12/22/2022; accepted 01/27/2023.
Abstract
OMICs approaches analyse entire genetic or molecular profiles in humans and other organisms, identifying and quantifying the biological molecules that contribute to a species' structure, function, and dynamics. Building data-driven models to mine the hidden phenotypic information in such data has become a research hotspot. Transcriptome analysis is a popular biological technology for characterizing the overall state of living systems, including cells and tissues. Individual transcript expression levels are known to be correlated with those of other transcripts. Nevertheless, most computational studies do not fully exploit these inter-feature correlations; differential expression analyses, for example, assume that the expression levels of the transcripts are independent. Thus, we propose extracting these inter-feature correlations using a convolutional neural network (CNN) and transforming the transcriptomic features into a new space of latent convolutional transcriptomic (LaCOme) features. A series of comprehensive experiments demonstrated that the engineered LaCOme features outperform the original transcriptomic features in classification performance on most transcriptomic datasets in use. Based on these results, OMIC data from biological samples can be further enriched using CNNs to enhance computational analysis. Feature rough screening can also extract valuable information from OMIC data regardless of the feature-selection algorithm used; constructing a novel feature may be preferable to keeping the original. Furthermore, we verified the feasibility of the feature construction method through cross-validation and independent verification, hoping to develop a more efficient and effective method.
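The core idea, convolving across neighbouring transcriptomic features so that each engineered feature mixes correlated transcripts, can be sketched with a 1D convolution. The kernel here is a made-up smoothing filter; LaCOme learns its kernels with a CNN.

```python
import numpy as np

def conv1d_features(expression, kernel):
    """Valid-mode 1D convolution over a transcript-expression vector:
    each output value mixes a window of neighbouring transcripts,
    capturing local inter-feature correlations that per-transcript
    analyses treat as independent."""
    return np.convolve(expression, kernel, mode="valid")

expression = np.array([1.0, 2.0, 3.0, 4.0, 5.0])   # toy expression profile
kernel = np.array([0.25, 0.5, 0.25])               # made-up smoothing kernel
features = conv1d_features(expression, kernel)     # -> [2.0, 3.0, 4.0]
```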
Affiliation(s)
- Hongyu Wang: Department of Nuclear Medicine, General Hospital of Northern Theater Command, Shenyang, Liaoning 110016, China; College of Software, Jilin University, Changchun, Jilin 130012, China
- Zhaomin Yao: Department of Nuclear Medicine, General Hospital of Northern Theater Command, Shenyang, Liaoning 110016, China; College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, Liaoning 110167, China
- Renli Luo: College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, Liaoning 110167, China
- Jiahao Liu: School of Mathematical Sciences, Chongqing Normal University, Chongqing 401331, China
- Zhiguo Wang: Department of Nuclear Medicine, General Hospital of Northern Theater Command, Shenyang, Liaoning 110016, China; College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, Liaoning 110167, China
- Guoxu Zhang: Department of Nuclear Medicine, General Hospital of Northern Theater Command, Shenyang, Liaoning 110016, China; College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, Liaoning 110167, China
11. Althaqafi T, Al-Malaise Al-Ghamdi AS, Ragab M. Artificial Intelligence Based COVID-19 Detection and Classification Model on Chest X-ray Images. Healthcare (Basel) 2023; 11:1204. PMID: 37174746. PMCID: PMC10177894. DOI: 10.3390/healthcare11091204. Received 02/08/2023; revised 04/06/2023; accepted 04/18/2023.
Abstract
Diagnostic and predictive models of disease have been growing rapidly due to developments in the field of healthcare. Accurate and early diagnosis of COVID-19 is fundamental to controlling the spread of this deadly disease and its death rate. The chest computed tomography (CT) scan is an effective device for the diagnosis and early management of COVID-19, since the virus mainly targets the respiratory system. Chest X-ray (CXR) images are extremely helpful in the effective diagnosis of COVID-19 due to their rapid outcomes, cost-effectiveness, and availability. Although radiological image-based diagnosis is faster and accomplishes a better recognition rate in the early phase of an epidemic, it requires healthcare experts to interpret the images. Thus, Artificial Intelligence (AI) technologies, such as deep learning (DL) models, play an integral part in developing automated diagnosis processes using CXR images. This study therefore designs a sine cosine optimization with DL-based disease detection and classification (SCODL-DDC) technique for COVID-19 on CXR images. The proposed SCODL-DDC technique examines CXR images to identify and classify the occurrence of COVID-19. In particular, it uses the EfficientNet model for feature vector generation, with hyperparameters adjusted by the SCO algorithm. Furthermore, a quantum neural network (QNN) model is employed for the COVID-19 classification process. Finally, the equilibrium optimizer (EO) is exploited for optimal parameter selection of the QNN model, showing the novelty of the work. The experimental results exhibit the superior performance of the SCODL-DDC technique over other approaches.
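The abstract does not detail the SCO hyperparameter search; the canonical sine cosine algorithm position update it presumably builds on looks like the sketch below, where the population size, bounds, and schedule are invented for illustration.

```python
import numpy as np

def sca_step(positions, best, t, t_max, rng):
    """One sine cosine algorithm update: each candidate moves along a
    sine or cosine path toward (or around) the best solution found so
    far, with amplitude r1 shrinking linearly over iterations."""
    r1 = 2.0 * (1.0 - t / t_max)
    r2 = rng.uniform(0.0, 2.0 * np.pi, positions.shape)
    r3 = rng.uniform(0.0, 2.0, positions.shape)
    r4 = rng.uniform(0.0, 1.0, positions.shape)
    sin_move = positions + r1 * np.sin(r2) * np.abs(r3 * best - positions)
    cos_move = positions + r1 * np.cos(r2) * np.abs(r3 * best - positions)
    return np.where(r4 < 0.5, sin_move, cos_move)

rng = np.random.default_rng(1)
population = rng.uniform(-1, 1, size=(5, 3))   # 5 candidate hyperparameter sets
updated = sca_step(population, best=population[0], t=1, t_max=10, rng=rng)
```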
Affiliation(s)
- Turki Althaqafi: Information Systems Department, HECI School, Dar Al-Hekma University, Jeddah 34801, Saudi Arabia
- Abdullah S Al-Malaise Al-Ghamdi: Information Systems Department, HECI School, Dar Al-Hekma University, Jeddah 34801, Saudi Arabia; Information Systems Department, Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah 21589, Saudi Arabia
- Mahmoud Ragab: Information Technology Department, Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah 21589, Saudi Arabia; Mathematics Department, Faculty of Science, Al-Azhar University, Naser City 11884, Cairo, Egypt
12
Wali A, Ahmad M, Naseer A, Tamoor M, Gilani S. StynMedGAN: Medical images augmentation using a new GAN model for improved diagnosis of diseases. Journal of Intelligent & Fuzzy Systems 2023. [DOI: 10.3233/jifs-223996]
Abstract
Deep networks require a considerable amount of training data; otherwise, they generalize poorly. Data augmentation techniques help a network generalize better by providing more variety in the training data. Standard augmentation techniques, such as flipping and scaling, produce new data that is a modified version of the original. Generative adversarial networks (GANs), by contrast, have been designed to generate genuinely new data that can be exploited. In this paper, we propose a new GAN model, named StynMedGAN, for synthetically generating medical images to improve the performance of classification models. StynMedGAN builds upon the state-of-the-art StyleGANv2, which has produced remarkable results generating all kinds of natural images. We introduce a regularization term, a normalized loss factor added to the existing discriminator loss of StyleGANv2, which forces the generator to produce normalized images and penalizes it when it fails. Because medical imaging modalities such as X-rays, CT scans, and MRIs differ in nature, we show that the proposed GAN extends the capacity of StyleGANv2 to handle medical images better. The new model is applied to three types of medical imaging, X-rays, CT scans, and MRI, to produce more data for classification tasks. To validate the effectiveness of the proposed model, three classifiers (CNN, DenseNet121, and VGG-16) are used. Results show that classifiers trained with StynMedGAN-augmented data outperform methods that used only the original data. The proposed model achieved 100%, 99.6%, and 100% accuracy for chest X-ray, chest CT scan, and brain MRI classification, respectively. The results are promising and point to a potentially important resource that practitioners and radiologists can use to diagnose different diseases.
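The abstract does not give the exact form of the normalized loss factor, so the sketch below is only a plausible reading: StyleGAN2's non-saturating logistic discriminator loss plus a hypothetical quadratic penalty on generated pixels that leave the normalized [0, 1] range. The penalty form and the weight `lam` are assumptions, not the paper's actual regularizer:

```python
import math

def softplus(x):
    # Numerically stable log(1 + e^x).
    return math.log1p(math.exp(-abs(x))) + max(x, 0.0)

def d_loss_with_norm_reg(d_real, d_fake, fake_imgs, lam=0.1):
    """Non-saturating logistic discriminator loss (as in StyleGAN2) plus a
    hypothetical normalization penalty on generated pixel values."""
    adv = (sum(softplus(-r) for r in d_real) / len(d_real)
           + sum(softplus(f) for f in d_fake) / len(d_fake))
    pixels = [p for img in fake_imgs for p in img]
    # Quadratic penalty for pixels outside the normalized [0, 1] range.
    pen = sum(max(0.0, p - 1.0) ** 2 + max(0.0, -p) ** 2 for p in pixels) / len(pixels)
    return adv + lam * pen

# A well-normalized fake image contributes no penalty; an out-of-range one does.
loss_ok = d_loss_with_norm_reg([2.0], [-2.0], [[0.2, 0.8]])
loss_bad = d_loss_with_norm_reg([2.0], [-2.0], [[1.5, -0.5]])
```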
Affiliation(s)
- Aamir Wali: Department of Computer Science, National University of Computer and Emerging Science, Faisal Town, Lahore, Pakistan
- Muzammil Ahmad: Department of Computer Science, National University of Computer and Emerging Science, Faisal Town, Lahore, Pakistan
- Asma Naseer: Department of Computer Science, National University of Computer and Emerging Science, Faisal Town, Lahore, Pakistan
- Maria Tamoor: Department of Computer Science, Forman Christian College University, Zahoor Ilahi Road, Lahore, Pakistan
- S.A.M. Gilani: Department of Computer Science, National University of Computer and Emerging Science, Faisal Town, Lahore, Pakistan
13
A deep learning architecture for multi-class lung diseases classification using chest X-ray (CXR) images. Alexandria Engineering Journal 2023; 64:923-935. [PMCID: PMC9626367 DOI: 10.1016/j.aej.2022.10.053]
Abstract
In 2019, the world experienced the rapid outbreak of the COVID-19 pandemic, creating an alarming situation worldwide. The virus targets the respiratory system, causing pneumonia along with other symptoms such as fatigue, dry cough, and fever, and can be mistakenly diagnosed as pneumonia, lung cancer, or TB. Early diagnosis of COVID-19 is therefore critical, since the disease can provoke patient mortality. Chest X-ray (CXR) is commonly employed in the healthcare sector, where a diagnosis that is both quick and precise can be supplied. Deep learning algorithms have shown extraordinary capabilities in lung disease detection and classification; they facilitate and expedite the diagnosis process and save time for medical practitioners. In this paper, a deep learning (DL) architecture for multi-class classification of pneumonia, lung cancer, tuberculosis (TB), lung opacity, and most recently COVID-19 is proposed. A large collection of CXR images (3615 COVID-19, 6012 lung opacity, 5870 pneumonia, 20,000 lung cancer, 1400 tuberculosis, and 10,192 normal) was resized, normalized, and randomly split to fit the DL requirements. For classification, we utilized a pre-trained VGG19 model followed by three convolutional neural network (CNN) blocks for feature extraction and a fully connected network at the classification stage. The experimental results revealed that our proposed VGG19 + CNN outperformed other existing work with 96.48% accuracy, 93.75% recall, 97.56% precision, 95.62% F1 score, and 99.82% area under the curve (AUC). The proposed model delivered superior performance, allowing healthcare practitioners to diagnose and treat patients more quickly and efficiently.
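The resize/normalize/split preprocessing mentioned above can be sketched as follows. The 80/20 split fraction and the fixed seed are assumptions for illustration, since the abstract does not state the exact split:

```python
import random

def normalize_and_split(samples, train_frac=0.8, seed=42):
    """Sketch of the preprocessing step: scale 8-bit pixel values to [0, 1]
    and randomly split labelled images into train/validation sets."""
    rng = random.Random(seed)
    prepped = [([p / 255.0 for p in img], label) for img, label in samples]
    rng.shuffle(prepped)
    cut = int(len(prepped) * train_frac)
    return prepped[:cut], prepped[cut:]

# Ten tiny stand-in "images" with cyclic class labels.
data = [([0, 128, 255], i % 3) for i in range(10)]
train, val = normalize_and_split(data)
```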
14
Chakraborty C, Othman SB, Almalki FA, Sakli H. FC-SEEDA: fog computing-based secure and energy efficient data aggregation scheme for Internet of healthcare Things. Neural Comput Appl 2023. [DOI: 10.1007/s00521-023-08270-0]
15
Malik H, Naeem A, Naqvi RA, Loh WK. DMFL_Net: A Federated Learning-Based Framework for the Classification of COVID-19 from Multiple Chest Diseases Using X-rays. Sensors (Basel) 2023; 23:743. [PMID: 36679541 PMCID: PMC9864925 DOI: 10.3390/s23020743]
Abstract
Coronavirus Disease 2019 (COVID-19) is still a threat to global health and safety, and it is anticipated that deep learning (DL) will be the most effective way of detecting COVID-19 and other chest diseases such as lung cancer (LC), tuberculosis (TB), pneumothorax (PneuTh), and pneumonia (Pneu). However, data sharing across hospitals is hampered by patients' right to privacy, leading to unexpected results from deep neural network (DNN) models. Federated learning (FL) is a game-changing concept, since it allows clients to train models together without sharing their source data with anybody else. Few studies, however, focus on improving the model's accuracy and stability, whereas most existing FL-based COVID-19 detection techniques aim to optimize secondary objectives such as latency, energy usage, and privacy. In this work, we design a novel model named decision-making-based federated learning network (DMFL_Net) for medical diagnostic image analysis to distinguish COVID-19 from four distinct chest disorders: LC, TB, PneuTh, and Pneu. The proposed DMFL_Net model gathers data from a variety of hospitals, constructs the model using DenseNet-169, and produces accurate predictions from information that is kept secure and only released to authorized individuals. Extensive experiments were carried out with chest X-rays (CXR), and the performance of the proposed model was compared with two transfer learning (TL) models, VGG-19 and VGG-16, in terms of accuracy (ACC), precision (PRE), recall (REC), specificity (SPF), and F1-measure. Additionally, the DMFL_Net model was compared with the default FL configurations. The proposed DMFL_Net + DenseNet-169 model achieves an accuracy of 98.45%, outperforms other approaches in classifying COVID-19 from the four chest diseases, and successfully protects the privacy of data among diverse clients.
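The baseline aggregation that FL frameworks like this build on can be sketched as federated averaging (FedAvg): each hospital trains locally and only model weights travel to the server, never raw CXR data. This is the generic scheme, not DMFL_Net's specific decision-making variant; weights are flat lists of floats here, while real DenseNet-169 tensors average the same way:

```python
def fed_avg(client_weights, client_sizes):
    """Server-side FedAvg step: average client model weights, weighted by
    each client's local dataset size."""
    total = sum(client_sizes)
    n = len(client_weights[0])
    return [sum(w[i] * s for w, s in zip(client_weights, client_sizes)) / total
            for i in range(n)]

# Three hospitals with different amounts of local data contribute one round.
global_w = fed_avg([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]], [100, 100, 200])
```

The hospital with 200 local images pulls the average toward its weights, which is exactly the size-weighted behaviour FedAvg intends.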
Affiliation(s)
- Hassaan Malik: Department of Computer Science, University of Management and Technology, Lahore 54000, Pakistan
- Ahmad Naeem: Department of Computer Science, University of Management and Technology, Lahore 54000, Pakistan
- Rizwan Ali Naqvi: Department of Unmanned Vehicle Engineering, Sejong University, Seoul 05006, Republic of Korea
- Woong-Kee Loh: School of Computing, Gachon University, Seongnam 13120, Republic of Korea
16
Tuncer I, Barua PD, Dogan S, Baygin M, Tuncer T, Tan RS, Yeong CH, Acharya UR. Swin-textural: A novel textural features-based image classification model for COVID-19 detection on chest computed tomography. Informatics in Medicine Unlocked 2023; 36:101158. [PMID: 36618887 PMCID: PMC9804964 DOI: 10.1016/j.imu.2022.101158]
Abstract
Background: Chest computed tomography (CT) has a high sensitivity for detecting COVID-19 lung involvement and is widely used for diagnosis and disease monitoring. We proposed a new image classification model, swin-textural, that combines swin-based patch division with textural feature extraction for automated diagnosis of COVID-19 on chest CT images. The main objective of this work is to evaluate the performance of the swin architecture in feature engineering. Material and method: We used a public dataset comprising 2167, 1247, and 757 (total 4171) transverse chest CT images belonging to 80, 80, and 50 (total 210) subjects with COVID-19, other non-COVID lung conditions, and normal lung findings, respectively. In our model, resized 420 × 420 input images were divided using uniform square patches of incremental dimensions, which yielded ten feature extraction layers. At each layer, local binary pattern and local phase quantization operations extracted textural features from individual patches as well as the undivided input image. Iterative neighborhood component analysis was used to select the most informative set of features to form ten selected feature vectors, and also to select an 11th vector from among the top selected feature vectors with accuracy >97.5%. The downstream kNN classifier calculated 11 prediction vectors, from which iterative hard majority voting generated another nine voted prediction vectors. Finally, the best result among the twenty was determined using a greedy algorithm. Results: Swin-textural attained 98.71% three-class classification accuracy, outperforming published deep learning models trained on the same dataset, and has linear time complexity. Conclusions: Our handcrafted, computationally lightweight swin-textural model can detect COVID-19 accurately on chest CT images with low misclassification rates and can be implemented in hospitals for efficient automated screening. These findings demonstrate that swin-textural is a self-organized, highly accurate, and lightweight image classification model that outperforms the compared deep learning models on this dataset.
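The per-patch textural descriptor can be illustrated with a minimal 8-neighbour local binary pattern (LBP): each interior pixel is encoded by thresholding its neighbours against it, and the histogram of codes is the feature vector for one patch. The patch sizes, local phase quantization stage, and feature selection of the actual pipeline are omitted, so treat this as a sketch of the idea rather than the paper's exact operator:

```python
def lbp_histogram(img):
    """256-bin histogram of 8-neighbour LBP codes for one grayscale patch,
    given as a list of rows of integer pixel values."""
    h, w = len(img), len(img[0])
    hist = [0] * 256
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            c = img[y][x]
            code = 0
            for bit, (dy, dx) in enumerate(offs):
                if img[y + dy][x + dx] >= c:  # neighbour >= centre sets the bit
                    code |= 1 << bit
            hist[code] += 1
    return hist

# A flat 4x4 patch: every neighbour equals the centre, so all 4 interior
# pixels produce code 255.
hist = lbp_histogram([[5] * 4 for _ in range(4)])
```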
Affiliation(s)
- Ilknur Tuncer: Elazig Governorship, Interior Ministry, Elazig, Turkey
- Prabal Datta Barua: School of Business (Information System), University of Southern Queensland, Toowoomba, QLD 4350, Australia; Faculty of Engineering and Information Technology, University of Technology Sydney, Sydney, NSW 2007, Australia
- Sengul Dogan: Department of Digital Forensics Engineering, College of Technology, Firat University, Elazig, Turkey
- Mehmet Baygin: Department of Computer Engineering, Faculty of Engineering, Ardahan University, Ardahan, Turkey
- Turker Tuncer: Department of Digital Forensics Engineering, College of Technology, Firat University, Elazig, Turkey
- Ru-San Tan: Department of Cardiology, National Heart Centre Singapore, Singapore; Duke-NUS Medical School, Singapore
- Chai Hong Yeong: School of Medicine, Faculty of Health and Medical Sciences, Taylor's University, 47500 Subang Jaya, Malaysia
- U Rajendra Acharya: Department of Electronics and Computer Engineering, Ngee Ann Polytechnic, 599489, Singapore; Department of Biomedical Engineering, School of Science and Technology, SUSS University, Singapore; Department of Biomedical Informatics and Medical Engineering, Asia University, Taichung, Taiwan
17
Deepak G, Madiajagan M, Kulkarni S, Ahmed AN, Gopatoti A, Ammisetty V. MCSC-Net: COVID-19 detection using deep-Q-neural network classification with RFNN-based hybrid whale optimization. Journal of X-Ray Science and Technology 2023; 31:483-509. [PMID: 36872839 DOI: 10.3233/xst-221360]
Abstract
BACKGROUND: COVID-19 is a highly dangerous virus, and its accurate diagnosis saves lives and slows its spread. However, COVID-19 diagnosis takes time and requires trained professionals. Therefore, developing a deep learning (DL) model on low-radiation imaging modalities like chest X-rays (CXRs) is needed. OBJECTIVE: Existing DL models have failed to diagnose COVID-19 and other lung diseases accurately. This study implements a multi-class CXR segmentation and classification network (MCSC-Net) to detect COVID-19 using CXR images. METHODS: Initially, a hybrid median bilateral filter (HMBF) is applied to CXR images to reduce image noise and enhance the COVID-19-infected regions. Then, a skip-connection-based residual network-50 (SC-ResNet50) is used to segment (localize) COVID-19 regions. Features are further extracted from the CXRs using a robust feature neural network (RFNN). Since the initial features contain joint COVID-19, normal, bacterial pneumonia, and viral pneumonia properties, conventional methods fail to separate the class of each disease-based feature. To extract the distinct features of each class, the RFNN includes a disease-specific feature separate attention mechanism (DSFSAM). Furthermore, the hunting behavior of the hybrid whale optimization algorithm (HWOA) is used to select the best features in each class. Finally, a deep-Q neural network (DQNN) classifies the CXRs into multiple disease classes. RESULTS: The proposed MCSC-Net shows enhanced accuracies of 99.09% for 2-class, 99.16% for 3-class, and 99.25% for 4-class classification of CXR images compared with other state-of-the-art approaches. CONCLUSION: The proposed MCSC-Net enables multi-class segmentation and classification of CXR images with high accuracy. Thus, together with gold-standard clinical and laboratory tests, this new method is promising for use in future clinical practice to evaluate patients.
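The wrapper-style feature selection step can be sketched generically: candidate binary masks over the feature set are scored by a fitness function (e.g. validation accuracy) and better masks are kept. The whale-inspired update rules of the actual HWOA are replaced by simple hill climbing here for brevity, so this shows only the shape of the search, not the paper's algorithm:

```python
import random

def select_features(score, n_feats, iters=200, seed=1):
    """Hill-climbing stand-in for metaheuristic feature selection: flip one
    feature in or out of the mask per step, keeping non-worsening masks."""
    rng = random.Random(seed)
    mask = [rng.random() < 0.5 for _ in range(n_feats)]
    best = score(mask)
    for _ in range(iters):
        cand = mask[:]
        cand[rng.randrange(n_feats)] ^= True  # toggle one feature
        s = score(cand)
        if s >= best:
            mask, best = cand, s
    return mask, best

# Toy fitness: reward agreeing with a known-good mask over 4 features.
target = [True, False, True, False]
mask, fit = select_features(lambda m: sum(a == b for a, b in zip(m, target)), 4)
```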
Affiliation(s)
- Gerard Deepak: Department of Computer Science and Engineering, Manipal Institute of Technology Bengaluru, Manipal Academy of Higher Education, Manipal, India
- M Madiajagan: School of Computer Science and Engineering, Vellore Institute of Technology, Vellore, Tamil Nadu, India
- Sanjeev Kulkarni: Department of Information Science and Engineering, Yenepoya Institute of Technology, Mangalore, Karnataka, India
- Ahmed Najat Ahmed: Department of Computer Engineering, Lebanese French University, Erbil, Iraq
- Anandbabu Gopatoti: Department of Electronics and Communication Engineering, Hindusthan College of Engineering and Technology, Coimbatore, Tamil Nadu, India
- Veeraswamy Ammisetty: Department of Computer Science and Engineering, Koneru Lakshmaiah Education Foundation, Vaddeswaram, Guntur, Andhra Pradesh, India
18
D2BOF-COVIDNet: A Framework of Deep Bayesian Optimization and Fusion-Assisted Optimal Deep Features for COVID-19 Classification Using Chest X-ray and MRI Scans. Diagnostics (Basel) 2022; 13:101. [PMID: 36611393 PMCID: PMC9818184 DOI: 10.3390/diagnostics13010101]
Abstract
BACKGROUND AND OBJECTIVE: In 2019, the coronavirus disease (COVID-19) was detected in China and went on to affect millions of people around the world; on 11 March 2020, the WHO declared the disease a pandemic. Currently, more than 200 countries have been affected. Manual diagnosis of this disease using chest X-ray (CXR) images and magnetic resonance imaging (MRI) is time consuming and always requires an expert; therefore, researchers have introduced several computerized techniques based on computer vision methods. Recent computerized techniques face challenges such as low-contrast CXR images, manual initialization of hyperparameters, and redundant features that mislead the classification accuracy. METHODS: In this paper, we propose a novel framework for COVID-19 classification using deep Bayesian optimization and improved canonical correlation analysis (ICCA). In this framework, we initially perform data augmentation for better training of the selected deep models. After that, two pre-trained deep models (ResNet50 and InceptionV3) are employed and trained using transfer learning, with the hyperparameters of both models initialized through Bayesian optimization. Both trained models are used for feature extraction, and the features are fused using an ICCA-based approach. The fused features are further optimized using an improved tree growth optimization algorithm and finally classified using a neural network classifier. RESULTS: The experimental process was conducted on five publicly available datasets and achieved accuracies of 99.6%, 98.5%, 99.9%, 99.5%, and 100%. CONCLUSION: The comparison with recent methods and a t-test-based analysis showed the significance of the proposed framework.
19
Rastegar H, Giveki D, Choubin M. EEG signals classification using a new radial basis function neural network and jellyfish meta-heuristic algorithm. Evolutionary Intelligence 2022:1-12. [PMID: 36590928 PMCID: PMC9789523 DOI: 10.1007/s12065-022-00802-2]
Abstract
The purpose of this paper is to investigate a new method for EEG signal classification. A powerful method for detecting these signals can greatly contribute to areas such as robotic arms for disabled people, mind reading, and lie detection tools. To this end, this study makes two contributions. As the major contribution, a new classifier based on a radial basis function neural network (RBFNN) is presented. Because the center determination method of an RBFNN classifier has a high impact on the final classification results, we adopt the jellyfish search (JS) algorithm for choosing the centers of the Gaussian functions in the hidden layer of the RBFNN classifier. Additionally, the locally linear embedding (LLE) technique is investigated for reducing the dimensionality of EEG signals. Two series of experiments are designed to validate our proposals. In the first set, the proposed RBFNN classifier is compared with other state-of-the-art RBFNN classifiers. In the second set, the performance of the proposed EEG signal classification method is evaluated on a challenging dataset. The experimental results demonstrate the superiority of our proposed method, even compared with methods based on convolutional neural networks. Supplementary Information: The online version contains supplementary material available at 10.1007/s12065-022-00802-2.
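The RBFNN forward pass being tuned here is small enough to write out: Gaussian hidden units centred on the points the jellyfish search selects, followed by a linear output layer. All parameter values below are illustrative, not taken from the paper:

```python
import math

def rbf_forward(x, centers, widths, out_weights):
    """Radial basis function network forward pass: Gaussian hidden
    activations around `centers`, then a linear readout per output unit."""
    hidden = [math.exp(-sum((xi - ci) ** 2 for xi, ci in zip(x, c))
                       / (2 * s ** 2))
              for c, s in zip(centers, widths)]
    return [sum(h * w for h, w in zip(hidden, ws)) for ws in out_weights]

# Two Gaussian units, one output; the input sits exactly on the first centre,
# so that unit fires at 1.0 and the distant unit is effectively silent.
y = rbf_forward([0.0, 0.0],
                centers=[[0.0, 0.0], [3.0, 3.0]],
                widths=[1.0, 1.0],
                out_weights=[[1.0, 0.0]])
```

Good center placement matters because each hidden unit only responds near its center, which is why the paper dedicates a metaheuristic to choosing them.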
Affiliation(s)
- Homayoun Rastegar: Department of Computer Engineering, Malayer University, P. O. Box 65719-95863, Malayer, Iran
- Davar Giveki: Department of Computer Engineering, Malayer University, P. O. Box 65719-95863, Malayer, Iran
- Morteza Choubin: Department of Electrical Engineering, Malayer University, P. O. Box 65719-95863, Malayer, Iran
20
Dubey AK, Mohbey KK. Combined Cloud-Based Inference System for the Classification of COVID-19 in CT-Scan and X-Ray Images. New Generation Computing 2022; 41:61-84. [PMID: 36439302 PMCID: PMC9676871 DOI: 10.1007/s00354-022-00195-x]
Abstract
In the past few years, most work has been done on the classification of COVID-19 using different image types such as CT scan, X-ray, and ultrasound, but none of it is capable of handling each of these image types on a single common platform to identify whether a person is suffering from COVID or not. We therefore realized there should be a platform that identifies COVID-19 in both CT-scan and X-ray images on the fly. To fulfill this need, we propose an AI model that distinguishes CT-scan images from X-ray images and then uses this inference to classify them as COVID positive or negative. The proposed model uses the Inception architecture under the hood and trains on the open-source extended COVID-19 dataset, which contains plenty of images of both types and is 4 GB in size. We achieved an accuracy of 100%, average macro-precision of 100%, average macro-recall of 100%, average macro F1-score of 100%, and an AUC score of 99.6%. Furthermore, a cloud-based architecture is proposed to massively scale and load-balance as the number of user requests rises, delivering a service with minimal latency to all users.
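The two-stage inference described above, first deciding the modality and then applying the matching COVID classifier, can be sketched with plain callables standing in for the trained models (the labels and model interfaces here are assumptions for illustration):

```python
def cascade_classify(image, modality_model, covid_models):
    """Combined inference sketch: a first model decides whether the input is
    a CT scan or an X-ray, then the modality-specific model classifies it
    as COVID positive or negative."""
    modality = modality_model(image)          # "ct" or "xray"
    return modality, covid_models[modality](image)

# Stub models routing a "CT" image to the CT-specific classifier.
mod, verdict = cascade_classify(
    "img-bytes",
    lambda im: "ct",
    {"ct": lambda im: "positive", "xray": lambda im: "negative"},
)
```

The design point is that the modality decision picks which downstream model runs, so one endpoint can serve both image types.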
Affiliation(s)
- Ankit Kumar Dubey: Department of Computer Science, Central University of Rajasthan, Ajmer, India
21
Wang W, Liu S, Xu H, Deng L. COVIDX-LwNet: A Lightweight Network Ensemble Model for the Detection of COVID-19 Based on Chest X-ray Images. Sensors (Basel) 2022; 22:8578. [PMID: 36366277 PMCID: PMC9655773 DOI: 10.3390/s22218578]
Abstract
Recently, the COVID-19 pandemic has put a lot of pressure on health systems around the world. One of the most common ways to detect COVID-19 is to use chest X-ray images, which have the advantage of being cheap and fast. However, in the early days of the COVID-19 outbreak, most studies applied pretrained convolutional neural network (CNN) models, and the features produced by the last convolutional layer were passed directly into the classification head. In this study, the proposed ensemble model consists of three lightweight networks, Xception, MobileNetV2 and NasNetMobile, as the original feature extractors; three base classifiers are then obtained by adding a coordinate attention module, an LSTM, and a new classification head to each extractor. The classification results from the three base classifiers are then fused by a confidence fusion method. Three publicly available chest X-ray datasets for COVID-19 testing were considered. Ternary (COVID-19, normal, other pneumonia) and quaternary (COVID-19, normal, bacterial pneumonia, viral pneumonia) classification was performed on the first two datasets, achieving high accuracy rates of 95.56% and 91.20%, respectively. The third dataset was used to compare the performance of the model with other models and to assess its generalization ability across datasets. We performed a thorough ablation study on the first dataset to understand the impact of each proposed component, and we also produced visualizations. These saliency maps not only explain key prediction decisions of the model but also help radiologists locate areas of infection. Through extensive experiments, the results obtained by the proposed method were found to be comparable to state-of-the-art methods.
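One simple way to realize a confidence fusion of base classifiers is to weight each classifier's softmax output by its own confidence (its maximum class probability) before averaging. The abstract does not spell out the paper's exact fusion rule, so this weighting is an assumption used to illustrate the idea:

```python
def confidence_fusion(prob_vectors):
    """Fuse per-classifier probability vectors, weighting each classifier
    by its confidence (max probability), then return (label, fused probs)."""
    n_cls = len(prob_vectors[0])
    weights = [max(p) for p in prob_vectors]
    total = sum(weights)
    fused = [sum(w * p[k] for w, p in zip(weights, prob_vectors)) / total
             for k in range(n_cls)]
    return fused.index(max(fused)), fused

# Three base classifiers voting on (COVID-19, normal, other pneumonia):
label, fused = confidence_fusion([[0.7, 0.2, 0.1],
                                  [0.6, 0.3, 0.1],
                                  [0.4, 0.5, 0.1]])
```

Because each input vector sums to 1 and the weights are normalized, the fused vector is still a valid probability distribution.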
22
Hamza A, Attique Khan M, Wang SH, Alhaisoni M, Alharbi M, Hussein HS, Alshazly H, Kim YJ, Cha J. COVID-19 classification using chest X-ray images based on fusion-assisted deep Bayesian optimization and Grad-CAM visualization. Front Public Health 2022; 10:1046296. [PMID: 36408000 PMCID: PMC9672507 DOI: 10.3389/fpubh.2022.1046296]
Abstract
The COVID-19 virus's rapid global spread has caused millions of illnesses and deaths, with disastrous consequences for people's lives, public health, and the global economy. Clinical studies have revealed a link between the severity of COVID-19 cases and the amount of virus present in infected people's lungs. Imaging techniques such as computed tomography (CT) and chest X-rays (CXR) can detect COVID-19. Manual inspection of these images is a difficult process, so computerized techniques are widely used. Deep convolutional neural networks (DCNNs) are a type of machine learning frequently used in computer vision, particularly in medical imaging, to detect and classify infected regions, and they can assist medical personnel in detecting patients with COVID-19. In this article, a Bayesian-optimized DCNN and explainable-AI-based framework is proposed for the classification of COVID-19 from chest X-ray images. The proposed method starts with a multi-filter contrast enhancement technique that increases the visibility of the infected part. Two pre-trained deep models, EfficientNet-B0 and MobileNet-V2, are fine-tuned according to the target classes and then trained by employing Bayesian optimization (BO); through BO, hyperparameters are selected instead of statically initialized. Features are extracted from the trained models and fused using a slicing-based serial fusion approach. The fused features are classified using machine learning classifiers for the final classification. Moreover, visualization is performed using Grad-CAM, which highlights the infected part in the image. Three publicly available COVID-19 datasets are used for the experimental process, obtaining improved accuracies of 98.8%, 97.9%, and 99.4%, respectively.
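The slicing-based serial fusion step is not specified in detail in the abstract; one plausible reading, sketched below as an assumption, is that the two backbone feature vectors are cut into fixed-length slices that are interleaved rather than simply concatenated end-to-end:

```python
def sliced_serial_fusion(feat_a, feat_b, slice_len=2):
    """Hypothetical slicing-based serial fusion: interleave fixed-length
    slices of the two feature vectors (plain concatenation would instead
    append all of feat_b after feat_a)."""
    fused = []
    for i in range(0, max(len(feat_a), len(feat_b)), slice_len):
        fused.extend(feat_a[i:i + slice_len])
        fused.extend(feat_b[i:i + slice_len])
    return fused

# Features from the two backbones, interleaved two elements at a time.
fused = sliced_serial_fusion([1, 2, 3, 4], [5, 6, 7, 8])
```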
Affiliation(s)
- Ameer Hamza: Department of Computer Science, HITEC University, Taxila, Pakistan
- Muhammad Attique Khan: Department of Computer Science, HITEC University, Taxila, Pakistan
- Shui-Hua Wang: Department of Mathematics, University of Leicester, Leicester, United Kingdom
- Majed Alhaisoni: Computer Sciences Department, College of Computer and Information Sciences, Princess Nourah Bint Abdulrahman University, Riyadh, Saudi Arabia
- Meshal Alharbi: Department of Computer Science, College of Computer Engineering and Sciences, Prince Sattam Bin Abdulaziz University, Al-Kharj, Saudi Arabia
- Hany S. Hussein: Electrical Engineering Department, College of Engineering, King Khalid University, Abha, Saudi Arabia; Electrical Engineering Department, Faculty of Engineering, Aswan University, Aswan, Egypt
- Hammam Alshazly: Faculty of Computers and Information, South Valley University, Qena, Egypt
- Ye Jin Kim: Department of Computer Science, Hanyang University, Seoul, South Korea
- Jaehyuk Cha: Department of Computer Science, Hanyang University, Seoul, South Korea
23
Wei J, Chen P, Liu B, Han Y. A Multienergy Computed Tomography Method without Image Segmentation or Prior Knowledge of X-ray Spectra or Materials. Heliyon 2022; 8:e11584. [DOI: 10.1016/j.heliyon.2022.e11584]
24
Sree Ganesh TN, Satish R, Sridhar R. Learning effective embedding for automated COVID-19 prediction from chest X-ray images. Multimedia Systems 2022; 29:739-751. [PMID: 36310764 PMCID: PMC9596346 DOI: 10.1007/s00530-022-01015-4]
Abstract
The pandemic that SARS-CoV-2 originated in 2019 continues to cause serious havoc to the global population's health, economy, and livelihood. A critical way to suppress and restrain this pandemic is the early detection of COVID-19, which will help to control the virus. Chest X-rays are one of the more straightforward ways to detect the COVID-19 virus compared with standard methods like CT scans and RT-PCR diagnosis, which are complex, expensive, and time consuming. Our survey of the literature shows that researchers are actively working toward an efficient deep learning model that produces an unbiased detection of COVID-19 from chest X-ray images. In this work, we propose a novel convolutional neural network model based on supervised classification that simultaneously computes an identification and a verification loss. We adopt a transfer learning approach using models pretrained on the ImageNet dataset, such as AlexNet and VGG16, as backbone models, and use data augmentation techniques to address class imbalance and boost the classifier's performance. Our proposed classifier architecture ensures unbiased, high-accuracy results, outperforming existing deep learning models for COVID-19 detection from chest X-ray images and producing state-of-the-art performance. It shows strong and robust performance and proves to be easily deployable and scalable, increasing the efficiency of analyzing chest X-ray images for high-accuracy detection of coronavirus.
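Jointly computing an identification and a verification loss typically means adding a cross-entropy term over class labels to a contrastive term over embedding pairs. The weighting `lam` and the margin below are assumptions; the paper's exact formulation is not given in the abstract:

```python
import math

def combined_loss(probs, true_idx, emb_a, emb_b, same_class,
                  margin=1.0, lam=0.5):
    """Identification loss (cross-entropy on predicted class probabilities)
    plus a contrastive verification loss on an embedding pair: pull
    same-class embeddings together, push different-class pairs apart up to
    `margin`."""
    ident = -math.log(probs[true_idx])
    d2 = sum((a - b) ** 2 for a, b in zip(emb_a, emb_b))
    if same_class:
        verif = d2
    else:
        verif = max(0.0, margin - math.sqrt(d2)) ** 2
    return ident + lam * verif

# Identical same-class embeddings: only the cross-entropy term remains.
l = combined_loss([0.5, 0.5], 0, [0.0], [0.0], True)
```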
Affiliation(s)
- Sree Ganesh T N: Department of Computer Science and Engineering, National Institute of Technology, Tiruchirappalli, Tamil Nadu 620015, India
- Rishi Satish: Department of Computer Science and Engineering, National Institute of Technology, Tiruchirappalli, Tamil Nadu 620015, India
- Rajeswari Sridhar: Department of Computer Science and Engineering, National Institute of Technology, Tiruchirappalli, Tamil Nadu 620015, India
| |
25
Saravanan S, Kumar VV, Sarveshwaran V, Indirajithu A, Elangovan D, Allayear SM. Glioma Brain Tumor Detection and Classification Using Convolutional Neural Network. Computational and Mathematical Methods in Medicine 2022; 2022:4380901. [PMID: 36277002] [PMCID: PMC9586767] [DOI: 10.1155/2022/4380901]
Abstract
Classification of brain tumor images plays a vital role in the medical imaging domain, directly helping clinicians understand tumor severity and choose an appropriate course of action. Magnetic resonance imaging is used to analyze brain tissues and examine the different regions of the brain. We propose convolutional neural network database learning with a neighboring-network limitation (CDBLNL) for brain tumor image classification. The proposed architecture is built on multilayer metadata learning integrated with a CNN layer to deliver accurate information. Metadata-based vector encoding is used, and the coding estimated for the extra dimension is sparse. To preserve the supervised data in geometric form, the atoms of the neighboring limitation are built on a well-structured k-neighbored network. The resulting representation is robust and well suited for classification. The system was evaluated on two datasets, BRATS and REMBRANDT, and the proposed brain MRI classification technique outperforms existing techniques.
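The "well-structured k-neighbored network" the abstract mentions is, at its core, a k-nearest-neighbour graph over feature vectors. A minimal sketch (with `knn_graph` as a hypothetical helper, not the paper's code) might look like:

```python
import math

def euclidean(a, b):
    """Euclidean distance between two equal-length feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def knn_graph(points, k):
    """Adjacency lists of a k-nearest-neighbour graph: each point is
    linked to the indices of its k closest other points."""
    graph = []
    for i, p in enumerate(points):
        dists = sorted((euclidean(p, q), j) for j, q in enumerate(points) if j != i)
        graph.append([j for _, j in dists[:k]])
    return graph
```

In the paper's setting the graph edges would constrain the sparse codes of neighbouring samples to stay close; that constraint itself is not reproduced here.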
Affiliation(s)
- S. Saravanan, Department of Computer Science and Engineering, Vel Tech Rangarajan Dr. Sagunthala R&D Institute of Science and Technology, Avadi, Chennai, India
- V. Vinoth Kumar, Department of Computer Science and Engineering, Jain (Deemed to Be University), Bangalore, India
- Velliangiri Sarveshwaran, Department of Computational Intelligence, SRM Institute of Science and Technology, Kattankulathur Campus, Chennai, India
- Alagiri Indirajithu, School of Information Technology and Engineering, Vellore Institute of Technology, Vellore, 632014, Tamil Nadu, India
- D. Elangovan, Department of Computer Science and Engineering, Panimalar Engineering College, Chennai, Tamil Nadu, India
- Shaikh Muhammad Allayear, Department of Multimedia and Creative Technology, Daffodil International University, Daffodil Smart City, Khagan, Ashulia, Dhaka, Bangladesh
26
Velu SR, Ravi V, Tabianan K. Predictive analytics of COVID-19 cases and tourist arrivals in ASEAN based on COVID-19 cases. Health and Technology 2022; 12:1237-1258. [PMID: 36246540] [PMCID: PMC9546420] [DOI: 10.1007/s12553-022-00701-7]
Abstract
Purpose: Research into predictive analytics, which predicts future values from historical data, is crucial. A method based on the Seasonal ARIMA (SARIMA) model is proposed here to forecast future COVID-19 cases. The model can also predict tourist arrivals in the tourism industry by factoring in COVID-19 during the pandemic. We present a model that uses time-series analysis to predict the impact of a pandemic event, in this case the spread of the coronavirus (COVID-19). Methods: The proposed approach outperformed the Autoregressive Integrated Moving Average (ARIMA) and Holt-Winters models in all experiments forecasting future values on the COVID-19 and tourism datasets, with the lowest mean absolute error (MAE), mean absolute percentage error (MAPE), mean squared error (MSE), and root mean squared error (RMSE). The SARIMA model predicts COVID-19 cases and tourist arrivals, with and without the COVID-19 pandemic, with less than 5% MAPE. Results: The proposed method provides a dashboard that shows COVID-19 and tourism-related information to end users. The tool can be deployed in the healthcare, tourism, and government sectors to monitor COVID-19 case counts and determine the correlation between COVID-19 cases and tourism. Conclusion: Management and stakeholders in the tourism industry are expected to benefit from this study when deciding whether to keep funding a given tourism business. The datasets, code, and all experiments are available for further research; details are included in the appendix.
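The forecast-quality metrics the comparison relies on (MAE, MAPE, MSE, RMSE) are straightforward to compute; a minimal sketch is below. The SARIMA fitting itself, typically done with a library such as statsmodels, is not reproduced here.

```python
import math

def mae(actual, forecast):
    """Mean absolute error between two equal-length series."""
    return sum(abs(a - f) for a, f in zip(actual, forecast)) / len(actual)

def mape(actual, forecast):
    """Mean absolute percentage error; assumes no actual value is zero."""
    return 100.0 * sum(abs((a - f) / a) for a, f in zip(actual, forecast)) / len(actual)

def mse(actual, forecast):
    """Mean squared error."""
    return sum((a - f) ** 2 for a, f in zip(actual, forecast)) / len(actual)

def rmse(actual, forecast):
    """Root mean squared error."""
    return math.sqrt(mse(actual, forecast))
```

The paper's "less than 5% MAPE" claim corresponds to `mape(actual, forecast) < 5.0` on its held-out horizon.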
Affiliation(s)
- Vinayakumar Ravi, Center for Artificial Intelligence, Prince Mohammad Bin Fahd University, Khobar, Saudi Arabia
- Kayalvily Tabianan, Faculty of Information Technology, Inti International University, Persiaran Perdana BBN, Putra Nilai, 71800 Nilai, Negeri Sembilan, Malaysia
27
Comprehensive Survey of Machine Learning Systems for COVID-19 Detection. J Imaging 2022; 8:jimaging8100267. [PMID: 36286361] [PMCID: PMC9604704] [DOI: 10.3390/jimaging8100267]
Abstract
The last two years are considered the most crucial and critical period of the COVID-19 pandemic, which has affected most aspects of life worldwide. The virus spreads quickly within a short period, increasing its associated fatality rate. From a clinical perspective, several diagnostic methods support early detection to limit virus propagation; however, their capabilities are limited and come with various challenges. Consequently, many studies have pursued automated COVID-19 detection that avoids manual intervention and allows accurate, fast decisions. As with other diseases and medical issues, Artificial Intelligence (AI) offers the medical community technical solutions that help doctors and radiologists diagnose from chest images. In this paper, a comprehensive review of these AI-based detection proposals is conducted. More than 200 papers were reviewed and analyzed, and 145 articles were examined in depth to characterize the proposed AI mechanisms applied to chest medical images. The associated advantages and shortcomings are illustrated and summarized, and several findings are drawn from a deep analysis of prior work using machine learning for COVID-19 detection, segmentation, and classification.
28
Albahli S, Meraj T, Chakraborty C, Rauf HT. AI-driven deep and handcrafted features selection approach for Covid-19 and chest related diseases identification. Multimedia Tools and Applications 2022; 81:37569-37589. [PMID: 35968412] [PMCID: PMC9362623] [DOI: 10.1007/s11042-022-13499-3]
Abstract
In identifying the various pneumonia types, a gap of 15% is reported to open every five years; to fill this gap, accurate detection of chest disease is required in healthcare to avoid serious issues in the future. Testing affected lungs for Coronavirus 2019 (COVID-19) with the same imaging modalities may reveal other chest diseases, and such misdiagnosis strongly calls for a multidisciplinary approach to correctly diagnosing chest-related diseases. Only a few works to date target pathological X-ray images, and many studies address only a single chest disease, which is not enough to automate chest disease detection. Few studies concern COVID-19 specifically, and in more cases it can be misclassified, since the detection techniques provide no generic solution for all types of chest disease; existing studies can only detect whether or not a person has COVID-19. The proposed work contributes significantly to detecting COVID-19 and other chest diseases by providing a useful analysis of chest-related diseases. One of our testing approaches achieves 90.22% accuracy across 15 types of chest disease, with 100% correct classification of COVID-19. Although the accuracy level is high, it would be wise to treat the proposed study as decision support until doctors can visually inspect the input images that lead to its detections.
Affiliation(s)
- Saleh Albahli, Department of Information Technology, College of Computer, Qassim University, Buraydah, Saudi Arabia
- Talha Meraj, Department of Computer Science, COMSATS University Islamabad, Wah Campus, 47040 Wah Cantt, Pakistan
- Hafiz Tayyab Rauf, Centre for Smart Systems, AI and Cybersecurity, Staffordshire University, Stoke-on-Trent, UK
29
Gomes R, Kamrowski C, Langlois J, Rozario P, Dircks I, Grottodden K, Martinez M, Tee WZ, Sargeant K, LaFleur C, Haley M. A Comprehensive Review of Machine Learning Used to Combat COVID-19. Diagnostics (Basel) 2022; 12:diagnostics12081853. [PMID: 36010204] [PMCID: PMC9406981] [DOI: 10.3390/diagnostics12081853]
Abstract
Coronavirus disease (COVID-19) has had a significant impact on global health since the start of the pandemic in 2019. As of June 2022, over 539 million cases have been confirmed worldwide with over 6.3 million deaths as a result. Artificial Intelligence (AI) solutions such as machine learning and deep learning have played a major part in this pandemic for the diagnosis and treatment of COVID-19. In this research, we review these modern tools deployed to solve a variety of complex problems. We explore research that focused on analyzing medical images using AI models for identification, classification, and tissue segmentation of the disease. We also explore prognostic models that were developed to predict health outcomes and optimize the allocation of scarce medical resources. Longitudinal studies were conducted to better understand COVID-19 and its effects on patients over a period of time. This comprehensive review of the different AI methods and modeling efforts will shed light on the role that AI has played and what path it intends to take in the fight against COVID-19.
Affiliation(s)
- Rahul Gomes (corresponding author), Department of Computer Science, University of Wisconsin-Eau Claire, Eau Claire, WI 54701, USA
- Connor Kamrowski, Department of Computer Science, University of Wisconsin-Eau Claire, Eau Claire, WI 54701, USA
- Jordan Langlois, Department of Computer Science, University of Wisconsin-Eau Claire, Eau Claire, WI 54701, USA
- Papia Rozario, Department of Geography and Anthropology, University of Wisconsin-Eau Claire, Eau Claire, WI 54701, USA
- Ian Dircks, Department of Computer Science, University of Wisconsin-Eau Claire, Eau Claire, WI 54701, USA
- Keegan Grottodden, Department of Computer Science, University of Wisconsin-Eau Claire, Eau Claire, WI 54701, USA
- Matthew Martinez, Department of Computer Science, University of Wisconsin-Eau Claire, Eau Claire, WI 54701, USA
- Wei Zhong Tee, Department of Computer Science, University of Wisconsin-Eau Claire, Eau Claire, WI 54701, USA
- Kyle Sargeant, Department of Computer Science, University of Wisconsin-Eau Claire, Eau Claire, WI 54701, USA
- Corbin LaFleur, Department of Computer Science, University of Wisconsin-Eau Claire, Eau Claire, WI 54701, USA
- Mitchell Haley, Department of Computer Science, University of Wisconsin-Eau Claire, Eau Claire, WI 54701, USA
30
Aswathy AL, Anand HS, Chandra SSV. COVID-19 severity detection using machine learning techniques from CT-images. Evolutionary Intelligence 2022; 16:1-9. [PMID: 35765538] [PMCID: PMC9226273] [DOI: 10.1007/s12065-022-00739-6]
Abstract
COVID-19 has spread worldwide, and the World Health Organization was forced to list it as a Public Health Emergency of International Concern. The disease has severely affected many people because it attacks the lungs, causing severe breathing problems and lung infections. Differentiating other lung ailments from COVID-19 infection and determining its severity is challenging, yet doctors can provide vital life-saving services and support only when the severity of a patient's condition is known. This work proposes a two-step approach for detecting COVID-19 infection from lung CT images and determining the severity of the patient's illness. Pre-trained models are used to extract features, and after analysis the features from AlexNet, DenseNet-201, and ResNet-50 are integrated. COVID-19 detection is carried out with an Artificial Neural Network (ANN) model. Once infection is identified, severity detection is performed: image features are combined with clinical data and classified as High, Moderate, or Low with a cubic Support Vector Machine (SVM). By considering three severity levels, high-risk patients can be given more attention. Tested on a publicly available dataset, the method obtained 92.0% accuracy, 96.0% sensitivity, and a 91.44% F1-score for COVID-19 detection, and an overall accuracy of 90.0% for three-class severity detection.
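One plausible reading of the feature-integration step above is simple concatenation of the per-backbone feature vectors. The sketch below uses hypothetical helpers (`l2_normalize`, `fuse_features`) and normalises each vector before fusing so that no single extractor dominates; the actual AlexNet/DenseNet-201/ResNet-50 extraction is not reproduced.

```python
import math

def l2_normalize(vec):
    """Scale a feature vector to unit L2 norm (zero vectors pass through)."""
    norm = math.sqrt(sum(v * v for v in vec))
    return [v / norm for v in vec] if norm else list(vec)

def fuse_features(*feature_vectors):
    """Concatenate per-backbone feature vectors into one fused vector,
    normalising each component vector first."""
    fused = []
    for vec in feature_vectors:
        fused.extend(l2_normalize(vec))
    return fused
```

The fused vector would then be handed to the ANN (for detection) or, joined with clinical data, to the cubic SVM (for severity).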
Affiliation(s)
- A. L. Aswathy, Department of Computer Science, University of Kerala, Trivandrum, Kerala, India
- Hareendran S. Anand, Department of Computer Science and Engineering, Muthoot Institute of Technology and Science, Kochi, Kerala, India
- S. S. Vinod Chandra, Department of Computer Science, University of Kerala, Trivandrum, Kerala, India
31
Das R, Kaur K, Walia E. Feature Generalization for Breast Cancer Detection in Histopathological Images. Interdiscip Sci 2022; 14:566-581. [PMID: 35482216] [DOI: 10.1007/s12539-022-00515-1]
Abstract
Recent years have witnessed benchmark performance from transfer learning with deep architectures in computer-aided diagnosis (CAD) of breast cancer. In this setting, the pre-trained neural network must be fine-tuned on relevant data to extract useful features from the dataset; however, beyond the computational overhead, it suffers from overfitting when features are extracted from smaller datasets. Handcrafted feature extraction techniques, as well as feature extraction using pre-trained deep networks, come to the rescue in this situation and have proved far more efficient and lightweight than deep-architecture-based transfer learning. This research demonstrates the competence of classifying breast cancer images using feature engineering and representation learning, as against the established and contemporary practice of transfer learning. Moreover, it reveals superior feature-learning capacity with feature fusion, in contrast to the conventional belief that representation learning alone best captures unknown feature patterns. Experiments were conducted on two popular breast cancer image datasets, KIMIA Path960 and BreakHis, and image-level accuracy was compared across the above feature extraction techniques. An image-level accuracy of 97.81% is achieved on KIMIA Path960 using individual handcrafted (color histogram) features, while fusing uniform Local Binary Pattern (uLBP) and color histogram features yields the highest accuracy of 99.17% on the same dataset. On BreakHis, color histogram features give the highest classification accuracy of 88.41% for images at 200X magnification. Finally, the results are contrasted with the state of the art, and the proposed fusion-based techniques perform better on many occasions: on BreakHis, the highest accuracies of 87.60% (with the least standard deviation) and 85.77% are recorded for the 200X and 400X magnification factors, respectively, exceeding the state of the art for those magnifications.
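The uniform Local Binary Pattern (uLBP) features the fusion relies on can be sketched directly: each pixel gets an 8-bit code from its 3x3 neighbourhood, and the code counts as "uniform" only when its circular bit string has at most two 0/1 transitions. The helpers below are illustrative, not the paper's implementation (which would use a library such as scikit-image).

```python
def lbp_code(patch):
    """8-bit LBP code of the centre pixel of a 3x3 patch: each of the
    eight neighbours (clockwise from top-left) sets a bit when it is
    greater than or equal to the centre value."""
    c = patch[1][1]
    neighbours = [patch[0][0], patch[0][1], patch[0][2], patch[1][2],
                  patch[2][2], patch[2][1], patch[2][0], patch[1][0]]
    return sum(1 << i for i, n in enumerate(neighbours) if n >= c)

def is_uniform(code, bits=8):
    """'Uniform' LBP patterns have at most two 0/1 transitions when the
    bit string is read circularly; only these get their own histogram bin."""
    s = [(code >> i) & 1 for i in range(bits)]
    return sum(s[i] != s[(i + 1) % bits] for i in range(bits)) <= 2
```

A histogram over the uniform codes (plus one catch-all bin for the rest), concatenated with a colour histogram, gives the fused descriptor the abstract reports.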
Affiliation(s)
- Rik Das, Programme of Information Technology, Xavier Institute of Social Service, Ranchi, 834001, Jharkhand, India
- Kanwalpreet Kaur, Department of Computer Science, Punjabi University, Patiala, India
- Ekta Walia, Department of Medical Imaging, University of Saskatchewan, Saskatoon, Canada
32
Prasad DS, Chanamallu SR, Prasad KS. Optimized deformable convolution network for detection and mitigation of ocular artifacts from EEG signal. Multimedia Tools and Applications 2022; 81:30841-30879. [PMID: 35431612] [PMCID: PMC8989407] [DOI: 10.1007/s11042-022-12874-4]
Abstract
The electroencephalogram (EEG) is the key tool for analyzing brain activity and behavior. EEG signals are affected by artifacts in the recorded electrical activity, which hampers EEG analysis. To extract clean data from EEG signals and improve detection efficiency during recordings, a well-developed model is required. Although various methods have been proposed for artifact removal, research on the problem continues. While several types of artifacts from both the subject and equipment interference heavily contaminate EEG signals, the most common and important type is the ocular artifact. Many applications, such as Brain-Computer Interfaces (BCI), need online, real-time processing of EEG signals, so it is best if artifact removal is performed online. The main intention of this proposal is a new deep-learning-based model for ocular artifact detection and mitigation. In the detection phase, a 5-level Discrete Wavelet Transform (DWT) and Pisarenko harmonic decomposition are used to decompose the signals, and Principal Component Analysis (PCA) and Independent Component Analysis (ICA) are adopted to extract features. With the collected features, an optimized Deformable Convolutional Network (DCN) detects ocular artifacts in the input EEG signal; the DCN is optimized by tuning significant parameters with Distance-Sorted Electric Fish Optimization (DS-EFO). If artifacts are detected, mitigation is performed by applying Empirical Mean Curve Decomposition (EMCD), after which the optimized DCN denoises the signals; the clean signal is finally generated by applying the inverse EMCD. Based on EEG data collected from diverse subjects, the proposed method achieved higher performance than conventional methods, demonstrating better ocular-artifact reduction.
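A multilevel DWT front end of the kind described can be illustrated with the simplest wavelet, Haar; the paper does not state which wavelet family it uses, so the choice here is an assumption, and a real pipeline would likely use a library such as PyWavelets.

```python
import math

def haar_dwt(signal):
    """One level of the Haar DWT: pairwise scaled sums give the
    approximation band, pairwise scaled differences the detail band.
    The signal length must be even."""
    s = 1.0 / math.sqrt(2.0)
    approx = [s * (signal[i] + signal[i + 1]) for i in range(0, len(signal), 2)]
    detail = [s * (signal[i] - signal[i + 1]) for i in range(0, len(signal), 2)]
    return approx, detail

def multilevel_dwt(signal, levels):
    """Repeatedly decompose the approximation band, as in a 5-level
    DWT front end; returns the final approximation plus all detail bands."""
    details, approx = [], list(signal)
    for _ in range(levels):
        approx, d = haar_dwt(approx)
        details.append(d)
    return approx, details
```

Slow ocular drifts concentrate in the low-frequency approximation band, which is why the detection stage inspects the decomposed bands rather than the raw signal.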
Affiliation(s)
- Kodati Satya Prasad, Department of ECE, JNTUK, University College of Engineering, Kakinada, AP, India
33
A Hybrid Deep Learning Approach for COVID-19 Diagnosis via CT and X-ray Medical Images. IOCA 2021. [DOI: 10.3390/ioca2021-10909]
34
Dash S, Chakraborty C, Giri SK, Pani SK. Intelligent computing on time-series data analysis and prediction of COVID-19 pandemics. Pattern Recognit Lett 2021; 151:69-75. [PMID: 34413555] [PMCID: PMC8364174] [DOI: 10.1016/j.patrec.2021.07.027]
Abstract
Covid-19, the disease caused by the novel coronavirus (SARS-CoV-2), is a highly contagious epidemic that originated in Wuhan, Hubei Province, China, in late December 2019; the World Health Organization (WHO) declared Covid-19 a pandemic on 12th March 2020. Researchers and policy makers are designing strategies around the clock to control the pandemic and minimize its impact on human health and the economy. The SARS-CoV-2 virus transmits mostly through respiratory droplets and contaminated surfaces. Securing an appropriate level of safety during the pandemic has been especially problematic for the transportation sector, which has been hit hard by COVID-19. This paper focuses on developing an intelligent computing model for forecasting the outbreak of COVID-19. The Facebook Prophet model predicts values 90 days ahead, including the peak date of confirmed COVID-19 cases, for six of the world's worst-hit countries, including India, and for six high-incidence states of India. The model also identifies five significant changepoints in the growth curve of India's confirmed cases, which indicate the impact of the interventions imposed by the Government of India on the growth rate of the infection. The goodness of fit of the model measures 85% MAPE for all six countries and all six states of India. This computational analysis may throw some light on the planning and management of healthcare systems and infrastructure.
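Prophet's trend changepoints come from a full piecewise-linear model, but the underlying idea of a changepoint in a growth curve can be illustrated with a brute-force two-segment least-squares fit; `best_changepoint` below is a hypothetical helper, not Prophet's algorithm.

```python
def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept for one segment."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    var = sum((x - mx) ** 2 for x in xs)
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / var if var else 0.0
    return slope, my - slope * mx

def segment_error(xs, ys):
    """Sum of squared residuals of the best straight line through (xs, ys)."""
    slope, intercept = fit_line(xs, ys)
    return sum((y - (slope * x + intercept)) ** 2 for x, y in zip(xs, ys))

def best_changepoint(ys, min_seg=2):
    """Index t splitting ys into two straight-line segments with the
    lowest total squared error -- a single-changepoint analogue of the
    trend breaks Prophet reports."""
    xs = list(range(len(ys)))
    best_t, best_err = None, float("inf")
    for t in range(min_seg, len(ys) - min_seg + 1):
        err = segment_error(xs[:t], ys[:t]) + segment_error(xs[t:], ys[t:])
        if err < best_err:
            best_t, best_err = t, err
    return best_t
```

Detecting five changepoints, as the paper does, would repeat this splitting recursively or fit a multi-segment model, which Prophet handles automatically.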
Affiliation(s)
- Sujata Dash, Maharaja Sriram Chandra Bhanja Deo University (erstwhile North Orissa University), Takatpur, Baripada, India
- Sourav K Giri, Maharaja Sriram Chandra Bhanja Deo University (erstwhile North Orissa University), Takatpur, Baripada, India