1. Xiong X, Sun Y, Liu X, Ke W, Lam CT, Chen J, Jiang M, Wang M, Xie H, Tong T, Gao Q, Chen H, Tan T. Distance guided generative adversarial network for explainable medical image classifications. Comput Med Imaging Graph 2024; 118:102444. PMID: 39426341. DOI: 10.1016/j.compmedimag.2024.102444.
Abstract
Despite the potential benefits of data augmentation for mitigating data insufficiency, traditional augmentation methods rely primarily on prior intra-domain knowledge, while advanced generative adversarial networks (GANs) generate inter-domain samples with limited variety. Both therefore contribute little to describing the decision boundary of a binary classifier. In this paper, we propose a distance-guided GAN (DisGAN) that controls the degree of variation of generated samples in the hyperplane space. We instantiate DisGAN in two complementary ways. The first is the vertical distance GAN (VerDisGAN), where inter-domain generation is conditioned on the vertical distance to the hyperplane. The second is the horizontal distance GAN (HorDisGAN), where intra-domain generation is conditioned on the horizontal distance. Furthermore, VerDisGAN can produce class-specific regions by mapping the source images onto the hyperplane. Experimental results show that DisGAN consistently outperforms GAN-based augmentation methods while providing explainable binary classification. The proposed method applies to different classification architectures and has the potential to extend to multi-class classification. The code is available at https://github.com/yXiangXiong/DisGAN.
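As a rough illustration of the distance notion that conditions generation here, the sketch below computes the signed (vertical) distance of feature vectors to a linear decision hyperplane; the linear SVM stand-in, feature dimensionality, and variable names are assumptions, not the authors' implementation.

```python
# Minimal sketch (assumed stand-in): signed distance of samples to a linear
# decision hyperplane w.x + b = 0, the kind of quantity a distance-guided
# generator could be conditioned on. Not the DisGAN training code.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1.0, 1.0, (200, 64)),   # class 0 features (toy)
               rng.normal(+1.0, 1.0, (200, 64))])  # class 1 features (toy)
y = np.array([0] * 200 + [1] * 200)

clf = LinearSVC(C=1.0, max_iter=10000).fit(X, y)
w, b = clf.coef_[0], clf.intercept_[0]

def vertical_distance(features: np.ndarray) -> np.ndarray:
    """Signed perpendicular distance of each feature vector to the hyperplane."""
    return (features @ w + b) / np.linalg.norm(w)

d = vertical_distance(X)
print("mean |distance| per class:", d[y == 0].mean(), d[y == 1].mean())
```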
Affiliation(s)
- Xiangyu Xiong
- Faculty of Applied Sciences, Macao Polytechnic University, 999078, Macao Special Administrative Region of China
- Yue Sun
- Faculty of Applied Sciences, Macao Polytechnic University, 999078, Macao Special Administrative Region of China
- Xiaohong Liu
- John Hopcroft Center (JHC) for Computer Science, Shanghai Jiao Tong University, Shanghai, 200240, China
- Wei Ke
- Faculty of Applied Sciences, Macao Polytechnic University, 999078, Macao Special Administrative Region of China
- Chan-Tong Lam
- Faculty of Applied Sciences, Macao Polytechnic University, 999078, Macao Special Administrative Region of China
- Jiangang Chen
- Shanghai Key Laboratory of Multidimensional Information Processing, School of Communication and Electronic Engineering, East China Normal University, Shanghai, 200241, China; Engineering Research Center of Traditional Chinese Medicine Intelligent Rehabilitation, Ministry of Education, Shanghai, 201203, China
- Mingfeng Jiang
- School of Computer Science and Technology, Zhejiang Sci-Tech University, Hangzhou, 310018, China
- Mingwei Wang
- Department of Cardiology, Affiliated Hospital of Hangzhou Normal University, China; Institute of Cardiovascular Diseases, Hangzhou Normal University, Hangzhou, 310015, China
- Hui Xie
- Department of Radiation Oncology, Affiliated Hospital (Clinical College) of Xiangnan University, Chenzhou, 423000, China
- Tong Tong
- College of Physics and Information Engineering, Fuzhou University, Fuzhou, 350108, China
- Qinquan Gao
- College of Physics and Information Engineering, Fuzhou University, Fuzhou, 350108, China
- Hao Chen
- Department of Mathware, Jiangsu JITRI Sioux Technologies Company, Ltd, Suzhou, 215000, China
- Tao Tan
- Faculty of Applied Sciences, Macao Polytechnic University, 999078, Macao Special Administrative Region of China
2. Gao F, Li B, Chen L, Wei X, Shang Z, Liu C. Ultrasound image super-resolution reconstruction based on semi-supervised CycleGAN. Ultrasonics 2024; 137:107177. PMID: 37832382. DOI: 10.1016/j.ultras.2023.107177.
Abstract
In ultrasonic testing, diffraction artifacts generated around defects increase the challenge of quantitatively characterizing defects. In this paper, we propose a label-enhanced semi-supervised CycleGAN network model, referred to as LESS-CycleGAN, which is a conditional cycle generative adversarial network designed for accurately characterizing defect morphology in ultrasonic testing images. The proposed method introduces paired cross-domain image samples during model training to achieve a defect transformation between the ultrasound image domain and the morphology image domain, thereby eliminating artifacts. Furthermore, the method incorporates a novel authenticity loss function to ensure high-precision defect reconstruction capability. To validate the effectiveness and robustness of the model, we use simulated 2D images of defects and corresponding ultrasonic detection images as training and test sets, and an actual ultrasonic phased array image of a test block as the validation set to evaluate the model's application performance. The experimental results demonstrate that the proposed method is convenient and effective, achieving subwavelength-scale defect reconstruction with good robustness.
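For orientation, the fragment below sketches the cycle-consistency term at the core of any CycleGAN-style translator (ultrasound domain to morphology domain and back); the placeholder generators, weighting factor, and the simple L1 term on paired samples are assumptions standing in for the paper's actual losses.

```python
# Minimal sketch (assumed): cycle-consistency plus a paired L1 term, the kind of
# objective a semi-supervised CycleGAN uses. G_um: ultrasound -> morphology,
# G_mu: morphology -> ultrasound. Generators are placeholders, not LESS-CycleGAN.
import torch
import torch.nn as nn

G_um = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.Conv2d(16, 1, 3, padding=1))
G_mu = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.Conv2d(16, 1, 3, padding=1))
l1 = nn.L1Loss()

ultra = torch.rand(4, 1, 64, 64)   # ultrasound batch (toy)
morph = torch.rand(4, 1, 64, 64)   # paired morphology batch (toy)

cycle_loss = l1(G_mu(G_um(ultra)), ultra) + l1(G_um(G_mu(morph)), morph)  # unpaired cycle term
paired_loss = l1(G_um(ultra), morph)                                      # paired ("authenticity"-style) term
total = cycle_loss + 10.0 * paired_loss
total.backward()
print(float(total))
```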
Affiliation(s)
- Fei Gao
- State Key Laboratory for Manufacturing Systems Engineering, Xi'an Jiaotong University, Xi'an, 710049, China; International Joint Laboratory for Micro/Nano Manufacturing and Measurement Technology, Xi'an Jiaotong University, Xi'an, 710049, China
- Bing Li
- State Key Laboratory for Manufacturing Systems Engineering, Xi'an Jiaotong University, Xi'an, 710049, China; International Joint Laboratory for Micro/Nano Manufacturing and Measurement Technology, Xi'an Jiaotong University, Xi'an, 710049, China
- Lei Chen
- State Key Laboratory for Manufacturing Systems Engineering, Xi'an Jiaotong University, Xi'an, 710049, China; International Joint Laboratory for Micro/Nano Manufacturing and Measurement Technology, Xi'an Jiaotong University, Xi'an, 710049, China
- Xiang Wei
- State Key Laboratory for Manufacturing Systems Engineering, Xi'an Jiaotong University, Xi'an, 710049, China; International Joint Laboratory for Micro/Nano Manufacturing and Measurement Technology, Xi'an Jiaotong University, Xi'an, 710049, China
- Zhongyu Shang
- State Key Laboratory for Manufacturing Systems Engineering, Xi'an Jiaotong University, Xi'an, 710049, China; International Joint Laboratory for Micro/Nano Manufacturing and Measurement Technology, Xi'an Jiaotong University, Xi'an, 710049, China
- Chunman Liu
- State Key Laboratory for Manufacturing Systems Engineering, Xi'an Jiaotong University, Xi'an, 710049, China; International Joint Laboratory for Micro/Nano Manufacturing and Measurement Technology, Xi'an Jiaotong University, Xi'an, 710049, China
3. Hussein AM, Sharifai AG, Alia OM, Abualigah L, Almotairi KH, Abujayyab SKM, Gandomi AH. Auto-detection of the coronavirus disease by using deep convolutional neural networks and X-ray photographs. Sci Rep 2024; 14:534. PMID: 38177156. PMCID: PMC10766625. DOI: 10.1038/s41598-023-47038-3.
Abstract
The most widely used method for detecting Coronavirus Disease 2019 (COVID-19) is real-time polymerase chain reaction. However, this method has several drawbacks, including high cost, lengthy turnaround time for results, and the potential for false-negative results due to limited sensitivity. To address these issues, additional technologies such as computed tomography (CT) or X-rays have been employed for diagnosing the disease. Chest X-rays are more commonly used than CT scans due to the widespread availability of X-ray machines, lower ionizing radiation exposure, and lower equipment cost. COVID-19 presents certain radiological biomarkers that can be observed in chest X-rays, so radiologists must manually search for these biomarkers. However, this process is time-consuming and prone to errors. Therefore, there is a critical need for an automated system for evaluating chest X-rays, and deep learning techniques can be employed to expedite this process. In this study, a deep learning-based method called Custom Convolutional Neural Network (Custom-CNN) is proposed for identifying COVID-19 infection in chest X-rays. The Custom-CNN model consists of eight weighted layers and utilizes strategies such as dropout and batch normalization to enhance performance and reduce overfitting. The proposed approach classifies COVID-19, normal, and pneumonia samples and achieved a classification accuracy of 98.19%.
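To make the "eight weighted layers with dropout and batch normalization" concrete, the sketch below shows one plausible layout for a three-class chest X-ray classifier; the layer sizes, kernel counts, and input resolution are assumptions, not the published Custom-CNN configuration.

```python
# Minimal sketch (assumed layout): a small CNN with batch normalization and
# dropout for 3-class chest X-ray classification (COVID-19 / normal / pneumonia).
# "Eight weighted layers" here = 6 conv + 2 fully connected; details are illustrative.
import torch
import torch.nn as nn

class SmallCovidCNN(nn.Module):
    def __init__(self, num_classes: int = 3):
        super().__init__()
        def block(cin, cout):
            return nn.Sequential(
                nn.Conv2d(cin, cout, kernel_size=3, padding=1),
                nn.BatchNorm2d(cout),
                nn.ReLU(inplace=True),
                nn.MaxPool2d(2),
            )
        self.features = nn.Sequential(
            block(1, 32), block(32, 64), block(64, 128),
            block(128, 128), block(128, 256), block(256, 256),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Dropout(0.5),
            nn.Linear(256 * 3 * 3, 128),   # assumes 224x224 grayscale input
            nn.ReLU(inplace=True),
            nn.Dropout(0.5),
            nn.Linear(128, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = SmallCovidCNN()
print(model(torch.rand(2, 1, 224, 224)).shape)  # torch.Size([2, 3])
```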
Affiliation(s)
- Ahmad MohdAziz Hussein
- Department of Computer Science, Faculty of Information Technology, Middle East University, Amman, Jordan.
- Abdulrauf Garba Sharifai
- Department of Computer Sciences, Yusuf Maitama Sule University, Kofar Nassarawa, Kano, 700222, Nigeria
- Osama Moh'd Alia
- Department of Computer Science, Faculty of Computers and Information Technology, University of Tabuk, 71491, Tabuk, Saudi Arabia
- Laith Abualigah
- Computer Science Department, Prince Hussein Bin Abdullah Faculty for Information Technology, Al Al-Bayt University, Mafraq, 25113, Jordan
- Department of Electrical and Computer Engineering, Lebanese American University, Byblos, 13-5053, Lebanon
- Hourani Center for Applied Scientific Research, Al-Ahliyya Amman University, Amman, 19328, Jordan
- Applied Science Research Center, Applied Science Private University, Amman, 11931, Jordan
- School of Engineering and Technology, Sunway University Malaysia, 27500, Petaling Jaya, Malaysia
- School of Computer Sciences, Universiti Sains Malaysia, 11800, Pulau Pinang, Malaysia
- Khaled H Almotairi
- Computer Engineering Department, Computer and Information Systems College, Umm Al-Qura University, 21955, Makkah, Saudi Arabia
- Amir H Gandomi
- Faculty of Engineering and Information Technology, University of Technology Sydney, Ultimo, NSW, 2007, Australia
- University Research and Innovation Center (EKIK), Óbuda University, Budapest, 1034, Hungary
4. Valbuena Rubio S, García-Ordás MT, García-Olalla Olivera O, Alaiz-Moretón H, González-Alonso MI, Benítez-Andrades JA. Survival and grade of the glioma prediction using transfer learning. PeerJ Comput Sci 2023; 9:e1723. PMID: 38192446. PMCID: PMC10773899. DOI: 10.7717/peerj-cs.1723.
Abstract
Glioblastoma is a highly malignant brain tumor with a life expectancy of only 3-6 months without treatment. Detecting it and accurately predicting its survival and grade are therefore crucial. This study introduces a novel approach using transfer learning techniques. Various pre-trained networks, including EfficientNet, ResNet, VGG16, and Inception, were tested through exhaustive optimization to identify the most suitable architecture. Transfer learning was applied to fine-tune these models on a glioblastoma image dataset, aiming to achieve two objectives: survival prediction and tumor grade prediction. The experimental results show 65% accuracy in survival prediction, classifying patients into short, medium, or long survival categories. Additionally, tumor grade prediction achieved an accuracy of 97%, accurately differentiating low-grade gliomas (LGG) and high-grade gliomas (HGG). The success of the approach is attributed to the effectiveness of transfer learning, which surpasses current state-of-the-art methods. In conclusion, this study presents a promising method for predicting the survival and grade of glioblastoma. Transfer learning demonstrates its potential in enhancing prediction models, particularly in scenarios where large datasets are unavailable. These findings hold promise for improving diagnostic and treatment approaches for glioblastoma patients.
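A rough sketch of the transfer-learning setup described above: a pretrained backbone is reused with two separate heads, one for three-class survival and one for binary LGG/HGG grade. The ResNet-50 backbone, head sizes, and joint training step are assumptions rather than the authors' exact pipeline.

```python
# Minimal sketch (assumed): transfer learning with one pretrained backbone and
# two task heads (survival: short/medium/long, grade: LGG vs HGG).
import torch
import torch.nn as nn
from torchvision import models

backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
in_features = backbone.fc.in_features
backbone.fc = nn.Identity()            # reuse ImageNet features, drop the old head

survival_head = nn.Linear(in_features, 3)   # short / medium / long survival
grade_head = nn.Linear(in_features, 2)      # LGG / HGG

params = list(backbone.parameters()) + list(survival_head.parameters()) + list(grade_head.parameters())
optimizer = torch.optim.Adam(params, lr=1e-4)
criterion = nn.CrossEntropyLoss()

# One illustrative fine-tuning step on a toy batch of MRI slices.
images = torch.rand(4, 3, 224, 224)
survival_labels = torch.tensor([0, 1, 2, 1])
grade_labels = torch.tensor([0, 1, 1, 0])

features = backbone(images)
loss = criterion(survival_head(features), survival_labels) + criterion(grade_head(features), grade_labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
print(float(loss))
```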
Affiliation(s)
- María Teresa García-Ordás
- SECOMUCI Research Group, Escuela de Ingenierías Industrial e Informática, Universidad de León, León, Spain
- Héctor Alaiz-Moretón
- SECOMUCI Research Group, Escuela de Ingenierías Industrial e Informática, Universidad de León, León, Spain
5. Chatterjee A, Prinz A, Riegler MA, Das J. A systematic review and knowledge mapping on ICT-based remote and automatic COVID-19 patient monitoring and care. BMC Health Serv Res 2023; 23:1047. PMID: 37777722. PMCID: PMC10543863. DOI: 10.1186/s12913-023-10047-z.
Abstract
BACKGROUND: e-Health has played a crucial role in primary health care during the COVID-19 pandemic. e-Health is the cost-effective and secure use of Information and Communication Technologies (ICTs) to support health and health-related fields. Various stakeholders worldwide use ICTs, including individuals, non-profit organizations, health practitioners, and governments. During the COVID-19 pandemic, ICT has improved the quality of healthcare, the exchange of information, the training of healthcare professionals and patients, and the relationship between patients and healthcare providers. This study systematically reviews the literature on ICT-based automatic and remote monitoring methods, as well as the different ICT techniques used in the care of COVID-19-infected patients. OBJECTIVE: The purpose of this systematic literature review is to identify the e-Health methods, associated ICTs, method implementation strategies, information collection techniques, and the advantages and disadvantages of remote and automatic patient monitoring and care in the COVID-19 pandemic. METHODS: The search included primary studies published between January 2020 and June 2022 in scientific and electronic databases, such as EBSCOhost, Scopus, ACM, Nature, SpringerLink, IEEE Xplore, MEDLINE, Google Scholar, JMIR, Web of Science, Science Direct, and PubMed. The findings from the included publications are presented and elaborated according to the identified research questions. The review was conducted using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) framework, and the review process was supported by the Rayyan tool and the Scale for the Assessment of Narrative Review Articles (SANRA). Among the eligibility criteria were methodological rigor, conceptual clarity, and useful implementation of ICTs in e-Health for remote and automatic monitoring of COVID-19 patients. RESULTS: Our initial search identified 664 potential studies; 102 were assessed for eligibility in the pre-final stage, and 65 articles were used in the final review after applying the inclusion and exclusion criteria. The review identified the following e-Health methods: telemedicine, mobile health (mHealth), and telehealth. The associated ICTs are wearable body sensors, artificial intelligence (AI) algorithms, the Internet of Things or Internet of Medical Things (IoT or IoMT), biometric monitoring technologies (BioMeTs), and Bluetooth-enabled (BLE) home health monitoring devices. Spatial or positional data and personal health and wellness data, including vital signs, symptoms, biomedical images and signals, and lifestyle data, are examples of information managed by ICTs. Different AI and IoT methods have opened new possibilities for automatic and remote patient monitoring, each with associated advantages and weaknesses. Our findings are represented in a structured manner using a semantic knowledge graph (e.g., an ontology model). CONCLUSIONS: This review discusses the various e-Health methods, related remote monitoring technologies, approaches, information categories, and the adoption of ICT tools for automatic remote patient monitoring (RPM), along with the advantages and limitations of remote monitoring technologies in the COVID-19 case. The use of e-Health during the COVID-19 pandemic illustrates the constraints and possibilities of using ICTs.
ICTs are not merely an external tool to achieve definite remote and automatic health monitoring goals; instead, they are embedded in contexts. Therefore, the importance of the mutual design process between ICT and society during the global health crisis has been observed from a social informatics perspective. A global health crisis can be seen as an information crisis (e.g., insufficient, unreliable, or inaccessible information); this review shows the influence of ICTs on COVID-19 patients' health monitoring and the related information collection techniques.
Affiliation(s)
- Ayan Chatterjee
- Department of Information and Communication Technology, Centre for e-Health, University of Agder, Grimstad, Norway.
- Department of Holistic Systems, Simula Metropolitan Center for Digital Engineering, Oslo, Norway.
- Andreas Prinz
- Department of Information and Communication Technology, Centre for e-Health, University of Agder, Grimstad, Norway
- Michael A Riegler
- Department of Holistic Systems, Simula Metropolitan Center for Digital Engineering, Oslo, Norway
- Jishnu Das
- Department of Information Systems, Centre for e-Health, University of Agder, Kristiansand, Norway
6. Ghassemi N, Shoeibi A, Khodatars M, Heras J, Rahimi A, Zare A, Zhang YD, Pachori RB, Gorriz JM. Automatic diagnosis of COVID-19 from CT images using CycleGAN and transfer learning. Appl Soft Comput 2023; 144:110511. PMID: 37346824. PMCID: PMC10263244. DOI: 10.1016/j.asoc.2023.110511.
Abstract
The outbreak of the coronavirus disease (COVID-19) has changed the lives of most people on Earth. Given the high prevalence of this disease, correctly diagnosing it in order to quarantine patients is of the utmost importance in fighting this pandemic. Among the various modalities used for diagnosis, medical imaging, especially computed tomography (CT) imaging, has been the focus of many previous studies due to its accuracy and availability. In addition, automation of diagnostic methods can be of great help to physicians. In this paper, a method based on pre-trained deep neural networks is presented, which, by taking advantage of a cycle-consistent generative adversarial network (CycleGAN) model for data augmentation, reaches state-of-the-art performance for the task at hand, i.e., 99.60% accuracy. In addition, to evaluate the method, a dataset containing 3163 images from 189 patients was collected and labeled by physicians. Unlike prior datasets, the normal images were collected from people suspected of having COVID-19 rather than from patients with other diseases, and the database is made publicly available. Moreover, the method's reliability is further evaluated with calibration metrics, and its decisions are interpreted with Grad-CAM, which also highlights suspicious regions as an additional output, making the decisions trustworthy and explainable.
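Since the abstract reports Grad-CAM-based interpretation, the fragment below sketches a generic Grad-CAM computation on a torchvision backbone; the ResNet-18 model, target layer, and hook-based implementation are assumptions, not the authors' code.

```python
# Minimal Grad-CAM sketch (assumed implementation): weight the last conv feature
# maps by the spatially averaged gradients of the predicted class score.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
target_layer = model.layer4[-1]

activations, gradients = {}, {}
def fwd_hook(_m, _i, out): activations["a"] = out
def bwd_hook(_m, _gi, gout): gradients["g"] = gout[0]
target_layer.register_forward_hook(fwd_hook)
target_layer.register_full_backward_hook(bwd_hook)

image = torch.rand(1, 3, 224, 224)          # stand-in for a preprocessed CT slice
scores = model(image)
scores[0, scores.argmax()].backward()       # gradient of the predicted class score

weights = gradients["g"].mean(dim=(2, 3), keepdim=True)          # GAP over gradients
cam = F.relu((weights * activations["a"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=image.shape[-2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)         # normalize to [0, 1]
print(cam.shape)  # heat map over the input, highlighting suspicious regions
```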
Affiliation(s)
- Navid Ghassemi
- Faculty of Electrical Engineering, FPGA Lab, K. N. Toosi University of Technology, Tehran, Iran
- Computer Engineering department, Ferdowsi University of Mashhad, Mashhad, Iran
- Afshin Shoeibi
- Faculty of Electrical Engineering, FPGA Lab, K. N. Toosi University of Technology, Tehran, Iran
- Computer Engineering Department, Ferdowsi University of Mashhad, Mashhad, Iran
- Marjane Khodatars
- Department of Medical Engineering, Mashhad Branch, Islamic Azad University, Mashhad, Iran
- Jonathan Heras
- Department of Mathematics and Computer Science, University of La Rioja, La Rioja, Spain
- Alireza Rahimi
- Computer Engineering Department, Ferdowsi University of Mashhad, Mashhad, Iran
- Assef Zare
- Faculty of Electrical Engineering, Gonabad Branch, Islamic Azad University, Gonabad, Iran
- Yu-Dong Zhang
- School of Informatics, University of Leicester, Leicester, LE1 7RH, UK
- Ram Bilas Pachori
- Department of Electrical Engineering, Indian Institute of Technology Indore, Indore 453552, India
- J Manuel Gorriz
- Department of Signal Theory, Networking and Communications, Universidad de Granada, Spain
- Department of Psychiatry, University of Cambridge, UK
7. Dogan S, Baygin M, Tasci B, Loh HW, Barua PD, Tuncer T, Tan RS, Acharya UR. Primate brain pattern-based automated Alzheimer's disease detection model using EEG signals. Cogn Neurodyn 2023; 17:647-659. PMID: 37265658. PMCID: PMC10229526. DOI: 10.1007/s11571-022-09859-2.
Abstract
Electroencephalography (EEG) may detect early changes in Alzheimer's disease (AD), a debilitating progressive neurodegenerative disease. We have developed an automated AD detection model using a novel directed graph for local texture feature extraction from EEG signals. The proposed graph was created from a topological map of the macroscopic connectome, i.e., the neuronal pathways linking anatomo-functional brain segments involved in visual object recognition and motor response in the primate brain. This primate brain pattern (PBP)-based model was tested on a public AD EEG signal dataset comprising 16-channel EEG recordings of 12 AD patients and 11 healthy controls. While PBP could generate 448 low-level features per one-dimensional EEG signal, combining it with the tunable q-factor wavelet transform created a multilevel feature extractor (mimicking deep models) that generates 8,512 (= 448 × 19) features per input signal. Iterative neighborhood component analysis was used to choose the most discriminative features (the number of optimal features varied among the individual EEG channels) to feed to a weighted k-nearest neighbor (KNN) classifier for binary classification into AD vs. healthy, using both leave-one-subject-out (LOSO) and tenfold cross-validations. Iterative majority voting was used to compute subject-level results from the individual channel classification outputs. Both channel-wise and subject-level results demonstrated exemplary performance. The model attained 100% and 92.01% accuracy for AD vs. healthy classification using the KNN classifier with tenfold and LOSO cross-validations, respectively. Our multilevel PBP-based model extracted discriminative features from EEG signals and paves the way for further development of models inspired by the brain connectome.
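The classification stage described above (feature selection followed by a distance-weighted k-nearest-neighbor classifier under leave-one-subject-out validation) can be approximated generically as below; the mutual-information ranking stands in for the paper's iterative neighborhood component analysis, and the toy feature matrix, subject grouping, and parameter choices are assumptions.

```python
# Minimal sketch (assumed): select discriminative features, then classify with a
# distance-weighted kNN under leave-one-subject-out (LOSO) cross-validation.
# Mutual-information ranking is a generic stand-in for iterative NCA.
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
X = rng.normal(size=(230, 448))            # toy: 230 EEG epochs x 448 PBP-style features
y = rng.integers(0, 2, size=230)           # AD vs healthy labels (toy)
subjects = rng.integers(0, 23, size=230)   # 23 subjects; epochs from one subject stay together

pipeline = make_pipeline(
    SelectKBest(mutual_info_classif, k=64),
    KNeighborsClassifier(n_neighbors=5, weights="distance"),
)

accuracies = []
for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups=subjects):
    pipeline.fit(X[train_idx], y[train_idx])
    accuracies.append(pipeline.score(X[test_idx], y[test_idx]))
print("LOSO accuracy:", np.mean(accuracies))
```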
Affiliation(s)
- Sengul Dogan
- Department of Digital Forensics Engineering, College of Technology, Firat University, Elazig, Turkey
- Mehmet Baygin
- Department of Computer Engineering, College of Engineering, Ardahan University, Ardahan, Turkey
- Burak Tasci
- Vocational School of Technical Sciences, Firat University, Elazig, 23119, Turkey
- Hui Wen Loh
- School of Science and Technology, Singapore University of Social Sciences, 463 Clementi Road, Singapore, 599494, Singapore
- Prabal D. Barua
- School of Business (Information System), University of Southern Queensland, Toowoomba, QLD 4350, Australia
- Faculty of Engineering and Information Technology, University of Technology Sydney, Sydney, NSW 2007, Australia
- Turker Tuncer
- Department of Digital Forensics Engineering, College of Technology, Firat University, Elazig, Turkey
- Ru-San Tan
- Department of Cardiology, National Heart Centre Singapore, Singapore 169609, Singapore
- Duke-NUS Medical School, Singapore 169857, Singapore
- U. Rajendra Acharya
- Department of Electronics and Computer Engineering, Ngee Ann Polytechnic, Singapore, 599489, Singapore
- Department of Biomedical Engineering, School of Science and Technology, SUSS University, Singapore, Singapore
- Department of Biomedical Informatics and Medical Engineering, Asia University, Taichung, Taiwan
8. Mozaffari J, Amirkhani A, Shokouhi SB. A survey on deep learning models for detection of COVID-19. Neural Comput Appl 2023; 35:1-29. PMID: 37362568. PMCID: PMC10224665. DOI: 10.1007/s00521-023-08683-x.
Abstract
The spread of COVID-19 started back in 2019, and so far more than 4 million people around the world have lost their lives to this deadly virus and its variants. In view of the high transmissibility of the coronavirus, which has turned this disease into a global pandemic, artificial intelligence can be employed as an effective tool for earlier detection and treatment of this illness. In this review paper, we evaluate the performance of deep learning models in processing the X-ray and CT-scan images of COVID-19 patients' lungs and describe the changes made to these models to enhance their COVID-19 detection accuracy. To this end, we introduce the well-known deep learning models such as VGGNet, GoogleNet, and ResNet and, after reviewing the research works in which these models have been used for the detection of COVID-19, we compare the performance of newer models such as DenseNet, CapsNet, MobileNet, and EfficientNet. We then present the deep learning techniques of GANs, transfer learning, and data augmentation and examine the statistics of their use. We also describe the datasets introduced since the onset of COVID-19, which contain lung images of COVID-19 patients, healthy individuals, and patients with non-COVID pulmonary diseases. Lastly, we elaborate on the existing challenges in the use of artificial intelligence for COVID-19 detection and the prospective trends of using this method in similar situations and conditions. Supplementary information: the online version contains supplementary material available at 10.1007/s00521-023-08683-x.
Affiliation(s)
- Javad Mozaffari
- School of Electrical Engineering, Iran University of Science and Technology, Tehran, 16846-13114 Iran
- Abdollah Amirkhani
- School of Automotive Engineering, Iran University of Science and Technology, Tehran, 16846-13114, Iran
- Shahriar B. Shokouhi
- School of Electrical Engineering, Iran University of Science and Technology, Tehran, 16846-13114, Iran
9. Soundrapandiyan R, Naidu H, Karuppiah M, Maheswari M, Poonia RC. AI-based wavelet and stacked deep learning architecture for detecting coronavirus (COVID-19) from chest X-ray images. Comput Electr Eng 2023; 108:108711. PMID: 37065503. PMCID: PMC10086108. DOI: 10.1016/j.compeleceng.2023.108711.
Abstract
A novel coronavirus disease (COVID-19), caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), was identified in Wuhan city, Hubei, China, in November 2019. The disease had already infected more than 681.5 million people as of March 13, 2023. Hence, early detection and diagnosis of COVID-19 are essential. For this purpose, radiologists use medical images such as X-ray and computed tomography (CT) images for the diagnosis of COVID-19. It is very difficult to help radiologists perform automatic diagnosis using traditional image processing methods. Therefore, a novel artificial intelligence (AI)-based deep learning model to detect COVID-19 from chest X-ray images is proposed. The proposed work uses a wavelet and stacked deep learning architecture (ResNet50, VGG19, Xception, and DarkNet19), named WavStaCovNet-19, to detect COVID-19 from chest X-ray images automatically. The proposed work has been tested on two publicly available datasets and achieved accuracies of 94.24% and 96.10% on 4 classes and 3 classes, respectively. From the experimental results, we believe that the proposed work can be useful in the healthcare domain to detect COVID-19 with less time, lower cost, and higher accuracy.
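To illustrate the "wavelet plus stacked pretrained networks" idea, the sketch below applies a 2-D discrete wavelet transform to a chest X-ray and averages the softmax outputs of two pretrained backbones; the db2 wavelet, backbone choice, and simple averaging are assumptions, not the WavStaCovNet-19 design.

```python
# Minimal sketch (assumed): wavelet-decompose an X-ray, then combine predictions
# from a stack of pretrained CNN backbones by averaging their softmax outputs.
import numpy as np
import pywt
import torch
from torchvision import models

xray = np.random.rand(224, 224).astype(np.float32)         # toy grayscale X-ray
cA, (cH, cV, cD) = pywt.dwt2(xray, "db2")                   # single-level 2-D DWT
approx = torch.tensor(cA, dtype=torch.float32).unsqueeze(0).unsqueeze(0)  # 1 x 1 x H x W
approx = torch.nn.functional.interpolate(approx, size=(224, 224), mode="bilinear", align_corners=False)
approx = approx.repeat(1, 3, 1, 1)                          # pretrained nets expect 3 channels

backbones = [
    models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval(),
    models.vgg19(weights=models.VGG19_Weights.DEFAULT).eval(),
]
with torch.no_grad():
    probs = torch.stack([torch.softmax(net(approx), dim=1) for net in backbones]).mean(dim=0)
print(probs.shape)  # averaged class probabilities from the stacked backbones
```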
Affiliation(s)
- Rajkumar Soundrapandiyan
- School of Computer Science and Engineering, Vellore Institute of Technology, Vellore 632014, India
- Marimuthu Karuppiah
- School of Computer Science and Engineering & Information Science, Presidency University, Bengaluru, Karnataka 560064, India
- M Maheswari
- Department of Computer Science and Engineering, Sathyabama Institute of Science and Technology, Chennai 600119, India
- Ramesh Chandra Poonia
- Department of Computer Science, CHRIST (Deemed to be University), Bengaluru, Karnataka 560029, India
10. Lal KN. A lung sound recognition model to diagnoses the respiratory diseases by using transfer learning. Multimed Tools Appl 2023; 82:1-17. PMID: 37362727. PMCID: PMC10050810. DOI: 10.1007/s11042-023-14727-0.
Abstract
Respiratory disease is one of the leading causes of death in the world. Advances in artificial intelligence make it possible to move beyond misdiagnosis and the treatment of respiratory disease symptoms rather than their root cause. Traditional convolutional neural networks cannot extract the temporal features of lung sounds. To address this problem, a lung sound recognition algorithm based on VGGish-stacked BiGRU is proposed, which combines the VGGish network with a stacked bidirectional gated recurrent unit (BiGRU) neural network. The pre-trained VGGish model is used as a feature extractor for transfer learning: the target model is built with the same structure as the source VGGish model, and its parameters are transferred from the source to the target. A multi-layer BiGRU stack is then used to enhance the extracted features, while the transferred VGGish parameters are frozen during fine-tuning, which further improves the model. Experimental results show that the proposed algorithm improves the recognition accuracy of lung sounds and of the corresponding respiratory diseases.
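The stacked BiGRU classifier over a sequence of audio embeddings can be sketched as below; the 128-dimensional embedding size (what VGGish produces per frame), the two-layer stack, and the number of disease classes are assumptions, and a real pipeline would plug the pre-trained VGGish extractor in where the random embeddings appear.

```python
# Minimal sketch (assumed): a stacked bidirectional GRU classifier over a sequence
# of 128-dimensional audio embeddings (the size VGGish produces per ~1 s frame).
import torch
import torch.nn as nn

class StackedBiGRUClassifier(nn.Module):
    def __init__(self, embed_dim=128, hidden=64, layers=2, num_classes=6):
        super().__init__()
        self.bigru = nn.GRU(embed_dim, hidden, num_layers=layers,
                            batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, num_classes)

    def forward(self, x):               # x: batch x time x embed_dim
        out, _ = self.bigru(x)
        return self.head(out[:, -1])    # classify from the last time step

embeddings = torch.rand(8, 10, 128)     # stand-in for VGGish embeddings of 10 frames
model = StackedBiGRUClassifier()
print(model(embeddings).shape)          # torch.Size([8, 6])
```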
Affiliation(s)
- Kumari Nidhi Lal
- Department of Computer Science Engineering, Visvesvaraya National Institute of Technology (VNIT Nagpur), Nagpur, Maharashtra, India
11. Gupta K, Bajaj V. Deep learning models-based CT-scan image classification for automated screening of COVID-19. Biomed Signal Process Control 2023; 80:104268. PMID: 36267466. PMCID: PMC9556167. DOI: 10.1016/j.bspc.2022.104268.
Abstract
COVID-19 is the most transmissible disease, caused by the SARS-CoV-2 virus, which severely infects the lungs and the upper respiratory tract of the human body. This virus has spread widely, severely affecting the lives and wellness of millions of people worldwide. Early diagnosis, timely treatment, and proper confinement of infected patients are some possible ways to control the spread of the coronavirus. Computed tomography (CT) scanning has proven useful in diagnosing several respiratory lung problems, including COVID-19 infections. Automated detection of COVID-19 using chest CT-scan images may reduce the clinician's load and save the lives of thousands of people. This study proposes a robust framework for the automated screening of COVID-19 using chest CT-scan images and deep learning-based techniques. In this work, a publicly accessible CT-scan image dataset (containing 1252 COVID-19 and 1230 non-COVID chest CT images), two pre-trained deep learning models (DLMs), namely MobileNetV2 and DarkNet19, and a newly designed lightweight DLM are utilized for the automated screening of COVID-19. A repeated ten-fold holdout validation method is used for the training, validation, and testing of the DLMs. The highest classification accuracy of 98.91% is achieved using transfer-learned DarkNet19. The proposed framework is ready to be tested with more CT images. Simulation results with the publicly available COVID-19 CT-scan image dataset are included to show the effectiveness of the presented study.
12. Kaya Y, Gürsoy E. A MobileNet-based CNN model with a novel fine-tuning mechanism for COVID-19 infection detection. Soft Comput 2023; 27:5521-5535. PMID: 36618761. PMCID: PMC9812349. DOI: 10.1007/s00500-022-07798-y.
Abstract
COVID-19 is a virus that causes upper respiratory tract and lung infections. The number of cases and deaths increased daily during the pandemic. Since it is vital to diagnose such a disease in a timely manner, researchers have focused on computer-aided diagnosis systems. Chest X-rays have helped monitor various lung diseases, including COVID-19. In this study, we propose a deep transfer learning approach with novel fine-tuning mechanisms to classify COVID-19 from chest X-ray images. We present one classical and two new fine-tuning mechanisms to increase the model's performance. Two publicly available databases were combined and used for the study, comprising 3616 COVID-19, 1576 normal (healthy), and 4265 pneumonia X-ray images. The models achieved average accuracy rates of 95.62%, 96.10%, and 97.61%, respectively, for the 3-class case with fivefold cross-validation. Numerical results show that the third model reduced the total fine-tuning operations by 81.92% and achieved better results. The proposed approach is quite efficient compared with other state-of-the-art methods of detecting COVID-19.
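One common way to cut fine-tuning cost, freezing most of a pretrained backbone and updating only the last blocks plus a new classifier, is sketched below; the MobileNetV2 backbone, the fraction of frozen layers, and the three-class head are assumptions and do not reproduce the paper's specific fine-tuning mechanisms.

```python
# Minimal sketch (assumed): partial fine-tuning of a pretrained MobileNetV2 in
# which early feature blocks are frozen and only the last blocks and a new
# 3-class head (COVID-19 / normal / pneumonia) remain trainable.
import torch.nn as nn
from torchvision import models

model = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.DEFAULT)

for param in model.features[:-3].parameters():   # freeze all but the last 3 feature blocks
    param.requires_grad = False

model.classifier[1] = nn.Linear(model.last_channel, 3)   # replace the 1000-class head

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"trainable parameters: {trainable}/{total} ({100 * trainable / total:.1f}%)")
```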
Affiliation(s)
- Yasin Kaya
- Department of Computer Engineering, Adana Alparslan Turkes Science and Technology University, Adana, Turkey
- Ercan Gürsoy
- Department of Computer Engineering, Adana Alparslan Turkes Science and Technology University, Adana, Turkey
13. Morís DI, de Moura J, Novo J, Ortega M. Unsupervised contrastive unpaired image generation approach for improving tuberculosis screening using chest X-ray images. Pattern Recognit Lett 2022. DOI: 10.1016/j.patrec.2022.10.026.
14. Ukwuoma CC, Qin Z, Agbesi VK, Ejiyi CJ, Bamisile O, Chikwendu IA, Tienin BW, Hossin MA. LCSB-inception: Reliable and effective light-chroma separated branches for Covid-19 detection from chest X-ray images. Comput Biol Med 2022; 150:106195. PMID: 37859288. PMCID: PMC9561436. DOI: 10.1016/j.compbiomed.2022.106195.
Abstract
According to the World Health Organization, an estimated more than five million infections and 355,000 deaths have been recorded worldwide since the emergence of the coronavirus disease (COVID-19). Various researchers have developed interesting and effective deep learning frameworks to tackle this disease. However, poor feature extraction from chest X-ray images and the high computational cost of the available models make an accurate and fast COVID-19 detection framework difficult to achieve. Thus, the major purpose of this study is to offer an approach for extracting COVID-19 features from chest X-rays that is both accurate and less computationally expensive than earlier research. To achieve this goal, we explored the Inception V3 deep artificial neural network. This study proposes LCSB-Inception: a two-path (L and AB channel) Inception V3 network along the first three convolutional layers. The RGB input image is first transformed to CIE LAB coordinates (the L channel is aimed at learning the textural and edge features of the chest X-ray, and the AB channels are aimed at learning its color variations). The filters are split 50%L-50%AB between the achromatic L branch and the chromatic AB branch. This method saves between one-third and one-half of the parameters in the divided branches. We further introduce global second-order pooling at the last two convolutional blocks for more robust image feature extraction compared with conventional max-pooling. The detection accuracy of LCSB-Inception is further improved by employing the Contrast Limited Adaptive Histogram Equalization (CLAHE) image enhancement technique on the input images before feeding them to the network. The proposed LCSB-Inception network is experimented with using two loss functions (categorical smooth loss and categorical cross-entropy) and two learning rates, while Accuracy, Precision, Sensitivity, Specificity, F1-Score, and AUC Score are used for evaluation on the chestX-ray-15k (Data_1) and COVID-19 Radiography (Data_2) datasets. The proposed models produced an acceptable outcome, with accuracies of 0.97867 (Data_1) and 0.98199 (Data_2) according to the experimental findings. In terms of COVID-19 identification, the suggested models outperform conventional deep learning models and other state-of-the-art techniques presented in the literature.
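The colour-space preparation described above (CLAHE enhancement, then separating the achromatic L channel from the chromatic AB channels so two branches can learn texture and colour separately) can be sketched with OpenCV as below; the clip limit, tile size, and toy input are assumptions.

```python
# Minimal sketch (assumed): CLAHE enhancement followed by a CIE LAB split into an
# achromatic L input and a chromatic AB input for two network branches.
import cv2
import numpy as np

bgr = (np.random.rand(224, 224, 3) * 255).astype(np.uint8)  # stand-in for an image read with cv2.imread

lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB)
l_channel, a_channel, b_channel = cv2.split(lab)

clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
l_enhanced = clahe.apply(l_channel)                   # contrast-limited histogram equalization on L

l_input = l_enhanced[..., np.newaxis]                 # texture/edge branch input (H x W x 1)
ab_input = np.stack([a_channel, b_channel], axis=-1)  # colour-variation branch input (H x W x 2)
print(l_input.shape, ab_input.shape)
```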
Affiliation(s)
- Chiagoziem C Ukwuoma
- School of Information and Software Engineering, University of Electronic Science and Technology of China, Sichuan, PR China.
- Zhiguang Qin
- School of Information and Software Engineering, University of Electronic Science and Technology of China, Sichuan, PR China
- Victor Kwaku Agbesi
- School of Computer Science and Engineering, University of Electronic Science and Technology of China, Sichuan, PR China
- Chukwuebuka J Ejiyi
- School of Information and Software Engineering, University of Electronic Science and Technology of China, Sichuan, PR China
- Olusola Bamisile
- Sichuan Industrial Internet Intelligent Monitoring and Application Engineering Technology Research Center, Chengdu University of Technology, Chenghua District, Chengdu, Sichuan, PR China
- Ijeoma A Chikwendu
- School of Information and Communication Engineering, University of Electronic Science and Technology of China, Sichuan, PR China
- Bole W Tienin
- School of Information and Communication Engineering, University of Electronic Science and Technology of China, Sichuan, PR China
- Md Altab Hossin
- School of Innovation and Entrepreneurship, Chengdu University, No. 2025, Chengluo Avenue, 610106, Chengdu, Sichuan, PR China
15. El-Dahshan ESA, Bassiouni MM, Hagag A, Chakrabortty RK, Loh H, Acharya UR. RESCOVIDTCNnet: A residual neural network-based framework for COVID-19 detection using TCN and EWT with chest X-ray images. Expert Syst Appl 2022; 204:117410. PMID: 35502163. PMCID: PMC9045872. DOI: 10.1016/j.eswa.2022.117410.
Abstract
Since the advent of COVID-19, the number of deaths has increased exponentially, boosting the need for research studies that can correctly diagnose the illness at an early stage. Using chest X-rays, this study presents deep learning-based algorithms for classifying patients into COVID-19, healthy control, and pneumonia classes. Data gathering, pre-processing, feature extraction, and classification are the four primary aspects of the approach. The chest X-ray images utilized in this investigation came from various publicly available databases. In the pre-processing stage, the images were filtered to increase image quality and de-noised using the empirical wavelet transform (EWT). Following that, four deep learning models were used to extract features. The first two models, Inception-V3 and ResNet-50, are based on transfer learning. The third model combines ResNet-50 with a temporal convolutional neural network (TCN). The fourth model is our suggested RESCOVIDTCNNet model, which integrates EWT, ResNet-50, and TCN. Finally, an artificial neural network (ANN) and a support vector machine (SVM) were used to classify the data. Using five-fold cross-validation for 3-class classification, our suggested RESCOVIDTCNNet achieved 99.5% accuracy. Our prototype can be utilized to obtain a quick diagnosis in developing nations where radiologists are in short supply.
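The final classification stage, deep features fed to a classical classifier, can be illustrated with ResNet-50 features and an SVM as below; the EWT de-noising and TCN stages are omitted, and the feature layer, kernel, and toy data are assumptions.

```python
# Minimal sketch (assumed): extract ResNet-50 penultimate-layer features from
# chest X-rays and classify them with an SVM (the EWT/TCN stages are omitted).
import torch
import torch.nn as nn
from torchvision import models
from sklearn.svm import SVC

backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
backbone.fc = nn.Identity()                     # 2048-D feature extractor
backbone.eval()

images = torch.rand(12, 3, 224, 224)            # toy stand-in for preprocessed X-rays
labels = [0, 1, 2] * 4                          # COVID-19 / normal / pneumonia (toy)

with torch.no_grad():
    feats = backbone(images).numpy()

svm = SVC(kernel="rbf", C=1.0).fit(feats, labels)
print(svm.predict(feats[:3]))
```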
Affiliation(s)
- El-Sayed A El-Dahshan
- Department of Physics, Faculty of Science, Ain Shams University, Postal Code: 11566, Cairo, Egypt
- Egyptian E-Learning University (EELU), 33 El-messah Street, Eldoki, Postal Code: 11261, El-Giza, Egypt
- Mahmoud M Bassiouni
- Egyptian E-Learning University (EELU), 33 El-messah Street, Eldoki, Postal Code: 11261, El-Giza, Egypt
- Ahmed Hagag
- Department of Scientific Computing, Faculty of Computers and Artificial Intelligence, Benha University, Benha 13518, Egypt
- Ripon K Chakrabortty
- School of Engineering and IT, UNSW Canberra at ADFA, Canberra, ACT 2612, Australia
- Huiwen Loh
- Department of Biomedical Engineering, School of Science and Technology, SUSS University, Singapore
- U Rajendra Acharya
- Department of Biomedical Engineering, School of Science and Technology, SUSS University, Singapore
- Department of Electronics and Computer Engineering, Ngee Ann Polytechnic, 599489, Singapore
- Department of Biomedical Informatics and Medical Engineering, Asia University, Taichung, Taiwan
16. A Novel Method for COVID-19 Detection Based on DCNNs and Hierarchical Structure. Comput Math Methods Med 2022; 2022:2484435. PMID: 36092785. PMCID: PMC9453086. DOI: 10.1155/2022/2484435.
Abstract
The worldwide outbreak of the new coronavirus disease (COVID-19) has been declared a pandemic by the World Health Organization (WHO). It has a devastating impact on daily life, public health, and the global economy. Due to its high infectiousness, it is urgent to screen suspected cases quickly and accurately at an early stage. Chest X-ray images, as a diagnostic basis for COVID-19, have attracted attention in medical engineering. However, due to small differences between lesions and the lack of training data, the accuracy of detection models is insufficient. In this work, a transfer learning strategy is introduced into a hierarchical structure to enhance the high-level features of deep convolutional neural networks. The proposed framework, consisting of asymmetric pretrained DCNNs with attention networks, integrates diverse information into a wider architecture to learn more discriminative and complementary features. Furthermore, a novel cross-entropy loss function with a penalty term reduces misclassification. Extensive experiments were conducted on the COVID-19 dataset. Compared with state-of-the-art methods, the effectiveness and high performance of the proposed method are demonstrated.
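The "cross-entropy loss with a penalty term that reduces misclassification" is not specified in detail here; the sketch below shows one plausible form, standard cross-entropy plus an extra weighted term on misclassified samples, purely as an assumed illustration.

```python
# Minimal sketch (assumed form): cross-entropy plus an additional penalty applied
# only to misclassified samples. The penalty shape and weight are illustrative.
import torch
import torch.nn.functional as F

def penalized_cross_entropy(logits, targets, penalty_weight=0.5):
    ce = F.cross_entropy(logits, targets, reduction="none")
    misclassified = (logits.argmax(dim=1) != targets).float()
    return (ce + penalty_weight * misclassified * ce).mean()

logits = torch.randn(8, 2, requires_grad=True)   # toy COVID / non-COVID logits
targets = torch.randint(0, 2, (8,))
loss = penalized_cross_entropy(logits, targets)
loss.backward()
print(float(loss))
```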
17. Handling class imbalance in COVID-19 chest X-ray images classification: Using SMOTE and weighted loss. Appl Soft Comput 2022; 129:109588. PMID: 36061418. PMCID: PMC9422401. DOI: 10.1016/j.asoc.2022.109588.
18. Triaging Medical Referrals Based on Clinical Prioritisation Criteria Using Machine Learning Techniques. Int J Environ Res Public Health 2022; 19(12):7384. PMID: 35742633. PMCID: PMC9224242. DOI: 10.3390/ijerph19127384.
Abstract
Triaging of medical referrals can be completed using various machine learning techniques, but models trained on historical datasets may not remain relevant as the clinical criteria for triaging are regularly updated and changed. This paper proposes the use of machine learning techniques coupled with the clinical prioritisation criteria (CPC) of Queensland (QLD), Australia, to deliver better triaging of referrals in accordance with CPC updates. The unique feature of the proposed model is its non-reliance on past datasets for model training. Medical Natural Language Processing (NLP) was applied in the proposed approach to process the medical referrals, which are unstructured free text. The proposed multiclass classification approach achieved a micro F1 score of 0.98. The proposed approach can help in processing the two million referrals that the QLD health service receives annually, thereby delivering better and more efficient health services.
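A generic sketch of the kind of text-classification pipeline used for referral triage is shown below, with TF-IDF features, a linear classifier, and a micro-averaged F1 score; the toy referrals, urgency categories, and model choice are assumptions and are unrelated to the Queensland CPC rules themselves.

```python
# Minimal sketch (assumed): TF-IDF + linear classifier for multiclass triage of
# free-text referrals, evaluated with a micro-averaged F1 score.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.metrics import f1_score

referrals = [
    "severe chest pain radiating to left arm, onset 2 hours ago",
    "routine follow-up of stable hypertension",
    "progressive weight loss and rectal bleeding over 3 months",
    "mild seasonal allergic rhinitis, requests repeat prescription",
]
urgency = ["category1", "category3", "category1", "category3"]   # toy CPC-style labels

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression(max_iter=1000))
model.fit(referrals, urgency)

predicted = model.predict(referrals)
print("micro F1:", f1_score(urgency, predicted, average="micro"))
```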
19. Enhancement of Image Classification Using Transfer Learning and GAN-Based Synthetic Data Augmentation. Mathematics 2022; 10(9):1541. DOI: 10.3390/math10091541.
Abstract
Plastic bottle recycling plays a crucial role in preventing environmental degradation. To classify plastic bottles on a conveyor belt, their position and background should be consistent. Manual detection of plastic bottles is time-consuming and prone to human error. Hence, the automatic classification of plastic bottles using deep learning techniques can deliver more accurate results and reduce cost. To achieve good results with a deep learning model, a large volume of training data is needed. We propose a GAN-based model to generate synthetic images similar to the original ones. To improve image synthesis quality with less training time and to decrease the chance of mode collapse, we propose a modified lightweight-GAN model, which consists of a generator and a discriminator with an auto-encoding feature to capture essential parts of the input image and to encourage the generator to produce a wide range of realistic data. A newly designed weighted average ensemble model based on two pre-trained models, InceptionV3 and Xception, then classifies transparent plastic bottles and obtains an improved classification accuracy of 99.06%.
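The weighted-average ensemble of two pretrained classifiers can be sketched as below; since Xception is not shipped with torchvision, ResNet-50 stands in for the second branch, and the 0.6/0.4 weights and two-class head are illustrative assumptions.

```python
# Minimal sketch (assumed): weighted-average ensemble of two pretrained
# classifiers. ResNet-50 stands in for Xception, which torchvision does not include.
import torch
import torch.nn as nn
from torchvision import models

num_classes = 2   # e.g., transparent vs non-transparent bottle (assumed)

m1 = models.inception_v3(weights=models.Inception_V3_Weights.DEFAULT)
m1.fc = nn.Linear(m1.fc.in_features, num_classes)
m2 = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
m2.fc = nn.Linear(m2.fc.in_features, num_classes)
m1.eval(); m2.eval()

weights = [0.6, 0.4]                      # per-model ensemble weights (assumed)
batch = torch.rand(4, 3, 299, 299)        # Inception V3 expects 299x299 inputs

with torch.no_grad():
    p1 = torch.softmax(m1(batch), dim=1)
    p2 = torch.softmax(m2(batch), dim=1)
    ensemble = weights[0] * p1 + weights[1] * p2
print(ensemble.argmax(dim=1))
```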
20. VANT-GAN: Adversarial Learning for Discrepancy-Based Visual Attribution in Medical Imaging. Pattern Recognit Lett 2022. DOI: 10.1016/j.patrec.2022.02.005.