1. Attallah O. Skin-CAD: Explainable deep learning classification of skin cancer from dermoscopic images by feature selection of dual high-level CNNs features and transfer learning. Comput Biol Med 2024; 178:108798. [PMID: 38925085] [DOI: 10.1016/j.compbiomed.2024.108798] [Received: 01/09/2024] [Revised: 05/30/2024] [Accepted: 06/19/2024] [Indexed: 06/28/2024]
Abstract
Skin cancer (SC) significantly impacts the health of many individuals worldwide. It is therefore imperative to identify and diagnose such conditions at their earliest stages using dermoscopic imaging. Computer-aided diagnosis (CAD) methods relying on deep learning techniques, especially convolutional neural networks (CNNs), can effectively address this issue with outstanding outcomes. Nevertheless, such black-box methodologies lead to a deficiency in confidence, as dermatologists are unable to comprehend and verify the predictions made by these models. This article presents an advanced explainable artificial intelligence (XAI)-based CAD system named "Skin-CAD" for the classification of dermoscopic photographs of SC. The system categorises the photographs into two classes, benign or malignant, and further classifies them into seven subclasses of SC. Skin-CAD employs four CNNs of different topologies and depths. It gathers features from a pair of deep layers of every CNN, particularly the final pooling and fully connected layers, rather than merely depending on attributes from a single deep layer. Skin-CAD applies principal component analysis (PCA) to reduce the dimensions of the pooling-layer features, which also reduces the complexity of training compared with using the full set of deep features from a large CNN. Furthermore, it combines the reduced pooling features with the fully connected features of each CNN. Additionally, Skin-CAD integrates the dual-layer features of the four CNNs instead of depending entirely on the features of a single CNN architecture. Finally, it applies a feature selection step to determine the most important deep attributes, which decreases the overall size of the feature set and streamlines the classification process.
Predictions are analysed in more depth using the local interpretable model-agnostic explanations (LIME) approach, which creates visual interpretations that align with clinicians' existing viewpoints and adhere to recommended standards for general explanations. Two benchmark datasets, Skin Cancer: Malignant vs. Benign and HAM10000, are employed to validate the efficiency of Skin-CAD. The maximum accuracy achieved is 97.2% on the Skin Cancer: Malignant vs. Benign dataset and 96.5% on HAM10000. These findings demonstrate Skin-CAD's potential to assist professional dermatologists in detecting and classifying SC precisely and quickly.
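As a rough illustration of the reduce-then-fuse step described in this abstract (a sketch, not the authors' implementation), pooling-layer features can be projected onto their top principal components and concatenated with fully connected features. Random arrays stand in for real CNN features; all sizes here are arbitrary choices:

```python
import numpy as np

def pca_reduce(X, n_components):
    """Project a feature matrix (samples x dims) onto its top principal components."""
    Xc = X - X.mean(axis=0)                      # centre each feature
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T              # scores in the reduced space

rng = np.random.default_rng(0)
pool_feats = rng.normal(size=(100, 2048))        # stand-in for pooling-layer features
fc_feats = rng.normal(size=(100, 128))           # stand-in for fully connected features

reduced = pca_reduce(pool_feats, 64)             # shrink the large pooling features
fused = np.concatenate([reduced, fc_feats], axis=1)
print(fused.shape)                               # (100, 192)
```

The same pattern would be repeated per CNN before the cross-network fusion and feature selection stages.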
Affiliation(s)
- Omneya Attallah
- Department of Electronics and Communications Engineering, College of Engineering and Technology, Arab Academy for Science, Technology and Maritime Transport, Alexandria, 21937, Egypt; Wearables, Biosensing, and Biosignal Processing Laboratory, Arab Academy for Science, Technology and Maritime Transport, Alexandria, 21937, Egypt.
2. Attallah O. ADHD-AID: Aiding Tool for Detecting Children's Attention Deficit Hyperactivity Disorder via EEG-Based Multi-Resolution Analysis and Feature Selection. Biomimetics (Basel) 2024; 9:188. [PMID: 38534873] [DOI: 10.3390/biomimetics9030188] [Received: 01/31/2024] [Revised: 03/12/2024] [Accepted: 03/13/2024] [Indexed: 03/28/2024]
Abstract
The severe effects of attention deficit hyperactivity disorder (ADHD) among adolescents can be prevented by timely identification and prompt therapeutic intervention. Traditional diagnostic techniques are complicated and time-consuming because they rely on subjective assessment. Machine learning (ML) techniques can automate this process and avoid the limitations of manual evaluation. However, most ML-based models extract few features from a single domain. Furthermore, most ML-based studies have not examined the most effective electrode placement on the scalp, which affects the identification process, while others have not employed feature selection approaches to reduce the feature-space dimension and consequently the complexity of the training models. This study presents an ML-based tool for automatically identifying ADHD entitled "ADHD-AID". It uses several multi-resolution analysis techniques, including variational mode decomposition, discrete wavelet transform, and empirical wavelet decomposition, and extracts thirty features from the time and time-frequency domains, including nonlinear, band-power, entropy-based, and statistical features. The study also examines the best EEG electrode placement for detecting ADHD, as well as the location combinations with the greatest impact on identification accuracy. It further uses a variety of feature selection methods to choose the features with the greatest influence on the diagnosis of ADHD, reducing the classifier's complexity and training time. The results show that ADHD-AID achieved scores for accuracy, sensitivity, specificity, F1-score, and Matthews correlation coefficient of 0.991, 0.989, 0.992, 0.989, and 0.982, respectively, in identifying ADHD with 10-fold cross-validation, and an area under the curve of 0.9958.
ADHD-AID's results are significantly higher than those of all earlier studies for the detection of ADHD in adolescents. These notable and trustworthy findings support the use of such an automated tool as a means of assistance for doctors in the prompt identification of ADHD in youngsters.
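To make the multi-resolution feature idea concrete, here is a minimal sketch (not the paper's pipeline) that decomposes a toy EEG epoch with a hand-rolled Haar discrete wavelet transform and collects statistical, band-power, and entropy features per sub-band; the paper additionally uses variational mode and empirical wavelet decompositions:

```python
import numpy as np

def haar_dwt(signal):
    """One level of the Haar discrete wavelet transform -> (approximation, detail)."""
    s = signal[: len(signal) // 2 * 2].reshape(-1, 2)
    return (s[:, 0] + s[:, 1]) / np.sqrt(2), (s[:, 0] - s[:, 1]) / np.sqrt(2)

def subband_features(x, levels=3):
    """Statistical + band-power + entropy features from each detail sub-band."""
    feats, approx = [], x
    for _ in range(levels):
        approx, detail = haar_dwt(approx)
        p = detail**2 / np.sum(detail**2)            # normalised band energy
        feats += [detail.mean(), detail.std(),
                  np.sum(detail**2),                 # band power
                  -np.sum(p * np.log2(p + 1e-12))]   # Shannon entropy
    return np.array(feats)

rng = np.random.default_rng(1)
eeg = rng.normal(size=512)        # one toy EEG channel epoch
f = subband_features(eeg)
print(f.shape)                    # (12,) -> 4 features x 3 decomposition levels
```

A real system would compute such vectors per channel and pass them to a feature selection stage before classification.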
Affiliation(s)
- Omneya Attallah
- Department of Electronics and Communications Engineering, College of Engineering and Technology, Arab Academy for Science, Technology and Maritime Transport, Alexandria 21937, Egypt
- Wearables, Biosensing and Biosignal Processing Laboratory, Arab Academy for Science, Technology and Maritime Transport, Alexandria 21937, Egypt
3. Peng Y, Xu H, Zhao L, Zhu W, Shi F, Wang M, Zhou Y, Feng K, Chen X. Automatic zoning for retinopathy of prematurity with a key area location system. Biomed Opt Express 2024; 15:725-742. [PMID: 38404326] [PMCID: PMC10890844] [DOI: 10.1364/boe.506119] [Received: 09/15/2023] [Revised: 10/29/2023] [Accepted: 10/30/2023] [Indexed: 02/27/2024]
Abstract
Retinopathy of prematurity (ROP) usually occurs in premature or low-birth-weight infants and is an important cause of childhood blindness worldwide. Diagnosis and treatment of ROP are mainly based on stage, zone, and disease, with the zone being more important than the stage in serious ROP. However, owing to the great subjectivity of and differences among ophthalmologists in ROP zoning, achieving an accurate and objective zoning diagnosis is challenging. To address this, we propose a new key area location (KAL) system that achieves automatic and objective ROP zoning based on its definition, consisting of a key point location network and an object detection network. First, to balance real-time performance and high accuracy, a lightweight residual heatmap network (LRH-Net) is designed to locate the optic disc (OD) and macular center; it transforms the location problem into a pixel-level regression problem based on heatmap regression and maximum-likelihood estimation theory. In addition, to meet clinical needs for accuracy and real-time detection, the one-stage object detection framework YOLOv3 is used to locate ROP lesions. Finally, the experimental results demonstrate that the proposed KAL system achieves good performance on key point location (6.13 and 17.03 pixels error for OD and macular center location) and ROP lesion location (93.05% AP50), and its ROP zoning results show good consistency with those manually labeled by clinicians. The system can support clinical decision-making and help ophthalmologists correctly interpret ROP zoning, reducing subjective differences in diagnosis and increasing the interpretability of zoning results.
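The heatmap-regression idea behind keypoint location networks like LRH-Net can be sketched without any network at all (a toy illustration, not the authors' code): the ground truth is a Gaussian centred on the keypoint, and decoding is simply taking the location of the predicted heatmap's peak:

```python
import numpy as np

def gaussian_heatmap(h, w, cy, cx, sigma=3.0):
    """Ground-truth heatmap: a 2-D Gaussian centred on the keypoint (cy, cx)."""
    ys, xs = np.mgrid[0:h, 0:w]
    return np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2 * sigma**2))

def decode(heatmap):
    """Maximum-likelihood keypoint estimate: the location of the heatmap peak."""
    return np.unravel_index(np.argmax(heatmap), heatmap.shape)

hm = gaussian_heatmap(64, 64, cy=20, cx=45)   # e.g. an optic-disc target map
print(decode(hm))                             # (20, 45)
```

A trained network regresses such maps pixel-wise; sub-pixel refinements of the argmax decode are common but omitted here.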
Affiliation(s)
- Yuanyuan Peng
- School of Biomedical Engineering, Anhui Medical University, Anhui 230032, China
- Hua Xu
- Department of Ophthalmology, Children's Hospital of Soochow University, Jiangsu 215025, China
- Lei Zhao
- Department of Ophthalmology, Children's Hospital of Soochow University, Jiangsu 215025, China
- Weifang Zhu
- MIPAV Lab, School of Electronics and Information Engineering, Soochow University, Suzhou, Jiangsu Province 215006, China
- Fei Shi
- MIPAV Lab, School of Electronics and Information Engineering, Soochow University, Suzhou, Jiangsu Province 215006, China
- Meng Wang
- Institute of High Performance Computing, A*STAR, Singapore 138632, Singapore
- Yi Zhou
- MIPAV Lab, School of Electronics and Information Engineering, Soochow University, Suzhou, Jiangsu Province 215006, China
- Kehong Feng
- Department of Ophthalmology, Children's Hospital of Soochow University, Jiangsu 215025, China
- Xinjian Chen
- MIPAV Lab, School of Electronics and Information Engineering, Soochow University, Suzhou, Jiangsu Province 215006, China
- State Key Laboratory of Radiation Medicine and Protection, Soochow University, Suzhou 215123, China
4. Hoyek S, Cruz NFSD, Patel NA, Al-Khersan H, Fan KC, Berrocal AM. Identification of novel biomarkers for retinopathy of prematurity in preterm infants by use of innovative technologies and artificial intelligence. Prog Retin Eye Res 2023; 97:101208. [PMID: 37611892] [DOI: 10.1016/j.preteyeres.2023.101208] [Received: 06/19/2023] [Revised: 08/16/2023] [Accepted: 08/18/2023] [Indexed: 08/25/2023]
Abstract
Retinopathy of prematurity (ROP) is a leading cause of preventable vision loss in preterm infants. While appropriate screening is crucial for early identification and treatment of ROP, current screening guidelines remain limited by inter-examiner variability in screening modalities, the absence of local ROP screening protocols in some settings, a paucity of resources, and the increased survival of younger and smaller infants. This review summarizes the advancements and challenges of current innovative technologies, artificial intelligence (AI), and predictive biomarkers for the diagnosis and management of ROP. We provide a contemporary overview of AI-based models for detection of ROP, its severity, progression, and response to treatment. To address the transition from experimental settings to real-world clinical practice, challenges to the clinical implementation of AI for ROP are reviewed and potential solutions are proposed. The use of optical coherence tomography (OCT) and OCT angiography (OCTA) technology is also explored, enabling evaluation of subclinical ROP characteristics that are often imperceptible on fundus examination. Furthermore, we explore several potential biomarkers that could reduce the need for invasive procedures and enhance diagnostic accuracy and treatment efficacy. Finally, we emphasize the need for a symbiotic integration of biologic and imaging biomarkers and AI in ROP screening, where the robustness of biomarkers in early disease detection is complemented by the predictive precision of AI algorithms.
Affiliation(s)
- Sandra Hoyek
- Department of Ophthalmology, Massachusetts Eye and Ear, Harvard Medical School, Boston, MA, USA
- Natasha F S da Cruz
- Bascom Palmer Eye Institute, University of Miami Leonard M. Miller School of Medicine, Miami, FL, USA
- Nimesh A Patel
- Department of Ophthalmology, Massachusetts Eye and Ear, Harvard Medical School, Boston, MA, USA
- Hasenin Al-Khersan
- Bascom Palmer Eye Institute, University of Miami Leonard M. Miller School of Medicine, Miami, FL, USA
- Kenneth C Fan
- Bascom Palmer Eye Institute, University of Miami Leonard M. Miller School of Medicine, Miami, FL, USA
- Audina M Berrocal
- Bascom Palmer Eye Institute, University of Miami Leonard M. Miller School of Medicine, Miami, FL, USA
5. Liu YF, Ji YK, Fei FQ, Chen NM, Zhu ZT, Fei XZ. Research progress in artificial intelligence assisted diabetic retinopathy diagnosis. Int J Ophthalmol 2023; 16:1395-1405. [PMID: 37724288] [PMCID: PMC10475636] [DOI: 10.18240/ijo.2023.09.05] [Received: 04/28/2023] [Accepted: 06/14/2023] [Indexed: 09/20/2023]
Abstract
Diabetic retinopathy (DR) is one of the most common retinal vascular diseases and one of the main causes of blindness worldwide. Early detection and treatment can effectively delay vision decline and even blindness in patients with DR. In recent years, artificial intelligence (AI) models constructed by machine learning and deep learning (DL) algorithms have been widely used in ophthalmology research, especially in diagnosing and treating ophthalmic diseases, particularly DR. Regarding DR, AI has mainly been used in its diagnosis, grading, and lesion recognition and segmentation, and good research and application results have been achieved. This study summarizes the research progress in AI models based on machine learning and DL algorithms for DR diagnosis and discusses some limitations and challenges in AI research.
Affiliation(s)
- Yun-Fang Liu
- Department of Ophthalmology, First People's Hospital of Huzhou, Huzhou University, Huzhou 313000, Zhejiang Province, China
- Yu-Ke Ji
- Eye Hospital, Nanjing Medical University, Nanjing 210000, Jiangsu Province, China
- Fang-Qin Fei
- Department of Endocrinology, First People's Hospital of Huzhou, Huzhou University, Huzhou 313000, Zhejiang Province, China
- Nai-Mei Chen
- Department of Ophthalmology, Huai'an Hospital of Huai'an City, Huai'an 223000, Jiangsu Province, China
- Zhen-Tao Zhu
- Department of Ophthalmology, Huai'an Hospital of Huai'an City, Huai'an 223000, Jiangsu Province, China
- Xing-Zhen Fei
- Department of Endocrinology, First People's Hospital of Huzhou, Huzhou University, Huzhou 313000, Zhejiang Province, China
6. Ochoa-Astorga JE, Wang L, Du W, Peng Y. A Straightforward Bifurcation Pattern-Based Fundus Image Registration Method. Sensors (Basel) 2023; 23:7809. [PMID: 37765866] [PMCID: PMC10534639] [DOI: 10.3390/s23187809] [Received: 07/10/2023] [Revised: 08/23/2023] [Accepted: 09/08/2023] [Indexed: 09/29/2023]
Abstract
Fundus image registration is crucial in eye disease examination, as it enables the alignment of overlapping fundus images, facilitating a comprehensive assessment of conditions like diabetic retinopathy, where a single image's limited field of view might be insufficient. By combining multiple images, the field of view for retinal analysis is extended, and resolution is enhanced through super-resolution imaging. Moreover, this method facilitates patient follow-up through longitudinal studies. This paper proposes a straightforward method for fundus image registration based on bifurcations, which serve as prominent landmarks. The approach aims to establish a baseline for fundus image registration using these landmarks as feature points, addressing the current challenge of validation in this field. The proposed approach involves the use of a robust vascular tree segmentation method to detect feature points within a specified range. The method involves coarse vessel segmentation to analyze patterns in the skeleton of the segmentation foreground, followed by feature description based on the generation of a histogram of oriented gradients and determination of image relation through a transformation matrix. Image blending produces a seamless registered image. Evaluation on the FIRE dataset using registration error as the key parameter for accuracy demonstrates the method's effectiveness. The results show the superior performance of the proposed method compared to other techniques using vessel-based feature extraction or partially based on SURF, achieving an area under the curve of 0.526 for the entire FIRE dataset.
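Bifurcation landmarks of the kind this method relies on can be found from a vessel skeleton with a simple neighbour-count rule (a toy sketch under that assumption, not the paper's detector): a skeleton pixel with three or more 8-connected skeleton neighbours is a branch point.

```python
import numpy as np

def bifurcation_points(skeleton):
    """Return skeleton pixels with three or more 8-connected neighbours."""
    s = np.pad(skeleton.astype(int), 1)
    # Sum of the 8 neighbours at every pixel, via shifted copies of the image.
    neighbours = sum(np.roll(np.roll(s, dy, 0), dx, 1)
                     for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                     if (dy, dx) != (0, 0))
    mask = (s == 1) & (neighbours >= 3)
    ys, xs = np.nonzero(mask)
    return list(zip(ys - 1, xs - 1))      # undo the padding offset

# Toy Y-shaped "vessel" skeleton with a single bifurcation at (2, 2).
sk = np.zeros((5, 5), int)
for y, x in [(0, 0), (1, 1), (0, 4), (1, 3), (2, 2), (3, 2), (4, 2)]:
    sk[y, x] = 1
print(bifurcation_points(sk))             # [(2, 2)]
```

The detected points would then be described (e.g. with oriented-gradient histograms, as in the paper) and matched to estimate the transformation matrix.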
Affiliation(s)
- Linni Wang
- Retina & Neuron-Ophthalmology, Tianjin Medical University Eye Hospital, Tianjin 300084, China
- Weiwei Du
- Information and Human Science, Kyoto Institute of Technology University, Kyoto 6068585, Japan
- Yahui Peng
- School of Electronic and Information Engineering, Beijing Jiaotong University, Beijing 100044, China
7. Attallah O. RiPa-Net: Recognition of Rice Paddy Diseases with Duo-Layers of CNNs Fostered by Feature Transformation and Selection. Biomimetics (Basel) 2023; 8:417. [PMID: 37754168] [PMCID: PMC10527565] [DOI: 10.3390/biomimetics8050417] [Received: 07/31/2023] [Revised: 08/31/2023] [Accepted: 09/05/2023] [Indexed: 09/28/2023]
Abstract
Rice paddy diseases significantly reduce the quantity and quality of crops, so it is essential to recognize them quickly and accurately for prevention and control. Deep learning (DL)-based computer-assisted expert systems are encouraging approaches to solving this issue and dealing with the dearth of subject-matter specialists in this area. Nonetheless, a major generalization obstacle is posed by the small discrepancies between various classes of paddy diseases. Numerous studies have used features taken from a single deep layer of an individual complex DL construction with many deep layers and parameters, relying on spatial knowledge only and training their recognition models with a large number of features. This study suggests a pipeline called "RiPa-Net" based on three lightweight CNNs that can identify and categorize nine paddy diseases as well as healthy paddy. The suggested pipeline gathers features from two different layers of each of the CNNs. Moreover, it applies the dual-tree complex wavelet transform (DTCWT) to the deep features of the first layer to obtain spectral-temporal information. Additionally, it incorporates the first-layer deep features of the three CNNs using principal component analysis (PCA) and discrete cosine transform (DCT) transformation methods, which reduce the dimension of the first-layer features. The second layer's spatial deep features are then combined with these fused time-frequency deep features. After that, a feature selection process is introduced to reduce the size of the feature vector and choose only those features that have a significant impact on the recognition process, thereby further reducing recognition complexity. According to the results, combining deep features from two layers of different lightweight CNNs can improve recognition accuracy.
Performance also improves as a result of the acquired spatial-spectral-temporal information used to learn models. Using 300 features, the cubic support vector machine (SVM) achieves an outstanding accuracy of 97.5%. The competitive ability of the suggested pipeline is confirmed by a comparison of the experimental results with findings from previously conducted research on the recognition of paddy diseases.
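The DCT-based dimensionality reduction mentioned above can be illustrated with a few lines of NumPy (a sketch, not the paper's code): each feature vector is transformed with an orthonormal DCT-II matrix and only the lowest-order coefficients are kept. The sizes below are arbitrary toy values.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II transform matrix (rows are basis vectors)."""
    k = np.arange(n)[:, None]
    M = np.cos(np.pi * (2 * np.arange(n) + 1) * k / (2 * n))
    M *= np.sqrt(2.0 / n)
    M[0] *= 1 / np.sqrt(2)
    return M

def dct_reduce(features, keep):
    """Transform each row and keep only the lowest `keep` DCT coefficients."""
    n = features.shape[1]
    return features @ dct_matrix(n).T[:, :keep]

rng = np.random.default_rng(2)
deep_feats = rng.normal(size=(32, 256))   # toy stand-in for first-layer deep features
compact = dct_reduce(deep_feats, keep=40)
print(compact.shape)                      # (32, 40)
```

Because the matrix is orthonormal, truncating coefficients is an energy-compacting projection rather than an arbitrary crop of the raw features.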
Affiliation(s)
- Omneya Attallah
- Department of Electronics and Communications Engineering, College of Engineering and Technology, Arab Academy for Science, Technology and Maritime Transport, Alexandria 1029, Egypt
8. Ramanathan A, Athikarisamy SE, Lam GC. Artificial intelligence for the diagnosis of retinopathy of prematurity: A systematic review of current algorithms. Eye (Lond) 2023; 37:2518-2526. [PMID: 36577806] [PMCID: PMC10397194] [DOI: 10.1038/s41433-022-02366-y] [Received: 06/16/2022] [Revised: 11/23/2022] [Accepted: 12/09/2022] [Indexed: 12/29/2022]
Abstract
BACKGROUND/OBJECTIVES With the increasing survival of premature infants, there is an increased demand to provide adequate retinopathy of prematurity (ROP) services. Wide-field digital retinal imaging (WFDRI) and artificial intelligence (AI) have shown promise in the field of ROP and have the potential to improve diagnostic performance and reduce the workload for screening ophthalmologists. The aim of this review is to systematically review and summarise the diagnostic characteristics of existing deep learning algorithms. SUBJECTS/METHODS Two authors independently searched the literature, and studies using a deep learning system on retinal imaging were included. Data were extracted, assessed, and reported using PRISMA guidelines. RESULTS Twenty-seven studies were included in this review. Nineteen studies used AI systems to diagnose ROP, classify its staging, diagnose the presence of pre-plus or plus disease, or assess the quality of retinal images. The included studies reported a sensitivity of 71%-100%, specificity of 74%-99%, and area under the curve of 91%-99% for the primary outcome of the study. AI techniques were comparable to the assessment of ophthalmologists in terms of overall accuracy and sensitivity. Eight studies evaluated vascular severity scores and were able to accurately differentiate severity using an automated classification score. CONCLUSION Artificial intelligence for ROP diagnosis is a growing field, and many potential utilities have already been identified, including the detection of plus disease, staging of disease, and a new automated severity score. AI has a role as an adjunct to clinical assessment; however, there is currently insufficient evidence to support its use as a sole diagnostic tool.
Affiliation(s)
- Ashwin Ramanathan
- Department of Paediatrics, Perth Children's Hospital, Perth, Australia
- Sam Ebenezer Athikarisamy
- Department of Neonatology, Perth Children's Hospital, Perth, Australia
- School of Medicine, University of Western Australia, Crawley, Australia
- Geoffrey C Lam
- Department of Ophthalmology, Perth Children's Hospital, Perth, Australia
- Centre for Ophthalmology and Visual Science, University of Western Australia, Crawley, Australia
9. Alwakid G, Gouda W, Humayun M. Enhancement of Diabetic Retinopathy Prognostication Using Deep Learning, CLAHE, and ESRGAN. Diagnostics (Basel) 2023; 13:2375. [PMID: 37510123] [PMCID: PMC10378524] [DOI: 10.3390/diagnostics13142375] [Received: 06/13/2023] [Revised: 07/07/2023] [Accepted: 07/10/2023] [Indexed: 07/30/2023]
Abstract
One of the primary causes of blindness in the diabetic population is diabetic retinopathy (DR). Many people could have their sight saved if DR were detected and treated in time. Numerous deep learning (DL)-based methods have been presented to improve on human analysis. Using a DL model with three scenarios, this research classified DR and its severity stages from fundus images in the "APTOS 2019 Blindness Detection" dataset. Augmentation methods were implemented to generate a balanced dataset with consistent input parameters across all test scenarios, and the DenseNet-121 model was employed for the final categorization. Several methods, including Enhanced Super-Resolution Generative Adversarial Networks (ESRGAN), Histogram Equalization (HIST), and Contrast-Limited Adaptive Histogram Equalization (CLAHE), were used to enhance image quality. The suggested model detected DR across all five APTOS 2019 severity grades with a highest test accuracy of 98.36%, top-2 accuracy of 100%, and top-3 accuracy of 100%. Precision, recall, and F1-score were also computed on APTOS 2019 to gauge the efficacy of the proposed model. Furthermore, comparing CLAHE + ESRGAN against both state-of-the-art technology and the other evaluated methods showed that it was more effective for DR classification.
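As a minimal illustration of the histogram-equalization family of enhancements used here (global HIST only; CLAHE additionally tiles the image and clips each local histogram, which this sketch omits), intensities are remapped through the normalised cumulative distribution. The toy low-contrast image below is random data, not a real fundus photograph:

```python
import numpy as np

def equalize(img):
    """Global histogram equalisation for an 8-bit grayscale image."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum().astype(float)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())   # normalise to [0, 1]
    return (cdf * 255).astype(np.uint8)[img]            # remap every intensity

rng = np.random.default_rng(3)
fundus = rng.integers(90, 160, size=(64, 64)).astype(np.uint8)  # low-contrast toy image
out = equalize(fundus)
print(fundus.min(), fundus.max(), "->", out.min(), out.max())   # dynamic range expanded
```

In practice CLAHE is preferred for retinal images precisely because the global variant can over-amplify noise in uniform regions.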
Affiliation(s)
- Ghadah Alwakid
- Department of Computer Science, College of Computer and Information Sciences, Jouf University, Sakakah 72341, Al Jouf, Saudi Arabia
- Walaa Gouda
- Department of Electrical Engineering, Faculty of Engineering at Shoubra, Benha University, Cairo 11672, Egypt
- Mamoona Humayun
- Department of Information Systems, College of Computer and Information Sciences, Jouf University, Sakakah 72341, Al Jouf, Saudi Arabia
10. Attallah O. RADIC: A tool for diagnosing COVID-19 from chest CT and X-ray scans using deep learning and quad-radiomics. Chemom Intell Lab Syst 2023; 233:104750. [PMID: 36619376] [PMCID: PMC9807270] [DOI: 10.1016/j.chemolab.2022.104750] [Received: 11/11/2022] [Revised: 12/29/2022] [Accepted: 12/30/2022] [Indexed: 05/28/2023]
Abstract
Deep learning (DL) algorithms have demonstrated a high ability to perform speedy and accurate COVID-19 diagnosis utilizing computed tomography (CT) and X-ray scans. The spatial information in these images was used to train DL models in the majority of relevant studies. However, training these models with images generated by radiomics approaches could enhance diagnostic accuracy. Furthermore, combining information from several radiomics approaches with time-frequency representations of the COVID-19 patterns can increase performance even further. This study introduces "RADIC", an automated tool that uses three DL models trained on radiomics-generated images to detect COVID-19. First, four radiomics approaches are used to analyze the original CT and X-ray images. Next, each of the three DL models is trained on a different set of radiomics, X-ray, and CT images. Then, for each DL model, deep features are obtained and their dimensions are reduced using the fast Walsh-Hadamard transform, yielding a time-frequency representation of the COVID-19 patterns. The tool then uses the discrete cosine transform to combine these deep features, and four classification models perform the final classification. To validate the performance of RADIC, two benchmark COVID-19 datasets (CT and X-ray) are employed. The final accuracy attained using RADIC is 99.4% and 99% for the first and second datasets, respectively. A comparison with related studies in the literature shows that RADIC achieves superior performance. These results prove that a DL model can be trained more effectively with images generated by radiomics techniques than with the original X-ray and CT images, and that incorporating deep features extracted from DL models trained with multiple radiomics approaches improves diagnostic accuracy.
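The fast Walsh-Hadamard transform used for the reduction step is easy to sketch in pure NumPy (a toy illustration, not RADIC's implementation): a butterfly recursion over power-of-two-length vectors, after which only low-sequency coefficients are kept.

```python
import numpy as np

def fwht(x):
    """Fast Walsh-Hadamard Transform; input length must be a power of two."""
    x = np.asarray(x, dtype=float).copy()
    h = 1
    while h < len(x):
        for i in range(0, len(x), h * 2):      # butterfly over blocks of size 2h
            a = x[i:i + h].copy()
            b = x[i + h:i + 2 * h].copy()
            x[i:i + h], x[i + h:i + 2 * h] = a + b, a - b
        h *= 2
    return x

feats = np.array([1.0, 0.0, 1.0, 0.0, 0.0, 1.0, 1.0, 0.0])  # toy deep-feature vector
spectrum = fwht(feats)
compressed = spectrum[:4]        # keep low-sequency coefficients only
print(spectrum[0])               # 4.0 -> first coefficient is the sum of the inputs
```

Because the (unnormalised) transform is its own inverse up to a factor of the length, the kept coefficients remain an invertible-style linear summary of the original vector.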
Affiliation(s)
- Omneya Attallah
- Department of Electronics and Communications Engineering, College of Engineering & Technology, Arab Academy for Science, Technology & Maritime Transport, Alexandria, Egypt
11. Auto-MyIn: Automatic diagnosis of myocardial infarction via multiple GLCMs, CNNs, and SVMs. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2022.104273] [Indexed: 11/06/2022]
12. GabROP: Gabor Wavelets-Based CAD for Retinopathy of Prematurity Diagnosis via Convolutional Neural Networks. Diagnostics (Basel) 2023; 13:171. [PMID: 36672981] [PMCID: PMC9857608] [DOI: 10.3390/diagnostics13020171] [Received: 10/13/2022] [Revised: 12/12/2022] [Accepted: 12/19/2022] [Indexed: 01/05/2023]
Abstract
One of the most serious and dangerous ocular problems in premature infants is retinopathy of prematurity (ROP), a proliferative vascular disease. Ophthalmologists can use automatic computer-assisted diagnostic (CAD) tools to help them make a safe, accurate, and low-cost diagnosis of ROP. All previous CAD tools for ROP diagnosis use the original fundus images, but learning a discriminative representation from ROP-related fundus images is difficult. Textural analysis techniques, such as Gabor wavelets (GW), can expose significant texture information that helps artificial intelligence (AI) based models improve diagnostic accuracy. In this paper, an effective and automated CAD tool, namely GabROP, based on GW and multiple deep learning (DL) models is proposed. Initially, GabROP analyzes fundus images using GW and generates several sets of GW images. Next, these sets of images are used to train three convolutional neural network (CNN) models independently; the original fundus images are also used to train these networks. Using the discrete wavelet transform (DWT), texture features retrieved from every CNN trained with the various sets of GW images are combined to create a textural-spectral-temporal representation. Afterward, for each CNN, these features are concatenated with spatial deep features obtained from the original fundus images. Finally, the concatenated features of all three CNNs are incorporated using the discrete cosine transform (DCT) to reduce the feature size caused by the fusion process. The outcomes of GabROP show that it is accurate and efficient for ophthalmologists. Additionally, the effectiveness of GabROP is compared with recently developed ROP diagnostic techniques. Owing to GabROP's superior performance over competing tools, ophthalmologists may be able to identify ROP more reliably and precisely, potentially reducing diagnostic effort and examination time.
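A Gabor analysis of the kind GabROP starts from can be sketched by building the filter bank directly (a toy sketch, not the paper's code; all parameter values below are arbitrary): each kernel is a sinusoid windowed by an oriented Gaussian, and one kernel is generated per orientation.

```python
import numpy as np

def gabor_kernel(size=21, theta=0.0, lam=8.0, sigma=4.0, gamma=0.5):
    """Real-valued Gabor kernel: a cosine carrier windowed by an elliptic Gaussian."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)    # rotate coordinates by theta
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + gamma**2 * yr**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * xr / lam)

# A small bank over four orientations, as a GW-style texture analysis might use.
bank = [gabor_kernel(theta=t) for t in np.linspace(0, np.pi, 4, endpoint=False)]
print(len(bank), bank[0].shape)   # 4 (21, 21)
```

Convolving a fundus image with each kernel in the bank would yield the per-orientation GW images that the three CNNs are then trained on.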
13. Attallah O. MonDiaL-CAD: Monkeypox diagnosis via selected hybrid CNNs unified with feature selection and ensemble learning. Digit Health 2023; 9:20552076231180054. [PMID: 37312961] [PMCID: PMC10259124] [DOI: 10.1177/20552076231180054] [Received: 03/09/2023] [Accepted: 05/18/2023] [Indexed: 06/15/2023]
Abstract
Objective: Recently, the monkeypox virus has been slowly evolving, and there are fears that it will spread as COVID-19 did. Computer-aided diagnosis (CAD) based on deep learning approaches, especially convolutional neural networks (CNNs), can assist in the rapid identification of reported incidents. Current CAD tools are mostly based on an individual CNN. Few employ multiple CNNs, and those that do have not investigated which combination of CNNs has the greatest impact on performance. Furthermore, they relied only on the spatial information of deep features to train their models. This study constructs a CAD tool named "Monkey-CAD" that addresses these limitations and diagnoses monkeypox rapidly and accurately. Methods: Monkey-CAD extracts features from eight CNNs and then examines the best possible combination of deep features that influences classification. It employs the discrete wavelet transform (DWT) to merge features, which diminishes the fused features' size and provides a time-frequency representation. The size of these deep features is then further reduced via an entropy-based feature selection approach. The reduced fused features, which deliver a better representation of the input, finally feed three ensemble classifiers. Results: Two freely accessible datasets, Monkeypox Skin Image (MSID) and Monkeypox Skin Lesion (MSLD), are employed in this study. Monkey-CAD discriminates between cases with and without monkeypox, achieving an accuracy of 97.1% on the MSID dataset and 98.7% on the MSLD dataset. Conclusions: These promising results demonstrate that Monkey-CAD can be employed to assist health practitioners, and they verify that fusing deep features from selected CNNs boosts performance.
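The DWT fusion and entropy-based selection steps can be sketched roughly as follows. The single-level Haar fusion, the histogram binning scheme, and the toy feature sizes are assumptions for illustration, not Monkey-CAD's exact design:

```python
import numpy as np

def haar_dwt_fuse(f1, f2):
    # Concatenate two deep-feature vectors, then keep the single-level Haar
    # approximation coefficients, halving the fused size (a time-frequency view).
    x = np.concatenate([f1, f2])
    if len(x) % 2:
        x = np.append(x, 0.0)          # pad to even length
    pairs = x.reshape(-1, 2)
    return (pairs[:, 0] + pairs[:, 1]) / np.sqrt(2.0)

def entropy_rank(features, labels, bins=10):
    # Rank features by information gain: label entropy minus conditional
    # entropy of the label given the binned feature value.
    def entropy(p):
        p = p[p > 0]
        return -(p * np.log2(p)).sum()
    n, d = features.shape
    _, counts = np.unique(labels, return_counts=True)
    h_y = entropy(counts / n)
    scores = np.empty(d)
    for j in range(d):
        edges = np.histogram_bin_edges(features[:, j], bins)
        binned = np.digitize(features[:, j], edges)
        h_cond = 0.0
        for b in np.unique(binned):
            mask = binned == b
            _, c = np.unique(labels[mask], return_counts=True)
            h_cond += mask.mean() * entropy(c / mask.sum())
        scores[j] = h_y - h_cond
    return np.argsort(scores)[::-1]    # most informative features first

f1, f2 = np.random.rand(512), np.random.rand(512)
fused = haar_dwt_fuse(f1, f2)          # 1024 fused dims -> 512 approximation dims
X, y = np.random.rand(50, 8), np.random.randint(0, 2, 50)
order = entropy_rank(X, y)
print(fused.shape, order[:3])
```

The key property shown is that DWT-based merging both combines and compresses the feature sets before selection trims them further.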
Affiliation(s)
- Omneya Attallah
- Department of Electronics and Communications Engineering, College of Engineering and Technology, Arab Academy for Science, Technology and Maritime Transport, Alexandria, Egypt
14
Bujoreanu Bezman L, Tiutiuca C, Totolici G, Carneciu N, Bujoreanu FC, Ciortea DA, Niculet E, Fulga A, Alexandru AM, Stan DJ, Nechita A. Latest Trends in Retinopathy of Prematurity: Research on Risk Factors, Diagnostic Methods and Therapies. Int J Gen Med 2023; 16:937-949. [PMID: 36942030 PMCID: PMC10024537 DOI: 10.2147/ijgm.s401122] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/12/2022] [Accepted: 02/17/2023] [Indexed: 03/15/2023] Open
Abstract
Retinopathy of prematurity (ROP) is a vasoproliferative disorder with an imminent risk of blindness in cases where early diagnosis and treatment are not performed. Since Terry first described this condition, doctors have been constantly motivated to give these fragile beings a chance at life with optimal visual acuity, and over time several specific advances have been made in the management of ROP. Apart from the best-known risk factors, this narrative review brings to light the latest research on new potential risk factors, such as proteinuria, insulin-like growth factor 1 (IGF-1), and blood transfusions. Digital imaging has revolutionized the management of retinal pathologies and is increasingly used in identifying and staging ROP, particularly in disadvantaged regions, by means of telescreening. Moreover, optical coherence tomography (OCT) and automated diagnostic tools based on deep learning offer new perspectives on ROP diagnosis. The new therapeutic trend based on anti-VEGF agents is increasingly applied in the treatment of ROP patients, and recent research supports the theory that these agents do not interfere with the neurodevelopment of premature babies.
Affiliation(s)
- Laura Bujoreanu Bezman
- Department of Ophthalmology, “Sfantul Apostol Andrei” Emergency Clinical Hospital, Galati, Romania
- Department of Morphological and Functional Sciences, Faculty of Medicine and Pharmacy, “Dunărea de Jos” University, Galati, Romania
- Carmen Tiutiuca
- Department of Ophthalmology, “Sfantul Apostol Andrei” Emergency Clinical Hospital, Galati, Romania
- Clinical Surgical Department, Faculty of Medicine and Pharmacy, “Dunărea de Jos” University, Galati, Romania
- Correspondence: Carmen Tiutiuca, Clinical Surgical Department, Faculty of Medicine and Pharmacy, “Dunărea de Jos” University, Galati, 800008, Romania, Tel +40741330788, Email
- Geanina Totolici
- Department of Ophthalmology, “Sfantul Apostol Andrei” Emergency Clinical Hospital, Galati, Romania
- Clinical Surgical Department, Faculty of Medicine and Pharmacy, “Dunărea de Jos” University, Galati, Romania
- Nicoleta Carneciu
- Department of Ophthalmology, “Sfantul Apostol Andrei” Emergency Clinical Hospital, Galati, Romania
- Department of Morphological and Functional Sciences, Faculty of Medicine and Pharmacy, “Dunărea de Jos” University, Galati, Romania
- Florin Ciprian Bujoreanu
- Doctoral School of Biomedical Sciences, Faculty of Medicine and Pharmacy, “Dunărea de Jos” University, Galati, Romania
- Correspondence: Florin Ciprian Bujoreanu, Doctoral School of Biomedical Sciences, Faculty of Medicine and Pharmacy, “Dunărea de Jos” University, Galati, 800008, Romania, Tel +40741395844, Email
- Diana Andreea Ciortea
- Department of Pediatrics, “Sfantul Ioan” Emergency Clinical Hospital for Children, Galati, Romania
- Clinical Medical Department, Faculty of Medicine and Pharmacy, “Dunărea de Jos” University, Galati, Romania
- Elena Niculet
- Department of Morphological and Functional Sciences, Faculty of Medicine and Pharmacy, “Dunărea de Jos” University, Galati, Romania
- Doctoral School of Biomedical Sciences, Faculty of Medicine and Pharmacy, “Dunărea de Jos” University, Galati, Romania
- Ana Fulga
- Clinical Surgical Department, Faculty of Medicine and Pharmacy, “Dunărea de Jos” University, Galati, Romania
- Doctoral School of Biomedical Sciences, Faculty of Medicine and Pharmacy, “Dunărea de Jos” University, Galati, Romania
- Anamaria Madalina Alexandru
- Doctoral School of Biomedical Sciences, Faculty of Medicine and Pharmacy, “Dunărea de Jos” University, Galati, Romania
- Department of Neonatology, “Sfantul Apostol Andrei” Emergency Clinical Hospital, Galati, Romania
- Daniela Jicman Stan
- Doctoral School of Biomedical Sciences, Faculty of Medicine and Pharmacy, “Dunărea de Jos” University, Galati, Romania
- Aurel Nechita
- Department of Pediatrics, “Sfantul Ioan” Emergency Clinical Hospital for Children, Galati, Romania
- Clinical Medical Department, Faculty of Medicine and Pharmacy, “Dunărea de Jos” University, Galati, Romania
15
Luo Z, Ding X, Hou N, Wan J. A Deep-Learning-Based Collaborative Edge-Cloud Telemedicine System for Retinopathy of Prematurity. SENSORS (BASEL, SWITZERLAND) 2022; 23:276. [PMID: 36616874 PMCID: PMC9824555 DOI: 10.3390/s23010276] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 11/29/2022] [Revised: 12/22/2022] [Accepted: 12/22/2022] [Indexed: 06/17/2023]
Abstract
Retinopathy of prematurity is an ophthalmic disease with a very high blindness rate, and with its incidence increasing year by year, timely diagnosis and treatment are of great significance. Because the lack of timely and effective fundus screening for premature infants in remote areas leads to an aggravation of the disease and even blindness, this paper proposes a deep learning-based collaborative edge-cloud telemedicine system to mitigate the issue. In the proposed system, deep learning algorithms are mainly used for the classification of processed images. The algorithm is based on ResNet101 and uses undersampling and resampling to alleviate the data imbalance problem common in medical image processing. Artificial intelligence algorithms are combined with a collaborative edge-cloud architecture to implement a comprehensive telemedicine system that realizes timely screening and diagnosis of retinopathy of prematurity in remote areas with shortages, or a complete lack, of expert medical staff. Finally, the algorithm is embedded in a mobile terminal device and deployed with the support of a core hospital of Guangdong Province. The results show an accuracy (ACC) of 75% and an AUC of 60%. This research is significant for the development of telemedicine systems and aims to mitigate the shortage and uneven distribution of medical resources in rural areas.
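The resampling remedy for class imbalance mentioned above can be illustrated with simple random oversampling, which duplicates minority-class rows until the classes balance. The 9:1 toy split and array sizes are assumptions, not the paper's actual data distribution:

```python
import numpy as np

def oversample_minority(X, y, seed=0):
    # Random oversampling: for each class, draw extra samples with replacement
    # until every class matches the majority-class count.
    rng = np.random.default_rng(seed)
    classes, counts = np.unique(y, return_counts=True)
    target = counts.max()
    idx_parts = []
    for c in classes:
        k = np.flatnonzero(y == c)
        extra = rng.choice(k, target - len(k))   # empty for the majority class
        idx_parts.append(np.concatenate([k, extra]))
    idx = np.concatenate(idx_parts)
    return X[idx], y[idx]

X = np.random.rand(100, 8)
y = np.array([0] * 90 + [1] * 10)        # 9:1 imbalance, as a toy example
Xb, yb = oversample_minority(X, y)
print(np.bincount(yb))                   # balanced class counts
```

Undersampling is the mirror image (discarding majority rows); both aim to keep the classifier from collapsing onto the majority class.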
Affiliation(s)
- Zeliang Luo
- College of Electro-Mechanical Engineering, Zhuhai City Polytechnic, Zhuhai 519090, China
- Xiaoxuan Ding
- Guangdong Provincial Key Laboratory of Technique and Equipment for Macromolecular Advanced Manufacturing, School of Mechanical and Automotive Engineering, South China University of Technology, Guangzhou 510641, China
- Ning Hou
- Guangdong Provincial Key Laboratory of Technique and Equipment for Macromolecular Advanced Manufacturing, School of Mechanical and Automotive Engineering, South China University of Technology, Guangzhou 510641, China
- Jiafu Wan
- Guangdong Provincial Key Laboratory of Technique and Equipment for Macromolecular Advanced Manufacturing, School of Mechanical and Automotive Engineering, South China University of Technology, Guangzhou 510641, China
16
Ji Y, Liu S, Hong X, Lu Y, Wu X, Li K, Li K, Liu Y. Advances in artificial intelligence applications for ocular surface diseases diagnosis. Front Cell Dev Biol 2022; 10:1107689. [PMID: 36605721 PMCID: PMC9808405 DOI: 10.3389/fcell.2022.1107689] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/25/2022] [Accepted: 12/05/2022] [Indexed: 01/07/2023] Open
Abstract
In recent years, with the rapid development of computer technology, the continual optimization of various learning algorithms and architectures, and the establishment of numerous large databases, artificial intelligence (AI) has been developed and applied in the field of ophthalmology at an unprecedented pace. In the past, ophthalmological AI research mainly focused on posterior segment diseases, such as diabetic retinopathy, retinopathy of prematurity, age-related macular degeneration, retinal vein occlusion, and glaucomatous optic neuropathy. Meanwhile, an increasing number of studies have employed AI to diagnose ocular surface diseases. In this review, we summarize the research progress of AI in the diagnosis of several ocular surface diseases, namely keratitis, keratoconus, dry eye, and pterygium, and we discuss the limitations and challenges of AI in this setting, as well as prospects for the future.
Affiliation(s)
- Yuke Ji
- The Laboratory of Artificial Intelligence and Bigdata in Ophthalmology, Affiliated Eye Hospital of Nanjing Medical University, Nanjing, China
- Sha Liu
- The Laboratory of Artificial Intelligence and Bigdata in Ophthalmology, Affiliated Eye Hospital of Nanjing Medical University, Nanjing, China
- Xiangqian Hong
- Shenzhen Eye Hospital, Jinan University, Shenzhen, China
- Yi Lu
- Shenzhen Eye Hospital, Jinan University, Shenzhen, China
- Xingyang Wu
- Shenzhen Eye Hospital, Jinan University, Shenzhen, China
- Kunke Li
- Shenzhen Eye Hospital, Jinan University, Shenzhen, China
- Keran Li
- The Laboratory of Artificial Intelligence and Bigdata in Ophthalmology, Affiliated Eye Hospital of Nanjing Medical University, Nanjing, China
- Yunfang Liu
- Department of Ophthalmology, First Affiliated Hospital of Huzhou University, Huzhou, China
- *Correspondence: Yunfang Liu, Keran Li, Kunke Li
17
Attallah O, Aslan MF, Sabanci K. A Framework for Lung and Colon Cancer Diagnosis via Lightweight Deep Learning Models and Transformation Methods. Diagnostics (Basel) 2022; 12:diagnostics12122926. [PMID: 36552933 PMCID: PMC9776637 DOI: 10.3390/diagnostics12122926] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/29/2022] [Revised: 11/19/2022] [Accepted: 11/22/2022] [Indexed: 11/25/2022] Open
Abstract
Lung and colon cancers are among the leading causes of mortality and morbidity. They may develop concurrently in the two organs and negatively impact human life, and if cancer is not diagnosed in its early stages, there is a great likelihood that it will spread between them. The histopathological detection of such malignancies is one of the most crucial components of effective treatment. Although the process is lengthy and complex, deep learning (DL) techniques have made it feasible to complete it more quickly and accurately, enabling researchers to study many more patients in a short time period and at far lower cost. Earlier studies relied on DL models that require great computational ability and resources, and most depended on individual DL models to extract high-dimensional features or to perform diagnoses. In this study, by contrast, a framework based on multiple lightweight DL models is proposed for the early detection of lung and colon cancers. The framework utilizes several transformation methods that perform feature reduction and provide a better representation of the data. Histopathology scans are fed into the ShuffleNet, MobileNet, and SqueezeNet models, and the number of deep features acquired from these models is subsequently reduced using principal component analysis (PCA) and the fast Walsh-Hadamard transform (FWHT). Following that, the discrete wavelet transform (DWT) is used to fuse the FWHT-reduced features obtained from the three DL models, and the three models' PCA features are concatenated. Finally, the diminished features resulting from the PCA and FWHT-DWT reduction and fusion processes are fed to four distinct machine learning algorithms, reaching a highest accuracy of 99.6%. The results show that the proposed framework can distinguish lung and colon cancer variants with fewer features and less computational complexity than existing methods, and they demonstrate that utilizing transformation methods to reduce features can offer a superior interpretation of the data, thus improving the diagnosis procedure.
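The two reduction transforms named above, PCA and the fast Walsh-Hadamard transform, can be sketched in NumPy. The feature dimensions are toy assumptions, and this is an illustration of the transforms themselves, not the authors' implementation:

```python
import numpy as np

def pca_reduce(features, n_components):
    # Project centered deep features onto their top principal components
    # (via SVD), keeping most variance while shrinking dimensionality.
    centered = features - features.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:n_components].T

def fwht(x):
    # In-place fast Walsh-Hadamard transform; length must be a power of two.
    x = x.astype(float).copy()
    h = 1
    while h < len(x):
        for i in range(0, len(x), h * 2):
            for j in range(i, i + h):
                a, b = x[j], x[j + h]
                x[j], x[j + h] = a + b, a - b
        h *= 2
    return x

deep = np.random.rand(40, 256)           # toy: 40 scans x 256 deep features
reduced = pca_reduce(deep, 16)           # per-dataset variance-based reduction
spectral = fwht(deep[0])                 # per-sample spectral representation
print(reduced.shape, spectral.shape)
```

Note the unnormalized WHT is its own inverse up to a factor of the length, which makes it a cheap, lossless change of basis before truncation.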
Affiliation(s)
- Omneya Attallah
- Department of Electronics and Communications Engineering, College of Engineering and Technology, Arab Academy for Science, Technology and Maritime Transport, Alexandria 1029, Egypt
- Correspondence:
- Muhammet Fatih Aslan
- Department of Electrical and Electronics Engineering, Karamanoglu Mehmetbey University, 70100 Karaman, Turkey
- Kadir Sabanci
- Department of Electrical and Electronics Engineering, Karamanoglu Mehmetbey University, 70100 Karaman, Turkey
18
Attallah O, Samir A. A wavelet-based deep learning pipeline for efficient COVID-19 diagnosis via CT slices. Appl Soft Comput 2022; 128:109401. [PMID: 35919069 PMCID: PMC9335861 DOI: 10.1016/j.asoc.2022.109401] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/07/2022] [Revised: 05/20/2022] [Accepted: 07/25/2022] [Indexed: 12/30/2022]
Abstract
The quick diagnosis of the novel coronavirus (COVID-19) disease is vital to prevent its propagation and improve therapeutic outcomes. Computed tomography (CT) is believed to be an effective tool for diagnosing COVID-19; however, a CT scan contains hundreds of slices that are complex to analyze, which could delay diagnosis. Artificial intelligence (AI), especially deep learning (DL), could facilitate and speed up COVID-19 diagnosis from such scans. Several studies employed DL approaches based on 2D CT images from a single view; nevertheless, 3D multiview CT slices have demonstrated an excellent ability to enhance the efficiency of COVID-19 diagnosis. The majority of DL-based studies utilized the spatial information of the original CT images to train their models, though using spectral–temporal information could improve the detection of COVID-19. This article proposes a DL-based pipeline called CoviWavNet for the automatic diagnosis of COVID-19 from a 3D multiview dataset called OMNIAHCOV. Initially, CoviWavNet analyzes the CT slices using multilevel discrete wavelet decomposition (DWT) and then uses the heatmaps of the approximation levels to train three ResNet CNN models, which use the spectral–temporal information of such images to perform classification. Subsequently, it investigates whether combining spatial information with spectral–temporal information could improve the diagnostic accuracy of COVID-19. For this purpose, it extracts deep spectral–temporal features from these ResNets using transfer learning and integrates them with deep spatial features extracted from the same ResNets trained with the original CT slices. It then utilizes a feature selection step to reduce the dimension of the integrated features and uses them as inputs to three support vector machine (SVM) classifiers. To further validate the performance of CoviWavNet, a publicly available benchmark dataset called SARS-COV-2-CT-Scan is employed. The results demonstrate that using the spectral–temporal information of the DWT heatmap images to train the ResNets is superior to utilizing the spatial information of the original CT images. Furthermore, integrating deep spectral–temporal features with deep spatial features enhanced the classification accuracy of the three SVM classifiers, reaching final accuracies of 99.33% and 99.7% on the OMNIAHCOV and SARS-COV-2-CT-Scan datasets, respectively. These accuracies verify the outstanding performance of CoviWavNet compared with related studies; thus, CoviWavNet can help radiologists reach rapid and accurate COVID-19 diagnoses.
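The multilevel wavelet-approximation idea can be sketched with a plain Haar averaging filter as the approximation step (an assumption for illustration; the abstract does not state the mother wavelet or level count):

```python
import numpy as np

def haar2d_approx(img):
    # One level of 2D Haar decomposition, keeping only the approximation
    # (low-low) subband: a half-resolution, smoothed version of the image.
    img = img[: img.shape[0] // 2 * 2, : img.shape[1] // 2 * 2]
    cols = (img[:, ::2] + img[:, 1::2]) / 2.0    # low-pass along columns
    return (cols[::2] + cols[1::2]) / 2.0        # low-pass along rows

def multilevel_approx(img, levels=3):
    # Repeated approximation, one output per decomposition level, as would
    # feed per-level heatmap inputs to the CNNs.
    out = []
    for _ in range(levels):
        img = haar2d_approx(img)
        out.append(img)
    return out

ct_slice = np.random.rand(128, 128)      # stands in for one CT slice
approxes = multilevel_approx(ct_slice)
for a in approxes:
    print(a.shape)
```

Each level trades spatial resolution for a coarser spectral summary, which is the "spectral–temporal" view the pipeline trains on.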
Affiliation(s)
- Omneya Attallah
- Department of Electronics and Communications Engineering, College of Engineering and Technology, Arab Academy for Science, Technology and Maritime Transport, Alexandria 1029, Egypt
- Ahmed Samir
- Department of Radiodiagnosis, Faculty of Medicine, University of Alexandria, Egypt
19
Trends in Neonatal Ophthalmic Screening Methods. Diagnostics (Basel) 2022; 12:diagnostics12051251. [PMID: 35626406 PMCID: PMC9140133 DOI: 10.3390/diagnostics12051251] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/24/2022] [Revised: 05/12/2022] [Accepted: 05/17/2022] [Indexed: 11/30/2022] Open
Abstract
Neonatal ophthalmic screening should lead to early diagnosis of ocular abnormalities in order to reduce long-term visual impairment in selected diseases. If a treatable pathology is diagnosed within a few days of birth, adequate therapy may be indicated to create the best possible conditions for the further development of visual functions. Traditional neonatal ophthalmic screening uses the red reflex test (RRT), which tests the transmittance of light through the optical media towards the retina and the general disposition of the central part of the retina. However, RRT has weaknesses, especially for posterior segment affections. Wide-field digital imaging techniques have shown promising results in detecting anterior and posterior segment pathologies. Particular attention should be paid to telemedicine and artificial intelligence, which can improve the specificity and sensitivity of neonatal eye screening; both are already highly advanced in the diagnosis and monitoring of retinopathy of prematurity.
20
Attallah O. An Intelligent ECG-Based Tool for Diagnosing COVID-19 via Ensemble Deep Learning Techniques. BIOSENSORS 2022; 12:bios12050299. [PMID: 35624600 PMCID: PMC9138764 DOI: 10.3390/bios12050299] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/08/2022] [Revised: 04/06/2022] [Accepted: 04/24/2022] [Indexed: 06/01/2023]
Abstract
Diagnosing COVID-19 accurately and rapidly is vital to control its quick spread, lessen lockdown restrictions, and decrease the workload on healthcare structures. The present tools for detecting COVID-19 have numerous shortcomings; therefore, novel diagnostic tools should be examined to enhance diagnostic accuracy and avoid these limitations. Earlier studies indicated multiple patterns of cardiovascular alteration in COVID-19 cases, which motivated the use of electrocardiogram (ECG) data as a tool for diagnosing the novel coronavirus. This study introduces a novel automated diagnostic tool based on ECG data to diagnose COVID-19. The tool utilizes ten deep learning (DL) models of various architectures. It obtains significant features from the last fully connected layer of each DL model and then combines them. Afterward, the tool applies a hybrid feature selection method based on the chi-square test and sequential search to select significant features. Finally, it employs several machine learning classifiers to perform two classification levels: a binary level to differentiate between normal and COVID-19 cases, and a multiclass level to discriminate COVID-19 cases from normal cases and other cardiac complications. The proposed tool reached an accuracy of 98.2% and 91.6% for the binary and multiclass levels, respectively. This performance indicates that the ECG could be used as an alternative means of diagnosing COVID-19.
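The chi-square half of the hybrid selection step can be sketched as below, using the sklearn-style statistic on non-negative features (observed per-class feature sums versus expected under independence). The sequential-search half is omitted, and the data are random toys:

```python
import numpy as np

def chi2_scores(X, y):
    # Chi-square statistic between each non-negative feature and the class
    # label; larger scores suggest stronger feature-label dependence.
    classes = np.unique(y)
    observed = np.array([X[y == c].sum(axis=0) for c in classes])
    class_prob = np.array([(y == c).mean() for c in classes])[:, None]
    expected = class_prob * X.sum(axis=0)[None, :]
    return ((observed - expected) ** 2 / expected).sum(axis=0)

X = np.abs(np.random.rand(100, 30))      # toy non-negative deep features
y = np.random.randint(0, 2, 100)         # toy binary labels
ranked = np.argsort(chi2_scores(X, y))[::-1]
print(ranked[:5])                        # indices of the highest-scoring features
```

A sequential search would then greedily grow a subset from this ranking, keeping a feature only if it improves validation accuracy.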
Affiliation(s)
- Omneya Attallah
- Department of Electronics and Communications Engineering, College of Engineering and Technology, Arab Academy for Science, Technology and Maritime Transport, Alexandria 1029, Egypt
21
Attallah O. ECG-BiCoNet: An ECG-based pipeline for COVID-19 diagnosis using Bi-Layers of deep features integration. Comput Biol Med 2022; 142:105210. [PMID: 35026574 PMCID: PMC8730786 DOI: 10.1016/j.compbiomed.2022.105210] [Citation(s) in RCA: 17] [Impact Index Per Article: 8.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/04/2021] [Revised: 01/01/2022] [Accepted: 01/01/2022] [Indexed: 12/29/2022]
Abstract
The accurate and speedy detection of COVID-19 is essential to avert the fast propagation of the virus, alleviate lockdown constraints, and diminish the burden on health organizations. Currently, the methods used to diagnose COVID-19 have several limitations; thus, new techniques need to be investigated to improve diagnosis and overcome them. Taking into consideration the great benefits of electrocardiogram (ECG) applications, this paper proposes a new pipeline called ECG-BiCoNet to investigate the potential of using ECG data for diagnosing COVID-19. ECG-BiCoNet employs five deep learning models of distinct structural design and extracts two levels of features from two different layers of each model. Features mined from higher layers are fused using the discrete wavelet transform and then integrated with lower-layer features. Afterward, a feature selection approach is utilized. Finally, an ensemble classification system is built to merge the predictions of three machine learning classifiers. ECG-BiCoNet accomplishes two classification categories, binary and multiclass, with promising accuracies of 98.8% and 91.73%, respectively. These results verify that ECG data may be used to diagnose COVID-19, which can help clinicians automate the diagnosis and overcome the limitations of manual diagnosis.
Affiliation(s)
- Omneya Attallah
- Department of Electronics and Communications Engineering, College of Engineering and Technology, Arab Academy for Science, Technology and Maritime Transport, Alexandria, 1029, Egypt.
22
Attallah O. A computer-aided diagnostic framework for coronavirus diagnosis using texture-based radiomics images. Digit Health 2022; 8:20552076221092543. [PMID: 35433024 PMCID: PMC9005822 DOI: 10.1177/20552076221092543] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/16/2022] [Accepted: 03/21/2022] [Indexed: 12/14/2022] Open
Abstract
The accurate and rapid detection of the novel coronavirus infection is very important to prevent the fast spread of the disease and thereby reduce the negative effects that have influenced many industrial sectors, especially healthcare. Artificial intelligence techniques, in particular deep learning, could help in the fast and precise diagnosis of coronavirus from computed tomography (CT) images. Most artificial intelligence-based studies used the original CT images to build their models; however, the integration of texture-based radiomics images with deep learning techniques could improve the diagnostic accuracy of the novel coronavirus disease. This study proposes a computer-assisted diagnostic framework based on multiple deep learning and texture-based radiomics approaches. It first trains three Residual Network (ResNet) deep learning models with two types of texture-based radiomics images, the discrete wavelet transform (DWT) and the gray-level co-occurrence matrix (GLCM), instead of the original CT images. Then, it fuses the texture-based radiomics deep feature sets extracted from each model using the discrete cosine transform (DCT). Thereafter, it further combines the fused texture-based radiomics deep features obtained from the three convolutional neural networks. Finally, three support vector machine classifiers are utilized for the classification procedure. The proposed method is validated experimentally on the benchmark severe acute respiratory syndrome coronavirus 2 CT image dataset. The accuracies attained indicate that using texture-based radiomics images (GLCM, DWT) to train ResNet-18 (83.22%, 74.9%), ResNet-50 (80.94%, 78.39%), and ResNet-101 (80.54%, 77.99%) is better than using the original CT images (70.34%, 76.51%, and 73.42%, respectively).
Furthermore, the sensitivity, specificity, accuracy, precision, and F1-score achieved using the proposed computer-assisted diagnostic framework after the two fusion steps are 99.47%, 99.72%, 99.60%, 99.72%, and 99.60%, which proves that combining the texture-based radiomics deep features obtained from the three ResNets boosts performance. Thus, fusing multiple texture-based radiomics deep features mined from several convolutional neural networks is better than using only one type of radiomics approach and a single convolutional neural network. The performance of the proposed framework allows it to be used by radiologists to attain fast and accurate diagnoses.
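The gray-level co-occurrence matrix, one of the two texture-based radiomics representations above, can be computed as in this sketch; the quantization level (8 gray levels) and the pixel offset are illustrative assumptions:

```python
import numpy as np

def glcm(img, levels=8, dx=1, dy=0):
    # Gray-level co-occurrence matrix: counts of quantized gray-level pairs
    # at offset (dx, dy), normalized into a joint probability table.
    q = np.minimum((img * levels).astype(int), levels - 1)
    m = np.zeros((levels, levels))
    h, w = q.shape
    for i in range(max(0, -dy), h - max(0, dy)):
        for j in range(max(0, -dx), w - max(0, dx)):
            m[q[i, j], q[i + dy, j + dx]] += 1
    return m / m.sum()

ct = np.random.rand(32, 32)              # stands in for a CT slice
p = glcm(ct)
# A classic Haralick texture statistic derived from the matrix:
contrast = sum(p[i, j] * (i - j) ** 2 for i in range(8) for j in range(8))
print(p.shape, round(contrast, 3))
```

Rendering such matrices (or DWT subbands) as images is what lets standard CNNs consume texture information directly.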
Affiliation(s)
- Omneya Attallah
- Department of Electronics and Communications Engineering, College of Engineering and Technology, Arab Academy for Science, Technology and Maritime Transport, Alexandria, Egypt
23
Attallah O. A deep learning-based diagnostic tool for identifying various diseases via facial images. Digit Health 2022; 8:20552076221124432. [PMID: 36105626 PMCID: PMC9465585 DOI: 10.1177/20552076221124432] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/12/2022] [Accepted: 08/18/2022] [Indexed: 11/16/2022] Open
Abstract
With the current health crisis caused by the COVID-19 pandemic, patients have become more anxious about infection, so they prefer not to have direct contact with doctors or clinicians. Lately, medical scientists have confirmed that several diseases exhibit corresponding specific features on the face. Recent studies have indicated that computer-aided facial diagnosis can be a promising tool for the automatic diagnosis and screening of diseases from facial images. However, few of these studies used deep learning (DL) techniques. Most focused on detecting a single disease using handcrafted feature extraction methods and conventional machine learning techniques based on individual classifiers trained on small, private datasets of images taken in a controlled environment. This study proposes a novel computer-aided facial diagnosis system called FaceDisNet that uses a new public dataset based on images taken in an unconstrained environment, which could be employed for forthcoming comparisons, and detects both single and multiple diseases. FaceDisNet is constructed by integrating several spatial deep features from convolutional neural networks of various architectures. It does not depend only on spatial features but also extracts spatial-spectral features, and it searches for the fused spatial-spectral feature set that has the greatest impact on classification. It employs two feature selection techniques to reduce the large dimension of features resulting from feature fusion. Finally, it builds an ensemble classifier based on stacking to perform classification. FaceDisNet achieved maximum accuracies of 98.57% and 98% after the ensemble classification and feature selection steps for the binary and multiclass classification categories, respectively. These results prove that FaceDisNet is a reliable tool that could be employed to avoid the difficulties and complications of manual diagnosis, and it can help physicians achieve accurate diagnoses without the need for physical contact with patients.
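The stacking-ensemble idea can be sketched in plain NumPy: base learners produce class probabilities, and those probabilities become the input features for a meta-learner. This is a simplified illustration (it skips out-of-fold prediction and uses toy data and toy classifiers), not FaceDisNet's actual ensemble:

```python
import numpy as np

rng = np.random.default_rng(0)

def centroid_clf(Xtr, ytr):
    # Base learner 1: nearest-centroid; maps the distance margin to P(class 1).
    c0, c1 = Xtr[ytr == 0].mean(0), Xtr[ytr == 1].mean(0)
    def prob(X):
        d0 = np.linalg.norm(X - c0, axis=1)
        d1 = np.linalg.norm(X - c1, axis=1)
        return 1.0 / (1.0 + np.exp(d1 - d0))
    return prob

def logreg(Xtr, ytr, lr=0.5, epochs=300):
    # Base learner 2 (and meta-learner): logistic regression via gradient descent.
    A = np.hstack([Xtr, np.ones((len(Xtr), 1))])   # append bias column
    w = np.zeros(A.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-A @ w))
        w -= lr * A.T @ (p - ytr) / len(ytr)
    return lambda X: 1.0 / (1.0 + np.exp(-(np.hstack([X, np.ones((len(X), 1))]) @ w)))

# Toy fused-feature matrix standing in for the selected deep features.
X = rng.normal(size=(300, 16))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
Xtr, ytr, Xte, yte = X[:200], y[:200], X[200:], y[200:]

# Stacking: base learners' probabilities become the meta-learner's features.
bases = [centroid_clf(Xtr, ytr), logreg(Xtr, ytr)]
meta = logreg(np.column_stack([b(Xtr) for b in bases]), ytr)
pred = (meta(np.column_stack([b(Xte) for b in bases])) > 0.5).astype(int)
acc = (pred == yte).mean()
print(round(acc, 2))
```

A production stacker would train the meta-learner on out-of-fold base predictions to avoid leakage; the wiring, however, is exactly this two-stage shape.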
Affiliation(s)
- Omneya Attallah
- Department of Electronics and Communications Engineering, College of Engineering and Technology, Arab Academy for Science, Technology and Maritime Transport, Alexandria, Egypt