1.
Kamalakannan N, Macharla SR, Kanimozhi M, Sudhakar MS. Exponential Pixelating Integral transform with dual fractal features for enhanced chest X-ray abnormality detection. Comput Biol Med 2024;182:109093. PMID: 39232407. DOI: 10.1016/j.compbiomed.2024.109093.
Abstract
The heightened prevalence of respiratory disorders, exacerbated by a significant upswing in fatalities due to the novel coronavirus, underscores the critical need for early detection and timely intervention, which has the potential to safeguard numerous lives. Chest radiography stands out as an essential and economically viable medical imaging approach for diagnosing and assessing the severity of diverse respiratory disorders. However, their detection in chest X-rays is a cumbersome task even for well-trained radiologists owing to low contrast, overlapping tissue structures, subjective variability, and the presence of noise. To address these issues, a novel analytical model termed the Exponential Pixelating Integral is introduced in this work for the automatic detection of infections in chest X-rays. Initially, the Exponential Pixelating Integral enhances pixel intensities to overcome low-contrast issues; the enhanced images are then polar-transformed and represented using the locally invariant Mandelbrot and Julia fractal geometries for effective distinction of structural features. The collated features, labeled Exponential Pixelating Integral with dually characterized fractal features, are then classified by non-parametric multivariate adaptive regression splines, which establish an ensemble model between each pair of classes for effective diagnosis of diverse diseases. Rigorous analysis of the proposed classification framework on large benchmark medical datasets showcases its superiority over its peers, registering higher classification accuracy and F1 scores ranging from 98.46% to 99.45% and from 96.53% to 98.10%, respectively, making it a precise and interpretable automated system for diagnosing respiratory disorders.
Affiliation(s)
- M Kanimozhi
- School of Electrical & Electronics, Sathyabama Institute of Science and Technology, Chennai, Tamil Nadu, India
- M S Sudhakar
- School of Electronics Engineering, Vellore Institute of Technology, Vellore, Tamil Nadu, India
2.
Yang B, Gong K, Liu H, Li Q, Zhu W. Anatomically Guided PET Image Reconstruction Using Conditional Weakly-Supervised Multi-Task Learning Integrating Self-Attention. IEEE Trans Med Imaging 2024;43:2098-2112. PMID: 38241121. DOI: 10.1109/tmi.2024.3356189.
Abstract
To address the lack of high-quality training labels in positron emission tomography (PET) imaging, weakly-supervised reconstruction methods that generate network-based mappings between prior images and noisy targets have been developed. However, the learned model has an intrinsic variance proportional to the average variance of the target image. To suppress noise and improve the accuracy and generalizability of the learned model, we propose a conditional weakly-supervised multi-task learning (MTL) strategy, in which an auxiliary task is introduced to serve as an anatomical regularizer for the main PET reconstruction task. In the proposed MTL approach, we devise a novel multi-channel self-attention (MCSA) module that helps learn an optimal combination of shared and task-specific features by capturing both local and global channel-spatial dependencies. The proposed reconstruction method was evaluated on NEMA phantom PET datasets acquired at different positions in a PET/CT scanner and on 26 clinical whole-body PET datasets. The phantom results demonstrate that our method outperforms state-of-the-art learning-free and weakly-supervised approaches, obtaining the best noise/contrast tradeoff with a significant noise reduction of approximately 50.0% relative to the maximum likelihood (ML) reconstruction. The patient study results demonstrate that our method achieves the largest noise reductions of 67.3% and 35.5% in the liver and lung, respectively, as well as consistently small biases in 8 tumors with various volumes and intensities. In addition, network visualization reveals that adding the auxiliary task introduces more anatomical information into PET reconstruction than adding only the anatomical loss, and the developed MCSA can abstract features and retain PET image details.
3.
Bennour A, Ben Aoun N, Khalaf OI, Ghabban F, Wong WK, Algburi S. Contribution to pulmonary diseases diagnostic from X-ray images using innovative deep learning models. Heliyon 2024;10:e30308. PMID: 38707425. PMCID: PMC11068804. DOI: 10.1016/j.heliyon.2024.e30308.
Abstract
Pulmonary disease identification and characterization are among the most intriguing research topics of recent years since they require an accurate and prompt diagnosis. Although pulmonary radiography has helped in lung disease diagnosis, the interpretation of radiographic images has always been a major concern for doctors and radiologists seeking to reduce diagnosis errors. Owing to their success in image classification and segmentation tasks, cutting-edge artificial intelligence techniques such as machine learning (ML) and deep learning (DL) are widely encouraged for diagnosing and identifying lung disorders from medical images, particularly radiographic ones. To this end, researchers are striving to build systems based on these techniques, in particular deep learning ones. In this paper, we propose three deep-learning models trained to identify the presence of certain lung diseases using thoracic radiography. The first model, named "CovCXR-Net", identifies COVID-19 (two cases: COVID-19 or normal). The second model, named "MDCXR3-Net", identifies COVID-19 and pneumonia (three cases: COVID-19, pneumonia, or normal), and the last model, named "MDCXR4-Net", identifies COVID-19, pneumonia, and pulmonary opacity (four cases: COVID-19, pneumonia, pulmonary opacity, or normal). These models have proven their superiority over state-of-the-art models, reaching accuracies of 99.09%, 97.74%, and 90.37%, respectively, on three benchmarks.
Affiliation(s)
- Akram Bennour
- LAMIS Laboratory, Echahid Cheikh Larbi Tebessi University, Tebessa, Algeria
- Najib Ben Aoun
- College of Computer Science and Information Technology, Al-Baha University, Al Baha, Saudi Arabia
- REGIM-Lab: Research Groups in Intelligent Machines, National School of Engineers of Sfax (ENIS), University of Sfax, Tunisia
- Osamah Ibrahim Khalaf
- Department of Solar, Al-Nahrain Research Center for Renewable Energy, Al-Nahrain University, Jadriya, Baghdad, Iraq
- Fahad Ghabban
- College of Computer Science and Engineering, Taibah University, Medina, Saudi Arabia
- Sameer Algburi
- Al-Kitab University, College of Engineering Techniques, Kirkuk, Iraq
4.
Chen S, Ren S, Wang G, Huang M, Xue C. Interpretable CNN-Multilevel Attention Transformer for Rapid Recognition of Pneumonia From Chest X-Ray Images. IEEE J Biomed Health Inform 2024;28:753-764. PMID: 37027681. DOI: 10.1109/jbhi.2023.3247949.
Abstract
Chest imaging plays an essential role in diagnosing and predicting the course of COVID-19 patients with evidence of worsening respiratory status. Many deep learning-based approaches for pneumonia recognition have been developed to enable computer-aided diagnosis. However, their long training and inference times make them inflexible, and their lack of interpretability reduces their credibility in clinical medical practice. This paper aims to develop an interpretable pneumonia recognition framework that can understand the complex relationship between lung features and related diseases in chest X-ray (CXR) images to provide high-speed analytics support for medical practice. To reduce the computational complexity and accelerate the recognition process, a novel multi-level self-attention mechanism within the Transformer has been proposed to accelerate convergence and emphasize task-related feature regions. Moreover, a practical CXR image data augmentation has been adopted to address the scarcity of medical image data and boost the model's performance. The effectiveness of the proposed method has been demonstrated on the classic COVID-19 recognition task using a widespread pneumonia CXR image dataset. In addition, abundant ablation experiments validate the effectiveness and necessity of all components of the proposed method.
5.
Morís DI, de Moura J, Aslani S, Jacob J, Novo J, Ortega M. Multi-task localization of the hemidiaphragms and lung segmentation in portable chest X-ray images of COVID-19 patients. Digit Health 2024;10:20552076231225853. PMID: 38313365. PMCID: PMC10836150. DOI: 10.1177/20552076231225853.
Abstract
Background: COVID-19 can cause long-term symptoms in patients after they overcome the disease. Given that the disease mainly damages the respiratory system, these symptoms are often related to breathing problems that can be caused by an affected diaphragm. Diaphragmatic function can be assessed with imaging modalities such as computed tomography or chest X-ray. However, this assessment must be performed by expert clinicians through manual visual inspection. Moreover, during the pandemic, clinicians were asked to prioritize the use of portable devices to prevent the risk of cross-contamination, even though the captures of these devices are of lower quality. Objectives: Automatic quantification of diaphragmatic function can determine the damage caused by COVID-19 in each patient and assess their evolution during the recovery period, a task that can also be complemented with lung segmentation. Methods: We propose a novel, fully automatic multi-task methodology that simultaneously localizes the position of the hemidiaphragms and segments the lung boundaries with a convolutional architecture, using portable chest X-ray images of COVID-19 patients. To that end, the hemidiaphragm landmarks are located by adapting the heatmap-regression paradigm. Results: The methodology is exhaustively validated with four analyses, achieving an accuracy of 82.31% ± 2.78% when localizing the hemidiaphragm landmarks and a Dice score of 0.9688 ± 0.0012 in lung segmentation. Conclusions: The results demonstrate that the model is able to perform both tasks simultaneously, making it a helpful tool for clinicians despite the lower quality of portable chest X-ray images.
Affiliation(s)
- Daniel I Morís
- Centro de Investigación CITIC, Universidade da Coruña, A Coruña, Spain
- Grupo VARPA, Instituto de Investigación Biomédica de A Coruña (INIBIC), Universidade da Coruña, A Coruña, Spain
- Joaquim de Moura
- Centro de Investigación CITIC, Universidade da Coruña, A Coruña, Spain
- Grupo VARPA, Instituto de Investigación Biomédica de A Coruña (INIBIC), Universidade da Coruña, A Coruña, Spain
- Shahab Aslani
- Department of Computer Science, Centre for Medical Image Computing, University College London, UK
- Joseph Jacob
- Department of Computer Science, Centre for Medical Image Computing, University College London, UK
- Satsuma Lab, Centre for Medical Image Computing, University College London, UK
- Jorge Novo
- Centro de Investigación CITIC, Universidade da Coruña, A Coruña, Spain
- Grupo VARPA, Instituto de Investigación Biomédica de A Coruña (INIBIC), Universidade da Coruña, A Coruña, Spain
- Marcos Ortega
- Centro de Investigación CITIC, Universidade da Coruña, A Coruña, Spain
- Grupo VARPA, Instituto de Investigación Biomédica de A Coruña (INIBIC), Universidade da Coruña, A Coruña, Spain
6.
Antony M, Kakileti ST, Shah R, Sahoo S, Bhattacharyya C, Manjunath G. Challenges of AI driven diagnosis of chest X-rays transmitted through smart phones: a case study in COVID-19. Sci Rep 2023;13:18102. PMID: 37872204. PMCID: PMC10593822. DOI: 10.1038/s41598-023-44653-y.
Abstract
Healthcare delivery during the initial days of the COVID-19 pandemic was badly impacted by the large number of severely infected patients, posing an unprecedented global challenge. Although the importance of chest X-rays (CXRs) in meeting this challenge is now widely recognized, speedy diagnosis of CXRs remains an outstanding problem because of the shortage of radiologists. The exponential increase in smartphone ownership globally, including in LMICs, provides an opportunity to explore AI-driven diagnostic tools fed with large volumes of CXRs transmitted through smartphones. However, to the best of our knowledge, the challenges associated with such systems have not been studied. In this paper, we show that the predictions of AI-driven models on CXR images transmitted through smartphone applications, such as WhatsApp, suffer in terms of both predictability and explainability, two key aspects of any automated medical diagnosis system. We find that several existing deep-learning-based models exhibit prediction instability: disagreement between the prediction outcome on the original image and on the transmitted image. Concomitantly, we find that the explainability of the models deteriorates substantially: prediction on the transmitted CXR is often driven by features outside the lung region, clearly a manifestation of spurious correlations. Our study reveals significant compression of high-resolution CXR images, sometimes as high as 95%, which could be the reason behind these two problems. Apart from demonstrating these problems, our main contribution is to show that multi-task learning (MTL) can serve as an effective bulwark against them. MTL models are substantially more robust, by 40% over existing baselines, and their explainability, measured by a saliency score dependent on out-of-lung features, improves by 35%. The study is conducted on the WaCXR dataset, a curated set of 6562 image pairs of original uncompressed and WhatsApp-compressed CXR images. As there are no previous datasets for studying such problems, we open-source this data along with all implementations.
Affiliation(s)
- Rachit Shah
- Indian Institute of Science, Bangalore, India
7.
Gopatoti A, Vijayalakshmi P. MTMC-AUR2CNet: Multi-textural multi-class attention recurrent residual convolutional neural network for COVID-19 classification using chest X-ray images. Biomed Signal Process Control 2023;85:104857. PMID: 36968651. PMCID: PMC10027978. DOI: 10.1016/j.bspc.2023.104857.
Abstract
Coronavirus disease (COVID-19) had infected over 603 million people in confirmed cases as of September 2022, and its rapid spread has raised concerns worldwide; more than 6.4 million fatalities among confirmed patients have been reported. According to reports, the COVID-19 virus causes lung damage and mutates rapidly before the patient receives any diagnosis-specific medicine. Daily increasing COVID-19 cases and the limited number of diagnostic test kits encourage the use of deep learning (DL) models to assist healthcare practitioners using chest X-ray (CXR) images, a low-radiation radiography tool available in hospitals for diagnosing COVID-19 and combating its spread. We propose a Multi-Textural Multi-Class (MTMC) UNet-based Recurrent Residual Convolutional Neural Network (MTMC-UR2CNet) and a variant with an attention mechanism (MTMC-AUR2CNet) for multi-class lung lobe segmentation of CXR images. The lung lobe segmentation outputs of MTMC-UR2CNet and MTMC-AUR2CNet are mapped individually onto their input CXRs to generate the region of interest (ROI). Multi-textural features are extracted from the ROI of each proposed MTMC network, fused, and used to train a Whale Optimization Algorithm (WOA)-based DeepCNN classifier that classifies the CXR images into normal (healthy), COVID-19, viral pneumonia, and lung opacity. The experimental results show that MTMC-AUR2CNet has superior performance in multi-class lung lobe segmentation of CXR images, with an accuracy of 99.47%, followed by MTMC-UR2CNet with an accuracy of 98.39%. MTMC-AUR2CNet also improves the multi-textural multi-class classification accuracy of the WOA-based DeepCNN classifier to 97.60%, compared with MTMC-UR2CNet.
Affiliation(s)
- Anandbabu Gopatoti
- Department of Electronics and Communication Engineering, Hindusthan College of Engineering and Technology, Coimbatore, Tamil Nadu, India
- Centre for Research, Anna University, Chennai, Tamil Nadu, India
- P Vijayalakshmi
- Department of Electronics and Communication Engineering, Hindusthan College of Engineering and Technology, Coimbatore, Tamil Nadu, India
8.
Chetoui M, Akhloufi MA, Bouattane EM, Abdulnour J, Roux S, Bernard CD. Explainable COVID-19 Detection Based on Chest X-rays Using an End-to-End RegNet Architecture. Viruses 2023;15:1327. PMID: 37376626. DOI: 10.3390/v15061327.
Abstract
COVID-19, which is caused by the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), is one of the worst pandemics in recent history. The identification of patients suspected to be infected with COVID-19 is becoming crucial to reduce its spread. We aimed to validate and test a deep learning model to detect COVID-19 based on chest X-rays. The recent deep convolutional neural network (CNN) RegNetX032 was adapted for detecting COVID-19 from chest X-ray (CXR) images, using reverse transcription polymerase chain reaction (RT-PCR) as a reference. The model was customized and trained on five datasets containing more than 15,000 CXR images (including 4148 COVID-19-positive cases) and then tested on 321 images (150 COVID-19-positive) from Montfort Hospital. Twenty percent of the data from the five datasets were used as validation data for hyperparameter optimization. Each CXR image was processed by the model to detect COVID-19. Multiple binary classifications were proposed, such as COVID-19 vs. normal, COVID-19 + pneumonia vs. normal, and pneumonia vs. normal. The performance results were based on the area under the curve (AUC), sensitivity, and specificity. In addition, an explainability model was developed that demonstrated the high performance and high generalization degree of the proposed model in detecting and highlighting the signs of the disease. The fine-tuned RegNetX032 model achieved an overall accuracy score of 96.0%, with an AUC score of 99.1%: a superior sensitivity of 98.0% in detecting signs of COVID-19 in patients' CXR images and a specificity of 93.0% in detecting healthy CXR images. A second scenario compared COVID-19 + pneumonia vs. normal (healthy X-ray) patients; here the model achieved an overall AUC score of 99.1%, with a sensitivity of 96.0% and a specificity of 93.0%, on the Montfort dataset. For the validation set, the model achieved an average accuracy of 98.6%, an AUC score of 98.0%, a sensitivity of 98.0%, and a specificity of 96.0% for detecting COVID-19 patients vs. healthy patients; in the second scenario (COVID-19 + pneumonia vs. normal), it achieved an overall AUC score of 98.8%, with a sensitivity of 97.0% and a specificity of 96.0%. This robust deep learning model demonstrated excellent performance in detecting COVID-19 from chest X-rays. It could be used to automate the detection of COVID-19 and improve decision making for patient triage and isolation in hospital settings, and could also serve as a complementary aid for radiologists or clinicians in differential diagnosis.
Affiliation(s)
- Mohamed Chetoui
- Perception, Robotics, and Intelligent Machines (PRIME), Department of Computer Science, Université de Moncton, Moncton, NB E1A 3E9, Canada
- Moulay A Akhloufi
- Perception, Robotics, and Intelligent Machines (PRIME), Department of Computer Science, Université de Moncton, Moncton, NB E1A 3E9, Canada
- El Mostafa Bouattane
- Montfort Academic Hospital, Institut du Savoir Montfort, Ottawa, ON 61350, Canada
- Joseph Abdulnour
- Montfort Academic Hospital, Institut du Savoir Montfort, Ottawa, ON 61350, Canada
- Stephane Roux
- Montfort Academic Hospital, Institut du Savoir Montfort, Ottawa, ON 61350, Canada
9.
Automated prediction of COVID-19 severity upon admission by chest X-ray images and clinical metadata aiming at accuracy and explainability. Sci Rep 2023;13:4226. PMID: 36918593. PMCID: PMC10012307. DOI: 10.1038/s41598-023-30505-2.
Abstract
In the past few years, COVID-19 has posed a huge threat to healthcare systems around the world. One of the first waves of the pandemic hit Northern Italy severely, resulting in high casualties and the near breakdown of primary care. In response, the Covid CXR Hackathon ("Artificial Intelligence for Covid-19 prognosis: aiming at accuracy and explainability") was launched at the beginning of February 2022, releasing a new imaging dataset with additional clinical metadata for each accompanying chest X-ray (CXR). In this article we summarize our techniques for correctly classifying chest X-ray images collected upon admission by severity of COVID-19 outcome. In addition to the X-ray imagery, clinical metadata was provided, and the challenge also aimed at creating an explainable model. We created both a best-performing model and an explainable model that attempts to map clinical metadata to image features while predicting the prognosis. We also performed many ablation studies to identify the crucial parts of the models and the predictive power of each feature in the datasets. We conclude that CXRs at admission do not significantly improve the predictive power of the metadata by themselves and contain mostly information that is also present in the blood samples and other clinical factors collected at admission.
10.
Esmi N, Golshan Y, Asadi S, Shahbahrami A, Gaydadjiev G. A fuzzy fine-tuned model for COVID-19 diagnosis. Comput Biol Med 2023;153:106483. PMID: 36621192. PMCID: PMC9811914. DOI: 10.1016/j.compbiomed.2022.106483.
Abstract
The COVID-19 pandemic spread rapidly worldwide and caused extensive human death and financial losses. Finding accurate, accessible, and inexpensive methods for diagnosing the disease has therefore challenged researchers. To automate the process of diagnosing COVID-19 from images, several strategies based on deep learning, such as transfer learning and ensemble learning, have been presented. However, these techniques cannot deal with noise and its propagation through different layers. In addition, many of the datasets in use are imbalanced, and most techniques perform only binary classification, distinguishing COVID-19 from normal cases. To address these issues, we use the blind/referenceless image spatial quality evaluator to filter out inappropriate data, and we merge two datasets to increase the volume and diversity of the data. This combination allows multi-class classification among three states: normal, COVID-19, and pneumonia, including its bacterial and viral types. A weighted multi-class cross-entropy is used to reduce the effect of data imbalance, and a fuzzy fine-tuned Xception model is applied to reduce noise propagation through the layers. Quantitative analysis shows that our proposed model achieves 96.60% accuracy on the merged test set, more accurate than the previously mentioned state-of-the-art methods.
Affiliation(s)
- Nima Esmi
- Faculty of Science and Engineering, University of Groningen, Netherlands.
- Yasaman Golshan
- Department of Computer Engineering, Faculty of Engineering, University of Guilan, Rasht, Iran.
- Sara Asadi
- Department of Computer Engineering, Faculty of Engineering, University of Guilan, Rasht, Iran.
- Asadollah Shahbahrami
- Faculty of Science and Engineering, University of Groningen, Netherlands; Department of Computer Engineering, Faculty of Engineering, University of Guilan, Rasht, Iran.
- Georgi Gaydadjiev
- Faculty of Science and Engineering, University of Groningen, Netherlands.
11.
Balan E, Saraniya O. Novel neural network architecture using sharpened cosine similarity for robust classification of Covid-19, pneumonia and tuberculosis diseases from X-rays. J Intell Fuzzy Syst 2023. DOI: 10.3233/jifs-222840.
Abstract
COVID-19 is a rapidly proliferating transmissible virus that substantially impacts the world population, creating a growing need for quick testing, diagnosis, and treatment. Early COVID-19 detection is crucial to treat infected individuals, stop the spread of the disease, and cure severe pneumonia. Along with COVID-19, various pneumonia etiologies, including tuberculosis, pose additional difficulties for the medical system. In this study, COVID-19, pneumonia, tuberculosis, and other specific diseases are categorized using a Sharpened Cosine Similarity Network (SCS-Net), which replaces the dot products in neural networks. To benchmark the SCS-Net, the model's performance is evaluated on binary-class (COVID-19 and normal) and four-class (tuberculosis, COVID-19, pneumonia, and normal) X-ray images. The proposed SCS-Net for distinguishing various lung disorders has been successfully validated. In multiclass classification, the proposed SCS-Net achieved an accuracy of 94.05% and a Cohen's kappa score of 90.70%; in binary classification, it achieved an accuracy of 96.67% and a Cohen's kappa score of 93.70%. According to our investigation, SCS in deep neural networks significantly lowers the test error with lower divergence, significantly increases classification accuracy, and speeds up training.
Affiliation(s)
- Elakkiya Balan
- Department of Electronics and Communication Engineering, Sri Venkateswara College of Engineering, Chennai, Tamil Nadu, India
- O. Saraniya
- Department of Electronics and Communication Engineering, Government College of Technology, Coimbatore, Tamil Nadu, India
12.
Akhter Y, Singh R, Vatsa M. AI-based radiodiagnosis using chest X-rays: A review. Front Big Data 2023;6:1120989. PMID: 37091458. PMCID: PMC10116151. DOI: 10.3389/fdata.2023.1120989.
Abstract
The chest radiograph or chest X-ray (CXR) is a common, fast, non-invasive, relatively cheap radiological examination in medical science. CXRs can aid in diagnosing many lung ailments such as pneumonia, tuberculosis, pneumoconiosis, COVID-19, and lung cancer. Unmatched by other radiological examinations, 2 billion CXRs are performed worldwide every year. However, the workforce available to handle this workload in hospitals is limited, particularly in developing and low-income nations. Recent advances in AI, particularly in computer vision, have drawn attention to solving challenging medical image analysis problems. Healthcare is one of the areas where AI/ML-based assistive screening and diagnostic aids can play a crucial part in social welfare. However, the field faces multiple challenges, such as small sample spaces, data privacy, poor-quality samples, adversarial attacks and, most importantly, model interpretability for reliable machine intelligence. This paper provides a structured review of CXR-based analysis for different tasks and lung diseases and, in particular, the challenges faced by AI/ML-based systems for diagnosis. Further, we provide an overview of existing datasets, evaluation metrics for different tasks, and patents issued. We also present key challenges and open problems in this research domain.
13.
Chen H, Jiang Y, Ko H, Loew M. A teacher-student framework with Fourier Transform augmentation for COVID-19 infection segmentation in CT images. Biomed Signal Process Control 2023;79:104250. PMID: 36188130. PMCID: PMC9510070. DOI: 10.1016/j.bspc.2022.104250.
Abstract
Automatic segmentation of infected regions in computed tomography (CT) images is necessary for the initial diagnosis of COVID-19. Deep-learning-based methods have the potential to automate this task but require a large amount of data with pixel-level annotations. Training a deep network with annotated lung cancer CT images, which are easier to obtain, can alleviate this problem to some extent. However, this approach may suffer from reduced performance when applied to unseen COVID-19 images during the testing phase, caused by differences in image intensity and object region distribution between the training and test sets. In this paper, we propose a novel unsupervised method for COVID-19 infection segmentation that aims to learn domain-invariant features from lung cancer and COVID-19 images to improve the generalization ability of the segmentation network on COVID-19 CT images. First, to address the intensity difference, we propose a novel data augmentation module based on the Fourier Transform, which transfers the annotated lung cancer data into the style of COVID-19 images. Second, to reduce the distribution difference, we design a teacher-student network to learn rotation-invariant features for segmentation. The experiments demonstrate that, even without access to the annotations of the COVID-19 CT images during the training phase, the proposed network achieves state-of-the-art segmentation performance on COVID-19 infection.
Affiliation(s)
- Han Chen
- School of Electrical Engineering, Korea University, Seoul, South Korea
- Yifan Jiang
- School of Electrical Engineering, Korea University, Seoul, South Korea
- Hanseok Ko
- School of Electrical Engineering, Korea University, Seoul, South Korea
- Murray Loew
- Biomedical Engineering, George Washington University, Washington D.C., USA
14
Ding W, Abdel-Basset M, Hawash H, Ali AM. Explainability of artificial intelligence methods, applications and challenges: A comprehensive survey. Inf Sci (N Y) 2022. [DOI: 10.1016/j.ins.2022.10.013] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
15
Sharma A, Mishra PK. Covid-MANet: Multi-task attention network for explainable diagnosis and severity assessment of COVID-19 from CXR images. PATTERN RECOGNITION 2022; 131:108826. [PMID: 35698723 PMCID: PMC9170279 DOI: 10.1016/j.patcog.2022.108826] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/21/2021] [Revised: 04/24/2022] [Accepted: 06/02/2022] [Indexed: 05/17/2023]
Abstract
The devastating outbreak of Coronavirus Disease (COVID-19) cases in early 2020 led the world into a health crisis. The exponential reproduction rate of COVID-19 can only be reduced by correct early diagnosis of infection cases. Initial research findings reported that radiological examination using the CT and CXR modalities successfully reduced the false negatives of the RT-PCR test. This study aims to develop an explainable diagnosis system for the detection and infection-region quantification of COVID-19. Existing studies explored deep learning approaches with high performance measures but lacked generalization and interpretability for COVID-19 diagnosis. We address these issues with Covid-MANet, an automated end-to-end multi-task attention network that works on 5 classes in three stages for COVID-19 infection screening. The first stage of the Covid-MANet network localizes the model's attention to the relevant lung region for disease recognition. The second stage differentiates COVID-19 cases from bacterial pneumonia, viral pneumonia, normal, and tuberculosis cases. To improve interpretation and explainability, three experiments were conducted to find the most coherent and appropriate classification approach, and the multi-scale attention model MA-DenseNet201 is proposed for the classification of COVID-19 cases. The final stage quantifies the proportion of infection and the severity of COVID-19 in the lungs. COVID-19 cases are graded into specific severity levels (mild, moderate, severe, and critical) according to the score assigned by the RALE scoring system. The MA-DenseNet201 classification model outperforms eight state-of-the-art CNN models in terms of sensitivity and interpretation with the lung localization network.
COVID-19 infection segmentation by a UNet with a DenseNet121 encoder achieves a Dice score of 86.15%, outperforming UNet, UNet++, AttentionUNet, and R2UNet with VGG16, ResNet50, and DenseNet201 encoders. The proposed network not only classifies images by predicted label but also highlights the infection through segmentation/localization of model-focused regions to support explainable decisions. The MA-DenseNet201 model with a segmentation-based cropping approach achieves a maximum interpretation of 96% with a COVID-19 sensitivity of 97.75%. Finally, based on class-varied sensitivity analysis, the Covid-MANet ensemble of MA-DenseNet201, ResNet50, and MobileNet achieves 95.05% accuracy and 98.75% COVID-19 sensitivity. Externally validated on an unseen dataset, the proposed model yields 98.17% COVID-19 sensitivity.
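The Dice score reported above measures mask overlap as twice the intersection divided by the total mask size. A minimal sketch of the standard definition:

```python
import numpy as np

def dice_score(pred, truth, eps=1e-7):
    """Dice similarity coefficient between two binary masks:
    2 * |A intersect B| / (|A| + |B|).
    `eps` guards against division by zero on two empty masks."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    inter = np.logical_and(pred, truth).sum()
    return (2.0 * inter + eps) / (pred.sum() + truth.sum() + eps)
```

Identical masks score 1.0; disjoint masks score (approximately) 0.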
Affiliation(s)
- Ajay Sharma
- Department of Computer Science, Institute of Science, Banaras Hindu University, Varanasi 221005, India
- Pramod Kumar Mishra
- Department of Computer Science, Institute of Science, Banaras Hindu University, Varanasi 221005, India
16
Loh HW, Ooi CP, Seoni S, Barua PD, Molinari F, Acharya UR. Application of explainable artificial intelligence for healthcare: A systematic review of the last decade (2011-2022). COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2022; 226:107161. [PMID: 36228495 DOI: 10.1016/j.cmpb.2022.107161] [Citation(s) in RCA: 113] [Impact Index Per Article: 56.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/29/2022] [Revised: 09/16/2022] [Accepted: 09/25/2022] [Indexed: 06/16/2023]
Abstract
BACKGROUND AND OBJECTIVES Artificial intelligence (AI) has branched out to various applications in healthcare, such as health services management, predictive medicine, clinical decision-making, and patient data and diagnostics. Although AI models have achieved human-like performance, their use is still limited because they are seen as a black box. This lack of trust remains the main reason for their low use in practice, especially in healthcare. Hence, explainable artificial intelligence (XAI) has been introduced as a technique that can provide confidence in a model's prediction by explaining how the prediction is derived, thereby encouraging the use of AI systems in healthcare. The primary goal of this review is to identify areas of healthcare that require more attention from the XAI research community. METHODS Multiple journal databases were thoroughly searched following the PRISMA 2020 guidelines. Studies not appearing in highly credible Q1 journals were excluded. RESULTS In this review, we surveyed 99 Q1 articles covering the following XAI techniques: SHAP, LIME, GradCAM, LRP, fuzzy classifiers, EBM, CBR, rule-based systems, and others. CONCLUSION We discovered that detecting abnormalities in 1D biosignals and identifying key text in clinical notes are areas that require more attention from the XAI research community. We hope this review will encourage the development of a holistic cloud system for a smart city.
Affiliation(s)
- Hui Wen Loh
- School of Science and Technology, Singapore University of Social Sciences, Singapore
- Chui Ping Ooi
- School of Science and Technology, Singapore University of Social Sciences, Singapore
- Silvia Seoni
- Department of Electronics and Telecommunications, Biolab, Politecnico di Torino, Torino 10129, Italy
- Prabal Datta Barua
- Faculty of Engineering and Information Technology, University of Technology Sydney, Australia; School of Business (Information Systems), Faculty of Business, Education, Law & Arts, University of Southern Queensland, Australia
- Filippo Molinari
- Department of Electronics and Telecommunications, Biolab, Politecnico di Torino, Torino 10129, Italy
- U Rajendra Acharya
- School of Science and Technology, Singapore University of Social Sciences, Singapore; School of Business (Information Systems), Faculty of Business, Education, Law & Arts, University of Southern Queensland, Australia; School of Engineering, Ngee Ann Polytechnic, Singapore; Department of Bioinformatics and Medical Engineering, Asia University, Taiwan; Research Organization for Advanced Science and Technology (IROAST), Kumamoto University, Kumamoto, Japan
17
Chetoui M, Akhloufi MA. Explainable Vision Transformers and Radiomics for COVID-19 Detection in Chest X-rays. J Clin Med 2022; 11:3013. [PMID: 35683400 PMCID: PMC9181325 DOI: 10.3390/jcm11113013] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/24/2022] [Revised: 05/18/2022] [Accepted: 05/23/2022] [Indexed: 01/30/2023] Open
Abstract
The rapid spread of COVID-19 across the globe since its emergence has pushed many countries' healthcare systems to the verge of collapse. To restrict the spread of the disease and lessen the ongoing cost to the healthcare system, it is critical to identify COVID-19-positive individuals and isolate them as soon as possible. The primary COVID-19 screening test, RT-PCR, although accurate and reliable, has a long turn-around time. More recently, various researchers have demonstrated the use of deep learning approaches on chest X-ray (CXR) images for COVID-19 detection. However, existing deep Convolutional Neural Network (CNN) methods fail to capture global context due to their inherent image-specific inductive bias. In this article, we investigated the use of vision transformers (ViT) for detecting COVID-19 in CXR images. Several ViT models were fine-tuned for the multiclass classification problem (COVID-19, pneumonia, and normal cases). A dataset consisting of 7598 COVID-19 CXR images, 8552 CXR images of healthy patients, and 5674 pneumonia CXR images was used. The obtained results achieved high performance, with an Area Under the Curve (AUC) of 0.99 for multi-class classification (COVID-19 vs. other pneumonia vs. normal). The sensitivity for the COVID-19 class reached 0.99. We demonstrated that these results outperform comparable state-of-the-art CNN-based models for detecting COVID-19 in CXR images. The attention map of the proposed model shows that it efficiently identifies the signs of COVID-19.
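The AUC reported above can, in the binary case, be read as the probability that a randomly chosen positive sample scores higher than a randomly chosen negative (the Mann-Whitney statistic); multiclass AUC is typically an average of one-vs-rest binary AUCs. A minimal O(n^2) sketch of the binary estimator (illustrative only; the paper does not specify which estimator it used):

```python
def roc_auc(scores, labels):
    """AUC as the probability that a randomly chosen positive scores
    higher than a randomly chosen negative; ties count as 0.5."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos) * len(neg))
```

A perfectly separating classifier scores 1.0, a random one about 0.5.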
Affiliation(s)
- Moulay A. Akhloufi
- Perception, Robotics, and Intelligent Machines Research Group (PRIME), Department of Computer Science, Université de Moncton, Moncton, NB E1A 3E9, Canada
18
Hemdan EED, El-Shafai W, Sayed A. CR19: a framework for preliminary detection of COVID-19 in cough audio signals using machine learning algorithms for automated medical diagnosis applications. JOURNAL OF AMBIENT INTELLIGENCE AND HUMANIZED COMPUTING 2022; 14:1-13. [PMID: 35126765 PMCID: PMC8803577 DOI: 10.1007/s12652-022-03732-0] [Citation(s) in RCA: 10] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/12/2020] [Accepted: 01/19/2022] [Indexed: 06/02/2023]
Abstract
Today, a level of panic and chaos dominates the entire world due to the massive outbreak of the second wave of COVID-19. The disease has numerous symptoms, ranging from a simple fever to an inability to breathe that may lead to death. One of these symptoms is a cough, considered among the most common symptoms of COVID-19. Recent research shows that the cough of a COVID-19 patient has distinct features that differ from those of other diseases. Consequently, the cough sound can be detected and classified to serve as a preliminary diagnosis of COVID-19, which will help reduce the spread of the disease. An artificial intelligence (AI) engine can diagnose COVID-19 by performing a differential analysis of the cough's inherent characteristics and comparing it to non-COVID-19 coughs. However, diagnosing a COVID-19 infection by cough alone is an extremely challenging multidisciplinary problem. Therefore, this paper proposes a hybrid framework, called CR19, for efficient COVID-19 detection and diagnosis from cough audio signals using various ML algorithms. The accuracy of the framework is improved by combining a genetic algorithm (GA) with the ML techniques. We assess CR19 on metrics such as precision, recall, and F-measure. The results show that the hybrid GA-ML technique outperforms plain ML approaches across the evaluation metrics: the proposed framework achieves accuracies of 92.19%, 94.32%, 97.87%, 92.19%, 91.48%, and 93.61%, versus 90.78%, 92.90%, 95.74%, 87.94%, 81.56%, and 92.19% for LR, LDA, KNN, CART, NB, and SVM, respectively. The framework will efficiently help physicians make proper medical decisions regarding COVID-19 analysis. CR19 can thus serve as a clinical decision-assistance tool to channel clinical testing and treatment to those who need it most, thereby saving more lives.
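A GA-ML hybrid of the kind described typically evolves a population of candidate solutions, such as binary feature masks, scored by classifier performance. A toy sketch under that assumption, where `fitness` would in practice be cross-validated classifier accuracy on the selected features, and all parameter values (`pop`, `gens`, `mut`) are illustrative rather than the paper's:

```python
import random

def ga_feature_select(fitness, n_features, pop=20, gens=30,
                      mut=0.05, seed=42):
    """Toy genetic algorithm: evolves binary feature masks to
    maximise fitness(mask)."""
    rng = random.Random(seed)
    population = [[rng.randint(0, 1) for _ in range(n_features)]
                  for _ in range(pop)]
    for _ in range(gens):
        scored = sorted(population, key=fitness, reverse=True)
        parents = scored[:pop // 2]            # keep the fittest half
        children = []
        while len(children) < pop - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n_features)  # one-point crossover
            child = a[:cut] + b[cut:]
            # bit-flip mutation with probability `mut` per gene
            child = [1 - g if rng.random() < mut else g for g in child]
            children.append(child)
        population = parents + children
    return max(population, key=fitness)
```

Because the fittest parents are carried over unchanged, the best score is monotonically non-decreasing across generations.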
Affiliation(s)
- Ezz El-Din Hemdan
- Department of Computer Science and Engineering, Faculty of Electronic Engineering, Menoufia University, Menouf, 32952 Egypt
- Walid El-Shafai
- Department of Electronics and Electrical Communications Engineering, Faculty of Electronic Engineering, Menoufia University, Menouf, 32952 Egypt
- Security Engineering Lab, Computer Science Department, Prince Sultan University, Riyadh, 11586 Saudi Arabia
- Amged Sayed
- Department of Industrial Electronics and Control Engineering, Faculty of Electronic Engineering, Menoufia University, Menouf, 32952 Egypt
19
Deshpande G, Batliner A, Schuller BW. AI-Based human audio processing for COVID-19: A comprehensive overview. PATTERN RECOGNITION 2022; 122:108289. [PMID: 34483372 PMCID: PMC8404390 DOI: 10.1016/j.patcog.2021.108289] [Citation(s) in RCA: 16] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/26/2021] [Revised: 08/24/2021] [Accepted: 08/29/2021] [Indexed: 06/02/2023]
Abstract
The Coronavirus (COVID-19) pandemic impelled several research efforts, from collecting COVID-19 patients' data to screening them for virus detection. Some COVID-19 symptoms are related to the functioning of the respiratory system, which influences speech production; this suggests research on identifying markers of COVID-19 in speech and other human-generated audio signals. In this article, we give an overview of research on human audio signals that uses 'Artificial Intelligence' techniques to screen, diagnose, monitor, and spread awareness about COVID-19. This overview will be useful for developing automated systems that can help in the context of COVID-19, using non-obtrusive and easy-to-use bio-signals conveyed in human non-speech and speech audio productions.
Affiliation(s)
- Gauri Deshpande
- Chair of Embedded Intelligence for Health Care and Wellbeing, University of Augsburg, Germany
- TCS Research Pune, India
- Anton Batliner
- Chair of Embedded Intelligence for Health Care and Wellbeing, University of Augsburg, Germany
- Björn W Schuller
- Chair of Embedded Intelligence for Health Care and Wellbeing, University of Augsburg, Germany
- GLAM - Group on Language, Audio, & Music, Imperial College London, UK
20
Detection and Prevention of Virus Infection. ADVANCES IN EXPERIMENTAL MEDICINE AND BIOLOGY 2022; 1368:21-52. [DOI: 10.1007/978-981-16-8969-7_2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/07/2023]
21
Explainable COVID-19 Detection on Chest X-rays Using an End-to-End Deep Convolutional Neural Network Architecture. BIG DATA AND COGNITIVE COMPUTING 2021. [DOI: 10.3390/bdcc5040073] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/26/2022]
Abstract
The coronavirus pandemic is spreading around the world. Medical imaging modalities such as radiography play an important role in the fight against COVID-19. Deep learning (DL) techniques have improved medical imaging tools and help radiologists make clinical decisions for the diagnosis, monitoring, and prognosis of different diseases. Computer-Aided Diagnostic (CAD) systems can improve work efficiency by precisely delineating infections in chest X-ray (CXR) images, thus facilitating subsequent quantification. CAD can also help automate the scanning process and reshape the workflow with minimal patient contact, providing the best protection for imaging technicians. The objective of this study is to develop a deep learning algorithm to detect COVID-19, pneumonia, and normal cases in CXR images. We propose two classification problems: (i) a binary classification of COVID-19 versus normal cases and (ii) a multiclass classification of COVID-19, pneumonia, and normal cases. Nine datasets and more than 3200 COVID-19 CXR images are used to assess the efficiency of the proposed technique. The model is trained on a subset of the National Institutes of Health (NIH) dataset using the swish activation, thus improving the training accuracy for detecting COVID-19 and other pneumonia. The models are tested on eight merged datasets and on individual test sets to confirm the degree of generalization of the proposed algorithms. An explainability algorithm is also developed to visually show the location of the lung-infected areas detected by the model. Moreover, we provide a detailed analysis of the misclassified images. The obtained results achieve high performance, with an Area Under the Curve (AUC) of 0.97 for multi-class classification (COVID-19 vs. other pneumonia vs. normal) and 0.98 for the binary model (COVID-19 vs. normal). The average sensitivity and specificity are 0.97 and 0.98, respectively. The sensitivity for the COVID-19 class reaches 0.99.
These results outperform comparable state-of-the-art models for the detection of COVID-19 in CXR images. The explainability model shows that our model efficiently identifies the signs of COVID-19.
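The swish activation mentioned above is x * sigmoid(x), a smooth, non-monotonic alternative to ReLU that behaves like the identity for large positive inputs and tends to zero for large negative ones. A minimal scalar sketch:

```python
import math

def swish(x, beta=1.0):
    """Swish activation: x * sigmoid(beta * x).
    Equivalent to x / (1 + exp(-beta * x))."""
    return x / (1.0 + math.exp(-beta * x))
```

With beta = 1 this is also known as SiLU; beta can be fixed or learned per layer.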