1
Narteni S, Baiardini I, Braido F, Mongelli M. Explainable artificial intelligence for cough-related quality of life impairment prediction in asthmatic patients. PLoS One 2024; 19:e0292980. PMID: 38502606; PMCID: PMC10950232; DOI: 10.1371/journal.pone.0292980.
Abstract
Explainable Artificial Intelligence (XAI) is becoming a disruptive trend in healthcare, allowing for transparency and interpretability of autonomous decision-making. In this study, we present an innovative application of a rule-based classification model to identify the main causes of chronic cough-related quality of life (QoL) impairment in a cohort of asthmatic patients. The proposed approach first involves the design of a suitable symptoms questionnaire and the subsequent analysis via XAI. Specifically, feature ranking derived from statistically validated decision rules automatically identified the main factors influencing an impaired QoL: pharynx/larynx and upper airways when asthma is under control, and asthma itself and the digestive trait when asthma is not controlled. Moreover, the obtained if-then rules identified specific thresholds on the symptoms associated with impaired QoL. By establishing priorities among symptoms, these results may help physicians choose the most appropriate diagnostic/therapeutic plan.
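The if-then mechanism described in this abstract can be illustrated with a minimal sketch. This is a toy illustration, not the study's Logic Learning Machine: the symptom names, cut-off values, and the coverage-based ranking heuristic below are all hypothetical.

```python
# Toy rule-based classifier: if-then rules with symptom-score thresholds,
# plus a naive feature ranking by how often each feature appears in a
# fired rule. Symptom names and cut-offs are hypothetical.
from collections import Counter

# Each rule is a list of (feature, threshold, operator) conditions;
# a patient matching any rule is classified as having impaired QoL.
RULES = [
    [("pharynx_larynx", 3, ">="), ("upper_airways", 2, ">=")],
    [("asthma_score", 4, ">="), ("digestive", 2, ">=")],
]

def rule_fires(rule, patient):
    """True if every condition in the rule holds for the patient."""
    return all(
        patient[f] >= t if op == ">=" else patient[f] < t
        for f, t, op in rule
    )

def predict(patient):
    """Impaired QoL if any rule fires, else preserved QoL."""
    return "impaired" if any(rule_fires(r, patient) for r in RULES) else "preserved"

def feature_ranking(patients):
    """Rank features by how often they appear in a rule that fires."""
    counts = Counter()
    for p in patients:
        for rule in RULES:
            if rule_fires(rule, p):
                counts.update(f for f, _, _ in rule)
    return counts.most_common()

cohort = [
    {"pharynx_larynx": 4, "upper_airways": 3, "asthma_score": 1, "digestive": 0},
    {"pharynx_larynx": 0, "upper_airways": 1, "asthma_score": 5, "digestive": 3},
]
print([predict(p) for p in cohort])  # each toy patient matches one rule
print(feature_ranking(cohort))
```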
Affiliation(s)
- Sara Narteni
- CNR-IEIIT, Genoa, Italy
- DAUIN Department, Politecnico di Torino, Turin, Italy
- Ilaria Baiardini
- Respiratory Diseases and Allergy Department, IRCCS Polyclinic Hospital San Martino, Genoa, Italy
- Fulvio Braido
- Respiratory Diseases and Allergy Department, IRCCS Polyclinic Hospital San Martino, Genoa, Italy
2
Li M, Jiang Y, Zhang Y, Zhu H. Medical image analysis using deep learning algorithms. Front Public Health 2023; 11:1273253. PMID: 38026291; PMCID: PMC10662291; DOI: 10.3389/fpubh.2023.1273253.
Abstract
In the field of medical image analysis, the importance of employing advanced deep learning (DL) techniques cannot be overstated. DL has achieved impressive results in various areas, making it particularly noteworthy for medical image analysis in healthcare. The integration of DL with medical image analysis enables real-time analysis of vast and intricate datasets, yielding insights that significantly enhance healthcare outcomes and operational efficiency. This extensive literature review thoroughly examines the most recent DL approaches designed to address the difficulties faced in medical healthcare, focusing on the use of DL algorithms in medical image analysis. Grouping the investigated papers into five categories according to their techniques, we assessed them against several critical parameters. Through a systematic categorization of state-of-the-art DL techniques, such as Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), Generative Adversarial Networks (GANs), Long Short-Term Memory (LSTM) models, and hybrid models, this study explores their underlying principles, advantages, limitations, methodologies, simulation environments, and datasets. Based on our results, Python was the most frequent programming language used to implement the proposed methods in the investigated papers. Notably, the majority of the scrutinized papers were published in 2021, underscoring the contemporaneous nature of the research. Moreover, this review highlights the forefront advancements in DL techniques and their practical applications within medical image analysis, while also addressing the challenges that hinder the widespread implementation of DL in the medical healthcare domain.
These insights serve as compelling impetuses for future studies aimed at the progressive advancement of image analysis in medical healthcare research. The evaluation metrics employed across the reviewed articles span a broad spectrum, including accuracy, sensitivity, specificity, F-score, robustness, computational complexity, and generalizability.
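The categorization and frequency findings reported above (technique categories, implementation languages, publication years) amount to simple tallies over paper metadata. A minimal sketch of that bookkeeping, using hypothetical paper records rather than the actual surveyed studies:

```python
# Group surveyed papers by technique category, implementation language,
# and publication year. The records below are hypothetical placeholders.
from collections import Counter

papers = [
    {"technique": "CNN",    "language": "Python", "year": 2021},
    {"technique": "GAN",    "language": "Python", "year": 2021},
    {"technique": "LSTM",   "language": "MATLAB", "year": 2020},
    {"technique": "RNN",    "language": "Python", "year": 2021},
    {"technique": "hybrid", "language": "Python", "year": 2022},
]

by_technique = Counter(p["technique"] for p in papers)
by_language  = Counter(p["language"] for p in papers)
by_year      = Counter(p["year"] for p in papers)

print(by_language.most_common(1))  # -> [('Python', 4)]
print(by_year.most_common(1))      # -> [(2021, 3)]
```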
Affiliation(s)
- Mengfang Li
- The First Affiliated Hospital of Wenzhou Medical University, Wenzhou, China
- Yuanyuan Jiang
- Department of Cardiovascular Medicine, The First Affiliated Hospital of Zhengzhou University, Zhengzhou, China
- Yanzhou Zhang
- Department of Cardiovascular Medicine, The First Affiliated Hospital of Zhengzhou University, Zhengzhou, China
- Haisheng Zhu
- Department of Cardiovascular Medicine, Wencheng People's Hospital, Wencheng, China
3
Yang Y, Sun K, Gao Y, Wang K, Yu G. Preparing Data for Artificial Intelligence in Pathology with Clinical-Grade Performance. Diagnostics (Basel) 2023; 13:3115. PMID: 37835858; PMCID: PMC10572440; DOI: 10.3390/diagnostics13193115.
Abstract
Pathology is decisive for disease diagnosis but relies heavily on experienced pathologists. In recent years, there has been growing interest in the use of artificial intelligence in pathology (AIP) to enhance diagnostic accuracy and efficiency. However, the impressive performance of deep learning-based AIP in laboratory settings often proves challenging to replicate in clinical practice. Because data preparation is central to AIP, this paper reviews AIP-related studies in the PubMed database published from January 2017 to February 2022, of which 118 studies were included. An in-depth analysis of data preparation methods is conducted, encompassing the acquisition of pathological tissue slides, data cleaning, screening, and subsequent digitization. Expert review, image annotation, and dataset division for model training and validation are also discussed. Furthermore, we delve into the reasons behind the challenges in reproducing the high performance of AIP in clinical settings and present effective strategies to enhance AIP's clinical performance. The robustness of AIP depends on a randomized collection of representative disease slides, incorporating rigorous quality control and screening, correction of digital discrepancies, reasonable annotation, and sufficient data volume. Digital pathology is fundamental to clinical-grade AIP, and data standardization techniques together with weakly supervised learning methods based on whole slide images (WSIs) are effective ways to overcome obstacles to performance reproduction. The key to performance reproducibility lies in having representative data, an adequate amount of labeling, and consistency across multiple centers. Digital pathology for clinical diagnosis, data standardization, and WSI-based weakly supervised learning will hopefully enable clinical-grade AIP.
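One concrete pitfall in the dataset-division step discussed above is patient-level leakage: slides from the same patient must not appear in both training and validation sets. A minimal sketch of a patient-level split, with hypothetical slide identifiers:

```python
# Patient-level train/validation split for slide datasets: all slides
# from one patient stay on the same side of the boundary, avoiding
# leakage. Patient and slide identifiers below are hypothetical.
import random

def split_by_patient(slides, val_fraction=0.25, seed=42):
    """slides: list of (patient_id, slide_id). Returns (train, val)."""
    patients = sorted({pid for pid, _ in slides})
    rng = random.Random(seed)
    rng.shuffle(patients)
    n_val = max(1, int(len(patients) * val_fraction))
    val_patients = set(patients[:n_val])
    train = [s for s in slides if s[0] not in val_patients]
    val = [s for s in slides if s[0] in val_patients]
    return train, val

slides = [("P1", "s1"), ("P1", "s2"), ("P2", "s3"), ("P3", "s4"), ("P4", "s5")]
train, val = split_by_patient(slides)
# No patient appears on both sides of the split:
assert {p for p, _ in train}.isdisjoint({p for p, _ in val})
```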
Affiliation(s)
- Yuanqing Yang
- Department of Biomedical Engineering, School of Basic Medical Sciences, Central South University, Changsha 410013, China
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing 100084, China
- Kai Sun
- Department of Biomedical Engineering, School of Basic Medical Sciences, Central South University, Changsha 410013, China
- Furong Laboratory, Changsha 410013, China
- Yanhua Gao
- Department of Ultrasound, Shaanxi Provincial People's Hospital, Xi'an 710068, China
- Kuansong Wang
- Department of Pathology, School of Basic Medical Sciences, Central South University, Changsha 410013, China
- Department of Pathology, Xiangya Hospital, Central South University, Changsha 410013, China
- Gang Yu
- Department of Biomedical Engineering, School of Basic Medical Sciences, Central South University, Changsha 410013, China
4
Zhao Y, Guo Q, Zhang Y, Zheng J, Yang Y, Du X, Feng H, Zhang S. Application of Deep Learning for Prediction of Alzheimer's Disease in PET/MR Imaging. Bioengineering (Basel) 2023; 10:1120. PMID: 37892850; PMCID: PMC10604050; DOI: 10.3390/bioengineering10101120.
Abstract
Alzheimer's disease (AD) is a progressive neurodegenerative disorder that affects millions of people worldwide. Positron emission tomography/magnetic resonance (PET/MR) imaging is a promising technique that combines the advantages of PET and MR to provide both functional and structural information of the brain. Deep learning (DL) is a subfield of machine learning (ML) and artificial intelligence (AI) that focuses on developing algorithms and models inspired by the structure and function of the human brain's neural networks. DL has been applied to various aspects of PET/MR imaging in AD, such as image segmentation, image reconstruction, diagnosis and prediction, and visualization of pathological features. In this review, we introduce the basic concepts and types of DL algorithms, such as feedforward neural networks, convolutional neural networks, recurrent neural networks, and autoencoders. We then summarize the current applications and challenges of DL in PET/MR imaging in AD, and discuss future directions and opportunities in automated diagnosis, model-based prediction, and personalized medicine. We conclude that DL has great potential to improve the quality and efficiency of PET/MR imaging in AD, and to provide new insights into the pathophysiology and treatment of this devastating disease.
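As a minimal illustration of the feedforward building block this review starts from, the sketch below computes one forward pass of a single-hidden-layer network with a sigmoid output. The weights are arbitrary toy values, not a trained model.

```python
# One forward pass of a tiny feedforward network (one tanh hidden layer,
# sigmoid output, e.g. for a binary AD-vs-control decision). Toy weights.
import math

def forward(x, w1, b1, w2, b2):
    """x: input vector; w1/w2: weight matrices as lists of rows."""
    hidden = [math.tanh(sum(wi * xi for wi, xi in zip(row, x)) + b)
              for row, b in zip(w1, b1)]
    out = [sum(wi * hi for wi, hi in zip(row, hidden)) + b
           for row, b in zip(w2, b2)]
    # Squash to a probability with the logistic sigmoid
    return [1.0 / (1.0 + math.exp(-o)) for o in out]

x = [0.5, -1.2]                  # two input features
w1 = [[0.3, -0.8], [1.1, 0.4]]   # 2 hidden units
b1 = [0.0, 0.1]
w2 = [[0.7, -0.5]]               # 1 output unit
b2 = [0.0]
print(forward(x, w1, b1, w2, b2))  # a single probability in (0, 1)
```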
Affiliation(s)
- Yan Zhao
- Department of Information Center, The First Affiliated Hospital, Dalian Medical University, Dalian 116011, China
- Qianrui Guo
- Department of Nuclear Medicine, Beijing Cancer Hospital, Beijing 100142, China
- Yukun Zhang
- Department of Radiology, The First Affiliated Hospital, Dalian Medical University, Dalian 116011, China
- Jia Zheng
- Department of Nuclear Medicine, The First Affiliated Hospital, Dalian Medical University, Dalian 116011, China
- Yang Yang
- Beijing United Imaging Research Institute of Intelligent Imaging, Beijing 100094, China
- Xuemei Du
- Department of Nuclear Medicine, The First Affiliated Hospital, Dalian Medical University, Dalian 116011, China
- Hongbo Feng
- Department of Nuclear Medicine, The First Affiliated Hospital, Dalian Medical University, Dalian 116011, China
- Shuo Zhang
- Department of Nuclear Medicine, The First Affiliated Hospital, Dalian Medical University, Dalian 116011, China
5
Lenatti M, Paglialonga A, Orani V, Ferretti M, Mongelli M. Characterization of Synthetic Health Data Using Rule-Based Artificial Intelligence Models. IEEE J Biomed Health Inform 2023; 27:3760-3769. PMID: 37018683; DOI: 10.1109/jbhi.2023.3236722.
Abstract
The aim of this study is to apply and characterize eXplainable AI (XAI) to assess the quality of synthetic health data generated using a data augmentation algorithm. In this exploratory study, several synthetic datasets are generated using various configurations of a conditional Generative Adversarial Network (GAN) from a set of 156 observations related to adult hearing screening. A rule-based native XAI algorithm, the Logic Learning Machine, is used in combination with conventional utility metrics. The classification performance in different conditions is assessed: models trained and tested on synthetic data, models trained on synthetic data and tested on real data, and models trained on real data and tested on synthetic data. The rules extracted from real and synthetic data are then compared using a rule similarity metric. The results indicate that XAI may be used to assess the quality of synthetic data by (i) the analysis of classification performance and (ii) the analysis of the rules extracted on real and synthetic data (number, covering, structure, cut-off values, and similarity). These results suggest that XAI can be used in an original way to assess synthetic health data and extract knowledge about the mechanisms underlying the generated data.
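The rule comparison described above can be illustrated with a simple condition-overlap measure. The Jaccard formulation and the example rules below are hypothetical stand-ins for exposition, not necessarily the paper's exact similarity metric:

```python
# Compare a rule extracted from real data with one extracted from
# synthetic data. A rule is a set of (feature, operator, threshold)
# conditions; similarity is the Jaccard overlap of those conditions.
# Feature names and cut-offs are hypothetical.

def rule_similarity(rule_a, rule_b):
    """Jaccard index over the two rules' condition sets."""
    inter = len(rule_a & rule_b)
    union = len(rule_a | rule_b)
    return inter / union if union else 1.0

real_rule = {("age", ">=", 50), ("threshold_db", ">", 25)}
synth_rule = {("age", ">=", 50), ("threshold_db", ">", 30)}
# One of three distinct conditions is shared, so similarity is 1/3
print(rule_similarity(real_rule, synth_rule))
```

A more faithful variant would also score near-matches of cut-off values on the same feature, which a plain set intersection treats as disjoint.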
6
GAN-Based Approaches for Generating Structured Data in the Medical Domain. Applied Sciences (Basel) 2022. DOI: 10.3390/app12147075.
Abstract
Modern machine and deep learning methods require large datasets to achieve reliable and robust results. This requirement is often difficult to meet in the medical field, due to data sharing limitations imposed by privacy regulations or the presence of a small number of patients (e.g., rare diseases). To address this data scarcity, novel generative models such as Generative Adversarial Networks (GANs) have been widely used to generate synthetic data that mimic real data by representing features that reflect health-related information without reference to real patients. In this paper, we consider several GAN models to generate synthetic data used for training binary (malignant/benign) classifiers, and compare their performance in terms of classification accuracy with cases where only real data are considered. We aim to investigate how synthetic data can improve classification accuracy, especially when only a small amount of data is available. To this end, we have developed and implemented an evaluation framework where binary classifiers are trained on extended datasets containing both real and synthetic data. The results show improved accuracy for classifiers trained with data generated by more advanced GAN models, even when limited amounts of original data are available.
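The evaluation framework described above, training the same classifier with and without synthetic data and comparing test accuracy, can be sketched with a toy one-dimensional threshold classifier. All numbers below are illustrative, not the paper's data or models:

```python
# Train the same simple classifier on real data alone and on
# real + synthetic data, then compare accuracy on a held-out test set.
# A 1-D threshold rule stands in for the real classifiers; samples are
# (value, label) pairs with hypothetical values.

def fit_threshold(samples):
    """Pick the threshold (among observed values) with best accuracy."""
    best_t, best_acc = None, -1.0
    for t in sorted(v for v, _ in samples):
        acc = sum((v >= t) == bool(y) for v, y in samples) / len(samples)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

def accuracy(t, samples):
    return sum((v >= t) == bool(y) for v, y in samples) / len(samples)

real = [(0.2, 0), (0.4, 0), (0.6, 1), (0.9, 1)]
synth = [(0.1, 0), (0.3, 0), (0.45, 1), (0.7, 1), (0.8, 1)]
test = [(0.25, 0), (0.5, 1), (0.85, 1), (0.15, 0)]

t_real = fit_threshold(real)          # trained on real data only
t_aug = fit_threshold(real + synth)   # trained on the extended dataset
# In this toy setup the extra synthetic positives pull the learned
# threshold lower, which generalizes better on the test set
print(accuracy(t_real, test), accuracy(t_aug, test))
```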
7
Data Augmentation for Audio-Visual Emotion Recognition with an Efficient Multimodal Conditional GAN. Applied Sciences (Basel) 2022. DOI: 10.3390/app12010527.
Abstract
Audio-visual emotion recognition is the task of identifying human emotional states by combining the audio and visual modalities, which plays an important role in intelligent human-machine interaction. With the help of deep learning, previous works have made great progress on audio-visual emotion recognition. However, these deep learning methods often require large amounts of training data. In reality, data acquisition is difficult and expensive, especially for multimodal data spanning different modalities. As a result, the training data may lie in a low-data regime that cannot be used effectively for deep learning. In addition, class imbalance may occur in the emotional data, which can further degrade the performance of audio-visual emotion recognition. To address these problems, we propose an efficient data augmentation framework by designing a multimodal conditional generative adversarial network (GAN) for audio-visual emotion recognition. Specifically, we design generators and discriminators for the audio and visual modalities. The category information is used as their shared input so that our GAN can generate fake data of different categories. In addition, the high dependence between the audio and visual modalities in the generated multimodal data is modeled based on Hirschfeld-Gebelein-Rényi (HGR) maximal correlation. In this way, we relate the different modalities in the generated data to approximate the real data. The generated data are then used to augment our data manifold. We further apply our approach to the problem of class imbalance. To the best of our knowledge, this is the first work to propose a data augmentation strategy with a multimodal conditional GAN for audio-visual emotion recognition. We conduct a series of experiments on three public multimodal datasets, including eNTERFACE'05, RAVDESS, and CMEW.
The results indicate that our multimodal conditional GAN has high effectiveness for data augmentation of audio-visual emotion recognition.
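The class-rebalancing use of a conditional generator described above starts from a per-class augmentation budget: given the label distribution, decide how many synthetic samples to request per emotion so every class matches the largest one. A sketch with hypothetical label counts (the resulting plan would be passed, class by class, to the conditional generator):

```python
# Compute how many synthetic samples a conditional generator should
# produce per class to balance the dataset. Label counts are hypothetical.
from collections import Counter

def augmentation_plan(labels):
    """Map each under-represented class to its synthetic-sample budget."""
    counts = Counter(labels)
    target = max(counts.values())
    return {label: target - n for label, n in counts.items() if target > n}

labels = ["happy"] * 120 + ["sad"] * 80 + ["angry"] * 40
print(augmentation_plan(labels))  # -> {'sad': 40, 'angry': 80}
```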
8
Abstract
The problem is the vaccination of a large number of people in a short time period, using minimal space and resources. The tradeoff is that this minimal set of resources must still guarantee good service for patients, measured by the time spent in the system and in the queue. The goal is to develop a digital twin that integrates the physical and virtual systems and allows real-time mapping of the patient flow to create a sustainable and dynamic vaccination center. To reach this goal, a discrete-event simulation model is first implemented. The simulation model is integrated with a mobile application that automatically collects time measurements. By processing these measurements, indicators can be computed to detect problems, the virtual model can be run to solve them, and the improvements can be replicated in the real system. The model is tested in a South Tyrol vaccination clinic, and the best configuration found includes 31 operators and 306 places dedicated to the queues. This configuration allows the vaccination of 2164 patients in a 10-hour shift, with a mean process time of 25 minutes. Data from the app are used to build a dashboard with indicators such as the number of people in queue for each phase and resource utilization.
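The discrete-event core of such a model can be sketched with a single-queue, multi-server simplification. This is heavily reduced compared to the model described above: one phase instead of several, deterministic arrival and service times, and illustrative parameters rather than the clinic's measured ones.

```python
# Minimal discrete-event simulation of a vaccination station: patients
# arrive at fixed intervals and are served by the next free operator.
# All parameters are illustrative, not the paper's calibrated values.
import heapq

def simulate(n_patients, n_operators, interarrival, service_time):
    """Return the mean time in system (queue + service) per patient."""
    free_at = [0.0] * n_operators          # when each operator is next free
    heapq.heapify(free_at)
    total_time = 0.0
    for i in range(n_patients):
        arrival = i * interarrival
        earliest = heapq.heappop(free_at)  # next operator to free up
        start = max(arrival, earliest)     # wait in queue if none is free
        finish = start + service_time
        heapq.heappush(free_at, finish)
        total_time += finish - arrival
    return total_time / n_patients

# e.g. 2164 patients spread over a 600-minute shift with 31 operators;
# at these rates no queue ever forms, so mean time ~ service time
mean_minutes = simulate(n_patients=2164, n_operators=31,
                        interarrival=600 / 2164, service_time=6.0)
print(round(mean_minutes, 2))
```

With stochastic arrivals and service times the same loop would draw from random distributions, and transient queues would appear even at this utilization; that is the behavior the dashboard indicators are meant to surface.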