1. Ayappan G, Anila S. Automatic detection and prediction of COVID-19 in cough audio signals using coronavirus herd immunity optimizer algorithm. Sci Rep 2025; 15:2271. PMID: 39824893. DOI: 10.1038/s41598-025-85140-w. Received 07/30/2024; accepted 01/01/2025.
Abstract
The global spread of COVID-19, particularly through cough symptoms, necessitates efficient diagnostic tools. COVID-19 patients exhibit unique cough sound patterns distinguishable from other respiratory conditions. This study proposes an advanced framework to detect and predict COVID-19 using deep learning from cough audio signals. Audio data from the COUGHVID dataset undergo preprocessing through fuzzy gray level difference histogram equalization, followed by segmentation with a U-Net model. Key features are extracted via Zernike Moments (ZM) and Gray Level Co-occurrence Matrix (GLCM). The Enhanced Deep Neural Network (EDNN), tuned by the Coronavirus Herd Immunity Optimizer (CHIO), performs final prediction by minimizing error metrics. Comparative simulation results reveal that the proposed EDNN-CHIO model improves MSE by 25.35% and SMAPE by 42.06% over conventional models like PSO, WOA, and LSTM. The proposed approach demonstrates superior error reduction, highlighting its potential for effective COVID-19 detection.
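The GLCM features mentioned above can be illustrated with a minimal sketch of the general technique (not the authors' implementation, and the 4x4 patch is an invented toy input): co-occurrence counts of quantized intensities at a fixed offset are normalized into a probability table, from which contrast and energy are derived.

```python
from collections import Counter

def glcm_features(img, offset=(0, 1)):
    """Gray Level Co-occurrence Matrix features (contrast, energy) for a
    small quantized 2-D intensity grid, e.g. a spectrogram patch."""
    dr, dc = offset
    rows, cols = len(img), len(img[0])
    counts = Counter()
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + dr, c + dc
            if 0 <= r2 < rows and 0 <= c2 < cols:
                counts[(img[r][c], img[r2][c2])] += 1
    total = sum(counts.values())
    glcm = {pair: n / total for pair, n in counts.items()}
    # contrast weights co-occurrences by squared gray-level difference;
    # energy is the sum of squared probabilities (texture uniformity)
    contrast = sum(p * (i - j) ** 2 for (i, j), p in glcm.items())
    energy = sum(p * p for p in glcm.values())
    return contrast, energy

patch = [[0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 2, 2, 2],
         [2, 2, 3, 3]]
contrast, energy = glcm_features(patch)
```

In practice the GLCM would be computed over many offsets and angles and concatenated with the Zernike Moment features before classification.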
Affiliation(s)
- G Ayappan
- Department of Electronics and Communication Engineering, Sri Venkateswara College of Engineering, Sriperumbudur, Tamilnadu, India, 602117
- S Anila
- Department of Electronics and Communication Engineering, Sri Ramakrishna Institute of Technology, Coimbatore, Tamilnadu, India, 641010
2. Isangula KG, Haule RJ. Leveraging AI and Machine Learning to Develop and Evaluate a Contextualized User-Friendly Cough Audio Classifier for Detecting Respiratory Diseases: Protocol for a Diagnostic Study in Rural Tanzania. JMIR Res Protoc 2024; 13:e54388. PMID: 38652526. PMCID: PMC11077412. DOI: 10.2196/54388. Received 11/08/2023; accepted 02/21/2024.
Abstract
BACKGROUND Respiratory diseases, including active tuberculosis (TB), asthma, and chronic obstructive pulmonary disease (COPD), constitute substantial global health challenges, necessitating timely and accurate diagnosis for effective treatment and management. OBJECTIVE This research seeks to develop and evaluate a noninvasive, user-friendly, artificial intelligence (AI)-powered cough audio classifier for detecting these respiratory conditions in rural Tanzania. METHODS This is a nonexperimental cross-sectional study whose primary objective is the collection and analysis of cough sounds from patients with active TB, asthma, and COPD in outpatient clinics, in order to generate and evaluate a noninvasive cough audio classifier. Specialized cough sound recording devices, designed to be nonintrusive and user-friendly, will facilitate the collection of diverse cough sound samples from patients attending outpatient clinics in 20 health care facilities in the Shinyanga region. The collected cough sound data will undergo rigorous analysis using advanced AI signal processing and machine learning techniques. By comparing acoustic features and patterns associated with TB, asthma, and COPD, a robust algorithm capable of automated disease discrimination will be generated, facilitating the development of a smartphone-based cough sound classifier. The classifier will be evaluated against reference standards including clinical assessments, sputum smear, GeneXpert, chest x-ray, culture and sensitivity, and spirometry and peak expiratory flow, with sensitivity and predictive values calculated. RESULTS This research represents a vital step toward enhancing the diagnostic capabilities available in outpatient clinics, with the potential to revolutionize the field of respiratory disease diagnosis. Findings from the 4 phases of the study will be presented as descriptions supported by relevant images, tables, and figures. The anticipated outcome of this research is the creation of a reliable, noninvasive diagnostic cough classifier that empowers health care professionals and patients themselves to identify and differentiate these respiratory diseases based on cough sound patterns. CONCLUSIONS Cough sound classifiers use advanced technology for early detection and management of respiratory conditions, offering a less invasive and more efficient alternative to traditional diagnostics. This technology promises to ease public health burdens, improve patient outcomes, and enhance health care access in under-resourced areas, potentially transforming respiratory disease management globally. INTERNATIONAL REGISTERED REPORT IDENTIFIER (IRRID) PRR1-10.2196/54388.
Affiliation(s)
- Kahabi Ganka Isangula
- School of Nursing and Midwifery, Aga Khan University, Dar Es Salaam, United Republic of Tanzania
- Rogers John Haule
- School of Nursing and Midwifery, Aga Khan University, Dar Es Salaam, United Republic of Tanzania
3
|
Diab MS, Rodriguez-Villegas E. Feature evaluation of accelerometry signals for cough detection. Front Digit Health 2024; 6:1368574. [PMID: 38585283 PMCID: PMC10995234 DOI: 10.3389/fdgth.2024.1368574] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/10/2024] [Accepted: 03/06/2024] [Indexed: 04/09/2024] Open
Abstract
Cough is a common symptom of multiple respiratory diseases, such as asthma and chronic obstructive pulmonary disease. Various research works have targeted cough detection as a means of continuously monitoring these respiratory health conditions. This has mainly been achieved using sophisticated machine learning or deep learning algorithms fed with audio recordings. In this work, we explore an alternative detection method, since audio can raise privacy and security concerns related to the use of always-on microphones. This study proposes the use of a non-contact tri-axial accelerometer for motion detection to differentiate between cough and non-cough events/movements. A total of 43 time-domain features were extracted from the acquired tri-axial accelerometry signals. These features were evaluated and ranked for their importance using six methods with adjustable conditions, resulting in a total of 11 feature rankings. The ranking methods included model-based feature importance algorithms, first principal component, leave-one-out, permutation, and recursive feature elimination (RFE). The ranking results were further used to select the top 10, 20, and 30 features for use in cough detection. A total of 68 classification models using a simple logistic regression classifier are reported, using two approaches for data splitting: subject-record-split and leave-one-subject-out (LOSO). The best-performing model of the 34 using subject-record-split obtained an accuracy of 92.20%, sensitivity of 90.87%, specificity of 93.52%, and F1 score of 92.09% using only 20 features selected by the RFE method. The best-performing model of the 34 using LOSO obtained an accuracy of 89.57%, sensitivity of 85.71%, specificity of 93.43%, and F1 score of 88.72% using only 10 features selected by the RFE method. These results demonstrate the feasibility of a future motion-based wearable cough detector.
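The recursive feature elimination used in this study can be sketched in a few lines. This toy version scores each remaining feature with a univariate absolute correlation rather than the logistic-regression weights the study uses, and the data are invented, so both are illustrative assumptions.

```python
def correlation_score(xs, ys):
    # absolute Pearson correlation between one feature column and the labels
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return abs(cov / (sx * sy)) if sx and sy else 0.0

def rfe_ranking(X, y):
    """Recursive feature elimination: repeatedly re-score the remaining
    features and drop the weakest; return indices strongest-first."""
    remaining = list(range(len(X[0])))
    eliminated = []
    while remaining:
        scores = {j: correlation_score([row[j] for row in X], y)
                  for j in remaining}
        weakest = min(remaining, key=lambda j: scores[j])
        remaining.remove(weakest)
        eliminated.append(weakest)
    return eliminated[::-1]  # last eliminated = most important

# toy data: feature 0 tracks the label, feature 1 is noise
X = [[0.1, 0.9], [0.2, 0.1], [0.9, 0.5], [1.0, 0.4]]
y = [0, 0, 1, 1]
ranking = rfe_ranking(X, y)
```

A top-k selection, as in the paper's top-10/20/30 experiments, is then simply `ranking[:k]`.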
Affiliation(s)
- Maha S. Diab
- Wearable Technologies Lab, Department of Electrical and Electronic Engineering, Imperial College London, London, United Kingdom
4. Aytekin I, Dalmaz O, Gonc K, Ankishan H, Saritas EU, Bagci U, Celik H, Cukur T. COVID-19 Detection From Respiratory Sounds With Hierarchical Spectrogram Transformers. IEEE J Biomed Health Inform 2024; 28:1273-1284. PMID: 38051612. PMCID: PMC11658170. DOI: 10.1109/jbhi.2023.3339700.
Abstract
Monitoring of prevalent airborne diseases such as COVID-19 characteristically involves respiratory assessments. While auscultation is a mainstream method for preliminary screening of disease symptoms, its utility is hampered by the need for dedicated hospital visits. Remote monitoring based on recordings of respiratory sounds on portable devices is a promising alternative, which can assist in early assessment of COVID-19 that primarily affects the lower respiratory tract. In this study, we introduce a novel deep learning approach to distinguish patients with COVID-19 from healthy controls given audio recordings of cough or breathing sounds. The proposed approach leverages a novel hierarchical spectrogram transformer (HST) on spectrogram representations of respiratory sounds. HST embodies self-attention mechanisms over local windows in spectrograms, and window size is progressively grown over model stages to capture local to global context. HST is compared against state-of-the-art conventional and deep-learning baselines. Demonstrations on crowd-sourced multi-national datasets indicate that HST outperforms competing methods, achieving over 90% area under the receiver operating characteristic curve (AUC) in detecting COVID-19 cases.
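The local-to-global windowing idea behind HST can be made concrete by counting pairwise attention computations: tokens attend only within non-overlapping windows, so early stages with small windows are cheap and capture local spectro-temporal context, and the window is grown until it spans everything. The token count and window sizes below are illustrative, not HST's actual configuration.

```python
def attention_pairs(n_tokens, window):
    """Pairwise attention computations when tokens attend only within
    non-overlapping windows of size `window` (last window may be smaller)."""
    full = n_tokens // window
    rem = n_tokens - full * window
    return full * window * window + rem * rem

stage_windows = [4, 8, 16, 64]  # progressively grown over model stages
costs = [attention_pairs(64, w) for w in stage_windows]
```

Only the final stage, where the window covers all 64 tokens, pays the full quadratic cost of global self-attention.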
5. Ghrabli S, Elgendi M, Menon C. Identifying unique spectral fingerprints in cough sounds for diagnosing respiratory ailments. Sci Rep 2024; 14:593. PMID: 38182601. PMCID: PMC10770161. DOI: 10.1038/s41598-023-50371-2. Received 07/10/2023; accepted 12/19/2023.
Abstract
Coughing, a prevalent symptom of many illnesses, including COVID-19, has led researchers to explore the potential of cough sound signals for cost-effective disease diagnosis. Traditional diagnostic methods, which can be expensive and require specialized personnel, contrast with the more accessible smartphone analysis of coughs. Typically, coughs are classified as wet or dry based on their phase duration. However, the utilization of acoustic analysis for diagnostic purposes is not widespread. Our study examined cough sounds from 1183 COVID-19-positive patients and compared them with 341 non-COVID-19 cough samples, as well as analyzing distinctions between pneumonia-related and asthma-related coughs. After rigorous optimization across frequency ranges, specific frequency bands were found to correlate with each respiratory ailment. Statistical separability tests validated these findings, and machine learning algorithms, including linear discriminant analysis and k-nearest neighbors classifiers, were employed to confirm the presence of distinct frequency bands in the cough signal power spectrum associated with particular diseases. The identification of these acoustic signatures in cough sounds holds the potential to transform the classification and diagnosis of respiratory diseases, offering an affordable and widely accessible healthcare tool.
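The notion of a disease-specific frequency band can be made concrete with a band-power computation. The sketch below uses a naive one-sided DFT on a synthetic two-tone signal; the sampling rate, tone frequencies, and band edges are illustrative assumptions, not values from the study.

```python
import math

def band_power(signal, fs, f_lo, f_hi):
    """Fraction of spectral power falling in [f_lo, f_hi] Hz, via a naive
    DFT. Fine for short illustrative signals; use an FFT in practice."""
    n = len(signal)
    total, band = 0.0, 0.0
    for k in range(1, n // 2):  # one-sided spectrum, skipping DC
        re = sum(signal[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = sum(signal[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        p = re * re + im * im
        total += p
        if f_lo <= k * fs / n <= f_hi:
            band += p
    return band / total if total else 0.0

fs, n = 8000, 256
# synthetic "cough" with energy at 500 Hz (amplitude 1) and 1250 Hz (amplitude 0.5)
sig = [math.sin(2 * math.pi * 500 * t / fs)
       + 0.5 * math.sin(2 * math.pi * 1250 * t / fs) for t in range(n)]
frac = band_power(sig, fs, 1000, 1500)
```

With these amplitudes, the 1000-1500 Hz band carries 0.25 of the 1.25 total tone power, i.e. a fraction of 0.2; comparing such fractions between patient groups is the essence of a band-specific spectral fingerprint.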
Affiliation(s)
- Syrine Ghrabli
- Biomedical and Mobile Health Technology Lab, ETH Zurich, 8008, Zurich, Switzerland
- Department of Physics, ETH Zurich, 8093, Zurich, Switzerland
- Mohamed Elgendi
- Biomedical and Mobile Health Technology Lab, ETH Zurich, 8008, Zurich, Switzerland
- Carlo Menon
- Biomedical and Mobile Health Technology Lab, ETH Zurich, 8008, Zurich, Switzerland
6. Orlandic L, Teijeiro T, Atienza D. A semi-supervised algorithm for improving the consistency of crowdsourced datasets: The COVID-19 case study on respiratory disorder classification. Comput Methods Programs Biomed 2023; 241:107743. PMID: 37598473. DOI: 10.1016/j.cmpb.2023.107743. Received 11/16/2022; accepted 08/02/2023.
Abstract
BACKGROUND AND OBJECTIVE Cough audio signal classification is a potentially useful tool in screening for respiratory disorders, such as COVID-19. Since it is dangerous to collect data from patients with contagious diseases, many research teams have turned to crowdsourcing to quickly gather cough sound data. The COUGHVID dataset enlisted expert physicians to annotate and diagnose the underlying diseases present in a limited number of recordings. However, this approach suffers from potential cough mislabeling, as well as disagreement between experts. METHODS In this work, we use a semi-supervised learning (SSL) approach, based on audio signal processing tools and interpretable machine learning models, to improve the labeling consistency of the COUGHVID dataset for 1) COVID-19 versus healthy cough sound classification, 2) distinguishing wet from dry coughs, and 3) assessing cough severity. First, we leverage SSL expert knowledge aggregation techniques to overcome the labeling inconsistencies and label sparsity in the dataset. Next, our SSL approach is used to identify a subsample of re-labeled COUGHVID audio samples that can be used to train or augment future cough classifiers. RESULTS The consistency of the re-labeled COVID-19 and healthy data is demonstrated by its high degree of inter-class feature separability: 3x higher than that of the user-labeled data. Similarly, the SSL method increases this separability by 11.3x for cough type and 5.1x for severity classifications. Furthermore, the spectral differences in the user-labeled audio segments are amplified in the re-labeled data, resulting in significantly different power spectral densities between healthy and COVID-19 coughs in the 1-1.5 kHz range (p = 1.2×10⁻⁶⁴), which demonstrates both the increased consistency of the new dataset and its explainability from an acoustic perspective. Finally, we demonstrate how the re-labeled dataset can be used to train a COVID-19 classifier, achieving an AUC of 0.797. CONCLUSIONS We propose an SSL expert knowledge aggregation technique for the field of cough sound classification for the first time, and demonstrate how it can be used to combine the medical knowledge of multiple experts in an explainable fashion, thus providing abundant, consistent data for cough classification tasks.
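The inter-class feature separability reported above can be quantified with a Fisher-style ratio: the squared difference of class means over the summed within-class variances of a feature. The feature values below are invented toy data, not COUGHVID measurements.

```python
def fisher_separability(class_a, class_b):
    """Fisher-style separability of one feature between two classes:
    squared mean difference over pooled within-class variance."""
    def mean(v):
        return sum(v) / len(v)
    def var(v):
        m = mean(v)
        return sum((x - m) ** 2 for x in v) / len(v)
    num = (mean(class_a) - mean(class_b)) ** 2
    den = var(class_a) + var(class_b)
    return num / den if den else float("inf")

# toy values of one acoustic feature per class
healthy = [1.0, 1.2, 0.8, 1.1]
covid = [2.0, 2.1, 1.9, 2.2]
score = fisher_separability(healthy, covid)
```

Comparing such scores before and after re-labeling is one way to express the paper's "3x higher separability" result.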
Affiliation(s)
- Lara Orlandic
- Embedded Systems Laboratory (ESL), EPFL, Lausanne, Switzerland
- Tomas Teijeiro
- Embedded Systems Laboratory (ESL), EPFL, Lausanne, Switzerland
- Department of Mathematics, University of the Basque Country (UPV/EHU), Spain
- David Atienza
- Embedded Systems Laboratory (ESL), EPFL, Lausanne, Switzerland
7. Triantafyllopoulos A, Kathan A, Baird A, Christ L, Gebhard A, Gerczuk M, Karas V, Hübner T, Jing X, Liu S, Mallol-Ragolta A, Milling M, Ottl S, Semertzidou A, Rajamani ST, Yan T, Yang Z, Dineley J, Amiriparian S, Bartl-Pokorny KD, Batliner A, Pokorny FB, Schuller BW. HEAR4Health: a blueprint for making computer audition a staple of modern healthcare. Front Digit Health 2023; 5:1196079. PMID: 37767523. PMCID: PMC10520966. DOI: 10.3389/fdgth.2023.1196079. Received 03/29/2023; accepted 09/01/2023.
Abstract
Recent years have seen a rapid increase in digital medicine research in an attempt to transform traditional healthcare systems into their modern, intelligent, and versatile equivalents that are adequately equipped to tackle contemporary challenges. This has led to a wave of applications that utilise AI technologies, first and foremost in the field of medical imaging, but also in the use of wearables and other intelligent sensors. In comparison, computer audition can be seen to be lagging behind, at least in terms of commercial interest. Yet audition has long been a staple assistant for medical practitioners, with the stethoscope being the quintessential sign of doctors around the world. Transforming this traditional technology with the use of AI entails a set of unique challenges. We categorise the advances needed in four key pillars: Hear, corresponding to the cornerstone technologies needed to analyse auditory signals in real-life conditions; Earlier, for the advances needed in computational and data efficiency; Attentively, for accounting for individual differences and handling the longitudinal nature of medical data; and, finally, Responsibly, for ensuring compliance with the ethical standards accorded to the field of medicine. Thus, we provide an overview and perspective of HEAR4Health: the sketch of a modern, ubiquitous sensing system that can bring computer audition on par with other AI technologies in the effort toward improved healthcare systems.
Affiliation(s)
- Andreas Triantafyllopoulos
- EIHW – Chair of Embedded Intelligence for Healthcare and Wellbeing, University of Augsburg, Augsburg, Germany
- Alexander Kathan
- EIHW – Chair of Embedded Intelligence for Healthcare and Wellbeing, University of Augsburg, Augsburg, Germany
- Alice Baird
- EIHW – Chair of Embedded Intelligence for Healthcare and Wellbeing, University of Augsburg, Augsburg, Germany
- Lukas Christ
- EIHW – Chair of Embedded Intelligence for Healthcare and Wellbeing, University of Augsburg, Augsburg, Germany
- Alexander Gebhard
- EIHW – Chair of Embedded Intelligence for Healthcare and Wellbeing, University of Augsburg, Augsburg, Germany
- Maurice Gerczuk
- EIHW – Chair of Embedded Intelligence for Healthcare and Wellbeing, University of Augsburg, Augsburg, Germany
- Vincent Karas
- EIHW – Chair of Embedded Intelligence for Healthcare and Wellbeing, University of Augsburg, Augsburg, Germany
- Tobias Hübner
- EIHW – Chair of Embedded Intelligence for Healthcare and Wellbeing, University of Augsburg, Augsburg, Germany
- Xin Jing
- EIHW – Chair of Embedded Intelligence for Healthcare and Wellbeing, University of Augsburg, Augsburg, Germany
- Shuo Liu
- EIHW – Chair of Embedded Intelligence for Healthcare and Wellbeing, University of Augsburg, Augsburg, Germany
- Adria Mallol-Ragolta
- EIHW – Chair of Embedded Intelligence for Healthcare and Wellbeing, University of Augsburg, Augsburg, Germany
- Centre for Interdisciplinary Health Research, University of Augsburg, Augsburg, Germany
- Manuel Milling
- EIHW – Chair of Embedded Intelligence for Healthcare and Wellbeing, University of Augsburg, Augsburg, Germany
- Sandra Ottl
- EIHW – Chair of Embedded Intelligence for Healthcare and Wellbeing, University of Augsburg, Augsburg, Germany
- Anastasia Semertzidou
- EIHW – Chair of Embedded Intelligence for Healthcare and Wellbeing, University of Augsburg, Augsburg, Germany
- Tianhao Yan
- EIHW – Chair of Embedded Intelligence for Healthcare and Wellbeing, University of Augsburg, Augsburg, Germany
- Zijiang Yang
- EIHW – Chair of Embedded Intelligence for Healthcare and Wellbeing, University of Augsburg, Augsburg, Germany
- Judith Dineley
- EIHW – Chair of Embedded Intelligence for Healthcare and Wellbeing, University of Augsburg, Augsburg, Germany
- Shahin Amiriparian
- EIHW – Chair of Embedded Intelligence for Healthcare and Wellbeing, University of Augsburg, Augsburg, Germany
- Katrin D. Bartl-Pokorny
- EIHW – Chair of Embedded Intelligence for Healthcare and Wellbeing, University of Augsburg, Augsburg, Germany
- Division of Phoniatrics, Medical University of Graz, Graz, Austria
- Anton Batliner
- EIHW – Chair of Embedded Intelligence for Healthcare and Wellbeing, University of Augsburg, Augsburg, Germany
- Florian B. Pokorny
- EIHW – Chair of Embedded Intelligence for Healthcare and Wellbeing, University of Augsburg, Augsburg, Germany
- Division of Phoniatrics, Medical University of Graz, Graz, Austria
- Centre for Interdisciplinary Health Research, University of Augsburg, Augsburg, Germany
- Björn W. Schuller
- EIHW – Chair of Embedded Intelligence for Healthcare and Wellbeing, University of Augsburg, Augsburg, Germany
- Centre for Interdisciplinary Health Research, University of Augsburg, Augsburg, Germany
- GLAM – Group on Language, Audio, & Music, Imperial College London, London, United Kingdom
8. Shekhar K, Chittaragi NB, Koolagudi SG. Automatic diagnosis of COVID-19 related respiratory diseases from speech. Multimed Tools Appl 2023; 82:1-16. PMID: 37362694. PMCID: PMC10050801. DOI: 10.1007/s11042-023-14923-y. Received 08/22/2022; accepted 02/22/2023.
Abstract
In this work, we propose an intelligent, automatic system that recognizes COVID-19-related illnesses from speech samples alone, using automatic speech processing techniques. We used a standard crowd-sourced dataset collected by the University of Cambridge through a web-based application and an Android/iPhone app. We worked on the cough and breath datasets individually, as well as on a combination of both. We trained models on two sets of features, one consisting of only standard audio features such as spectral and prosodic features, and one combining excitation source features with the standard audio features, using shallow classifiers such as ensemble classifiers and SVM classification methods. Our model performed well on both the breath and cough datasets, but the best result in each case was obtained with a different combination of features and classifier. We obtained our best result using only standard audio features on the combined cough and breath data, achieving an accuracy of 84% and an Area Under Curve (AUC) score of 84%. Intelligent systems have already started to make a mark in medical diagnosis, and this type of study can improve the health system by providing much-needed assistance to health workers.
Affiliation(s)
- Kushan Shekhar
- Department of CSE, National Institute of Technology Karnataka, Surathkal, Mangalore, Karnataka, India
- Shashidhar G. Koolagudi
- Department of CSE, National Institute of Technology Karnataka, Surathkal, Mangalore, Karnataka, India
9. Davidson C, Caguana OA, Lozano-García M, Arita Guevara M, Estrada-Petrocelli L, Ferrer-Lluis I, Castillo-Escario Y, Ausín P, Gea J, Jané R. Differences in acoustic features of cough by pneumonia severity in patients with COVID-19: a cross-sectional study. ERJ Open Res 2023; 9:00247-2022. PMID: 37131524. PMCID: PMC9922471. DOI: 10.1183/23120541.00247-2022. Received 05/18/2022; accepted 01/07/2023.
Abstract
Background Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infection is characterised by heterogeneous levels of disease severity, and it is not necessarily apparent whether a patient will develop severe disease or not. This cross-sectional study explores whether acoustic properties of the cough sound of patients with coronavirus disease (COVID-19), the illness caused by SARS-CoV-2, correlate with their disease and pneumonia severity, with the aim of identifying patients with severe disease. Methods Voluntary cough sounds were recorded using a smartphone in 70 COVID-19 patients within the first 24 h of their hospital arrival, between April 2020 and May 2021. Based on gas exchange abnormalities, patients were classified as mild, moderate, or severe. Time- and frequency-based variables were obtained from each cough effort and analysed using a linear mixed-effects modelling approach. Results Records from 62 patients (37% female) were eligible for inclusion in the analysis, with the mild, moderate, and severe groups consisting of 31, 14 and 17 patients, respectively. Five of the parameters examined differed significantly in the cough of patients at different levels of disease severity, and a further two parameters were affected differently by disease severity in men and women. Conclusions We suggest that all these differences reflect the progressive pathophysiological alterations occurring in the respiratory system of COVID-19 patients, and could provide an easy and cost-effective way to initially stratify patients, identifying those with more severe disease and thereby allocating healthcare resources most effectively.
10. Najaran MHT. An evolutionary ensemble learning for diagnosing COVID-19 via cough signals. Intell Med 2023; 3:S2667-1026(23)00002-5. PMID: 36743333. PMCID: PMC9882956. DOI: 10.1016/j.imed.2023.01.001. Received 11/22/2022; accepted 01/11/2023.
Abstract
Objective The spread of COVID-19 has caused great concern around the world, and detecting positive cases is crucial to curbing the pandemic. One symptom of the disease is the dry cough it causes. It has previously been shown that cough signals can be used to identify a variety of diseases, including tuberculosis and asthma. In this paper, we propose an algorithm to diagnose COVID-19 from cough signals. Methods The proposed algorithm is an ensemble scheme that consists of a number of base learners, where each base learner uses a different feature extraction method, including statistical approaches and convolutional neural networks (CNNs) for automatic feature extraction. Features are extracted from the raw signal and from several transforms of it, including the Fourier, wavelet, Hilbert-Huang, and short-time Fourier transforms. The outputs of these base learners are aggregated via a weighted voting scheme, with the weights optimised via an evolutionary paradigm. This paper also proposes a memetic algorithm for training the CNNs in the base learners, which combines the speed of gradient descent (GD) algorithms with the global search-space coverage of evolutionary algorithms. Results Experiments compared the proposed algorithm with rival algorithms, including a number of CNN architectures from the literature and generic machine learning algorithms. The results suggest that the proposed algorithm achieves better performance than existing algorithms in diagnosing COVID-19 via cough signals. Conclusion This research showed that COVID-19 can be diagnosed via cough signals, that CNNs can be employed to process these signals, and that performance may be further improved by optimising the CNN architecture.
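The weighted-voting aggregation at the heart of such an ensemble can be sketched as follows. The base-learner outputs and weights below are invented for illustration; in the paper the weights are optimised by an evolutionary algorithm rather than fixed by hand.

```python
def weighted_vote(predictions, weights):
    """Aggregate base-learner probability outputs with a weighted vote.
    predictions[i] holds base learner i's P(COVID) for each sample."""
    fused = []
    for sample in zip(*predictions):
        score = sum(w * p for w, p in zip(weights, sample)) / sum(weights)
        fused.append(1 if score >= 0.5 else 0)
    return fused

base_outputs = [
    [0.9, 0.2, 0.6],   # learner on raw-signal features
    [0.8, 0.4, 0.3],   # learner on Fourier-transform features
    [0.7, 0.1, 0.8],   # learner on wavelet-transform features
]
weights = [0.5, 0.3, 0.2]  # in the paper, tuned by evolutionary search
labels = weighted_vote(base_outputs, weights)
```

An evolutionary optimiser would evaluate candidate weight vectors by the classification error of `weighted_vote` on a validation set and keep the best-performing ones.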
11. Chetupalli SR, Krishnan P, Sharma N, Muguli A, Kumar R, Nanda V, Pinto LM, Ghosh PK, Ganapathy S. Multi-Modal Point-of-Care Diagnostics for COVID-19 Based on Acoustics and Symptoms. IEEE J Transl Eng Health Med 2023; 11:199-210. PMID: 36909300. PMCID: PMC9994626. DOI: 10.1109/jtehm.2023.3250700. Received 05/13/2022; accepted 02/22/2023.
Abstract
BACKGROUND The COVID-19 pandemic has highlighted the need for alternative respiratory health diagnosis methodologies that improve on time, cost, physical distancing, and detection performance. In this context, identifying acoustic bio-markers of respiratory diseases has received renewed interest. OBJECTIVE In this paper, we aim to design COVID-19 diagnostics based on analyzing acoustics and symptoms data. The data are composed of cough, breathing, and speech signals, together with health symptom records, collected using a web application over a period of twenty months. METHODS We investigate the use of time-frequency features for the acoustic signals and binary features for encoding different health symptoms. We experiment with classifiers such as logistic regression, support vector machines and long short-term memory (LSTM) network models on the acoustic data, while decision tree models are proposed for the symptoms data. RESULTS We show that a multi-modal integration of inference from different acoustic signal categories and symptoms achieves an area under the curve (AUC) of 96.3%, a statistically significant improvement over any individual modality ([Formula: see text]). Experimentation with different feature representations suggests that mel-spectrogram acoustic features perform relatively better across the three kinds of acoustic signals. Further, a score analysis with data recorded from newer SARS-CoV-2 variants highlights the generalization ability of the proposed diagnostic approach for COVID-19 detection. CONCLUSION The proposed method shows a promising direction for COVID-19 detection using a multi-modal dataset, while generalizing to new SARS-CoV-2 variants.
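The multi-modal score fusion and AUC evaluation described above can be sketched as follows. The per-modality scores, labels, and plain weighted averaging are toy stand-ins for the paper's actual fusion scheme.

```python
def auc(scores, labels):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) formulation."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def fuse(modality_scores, weights):
    """Weighted average of per-modality probabilities for each subject."""
    return [sum(w * s for w, s in zip(weights, subj)) / sum(weights)
            for subj in zip(*modality_scores)]

# toy per-subject probabilities from three acoustic modalities
cough = [0.9, 0.4, 0.8, 0.3]
breath = [0.7, 0.5, 0.6, 0.2]
speech = [0.8, 0.6, 0.9, 0.1]
labels = [1, 0, 1, 0]
fused = fuse([cough, breath, speech], [1.0, 1.0, 1.0])
```

Comparing `auc(fused, labels)` against the AUC of each single modality is the kind of test behind the reported multi-modal improvement.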
Affiliation(s)
- Srikanth Raj Chetupalli
- LEAP Laboratory, Department of Electrical Engineering, Indian Institute of Science, Bengaluru 560012, India
- Prashant Krishnan
- LEAP Laboratory, Department of Electrical Engineering, Indian Institute of Science, Bengaluru 560012, India
- Neeraj Sharma
- LEAP Laboratory, Department of Electrical Engineering, Indian Institute of Science, Bengaluru 560012, India
- Ananya Muguli
- LEAP Laboratory, Department of Electrical Engineering, Indian Institute of Science, Bengaluru 560012, India
- Rohit Kumar
- LEAP Laboratory, Department of Electrical Engineering, Indian Institute of Science, Bengaluru 560012, India
- Viral Nanda
- P. D. Hinduja National Hospital and Medical Research Center, Mumbai 400016, India
- Lancelot Mark Pinto
- P. D. Hinduja National Hospital and Medical Research Center, Mumbai 400016, India
- Prasanta Kumar Ghosh
- LEAP Laboratory, Department of Electrical Engineering, Indian Institute of Science, Bengaluru 560012, India
- Sriram Ganapathy
- LEAP Laboratory, Department of Electrical Engineering, Indian Institute of Science, Bengaluru 560012, India
12. Hamidi M, Zealouk O, Satori H, Laaidi N, Salek A. COVID-19 assessment using HMM cough recognition system. Int J Inf Technol 2023; 15:193-201. PMID: 36313860. PMCID: PMC9595586. DOI: 10.1007/s41870-022-01120-7. Received 05/30/2022; accepted 10/13/2022.
Abstract
This paper is part of our contribution to research on the ongoing COVID-19 pandemic around the world. This research aims to use a Hidden Markov Model (HMM) based automatic speech recognition system to analyze the cough signal and determine whether the signal belongs to a sick or healthy speaker. We built a configurable model using HMMs, Gaussian Mixture Models (GMMs), Mel-frequency cepstral coefficients (MFCCs) and a cough corpus collected from healthy and sick voluntary speakers. Our proposed method is able to classify dry cough with sensitivity from 85.86% to 91.57%, and to differentiate dry cough from the COVID-19 cough symptom with specificity from 5% to 10%. The obtained results are very encouraging, motivating us to enrich our corpus with more data and improve the performance of our diagnostic system.
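Sensitivity and specificity, the metrics reported above, are computed from a confusion matrix of classifier decisions against ground truth. The predictions and labels below are toy values for illustration, not the study's data.

```python
def sensitivity_specificity(preds, labels):
    """Sensitivity = TP/(TP+FN) over sick speakers;
    specificity = TN/(TN+FP) over healthy speakers."""
    tp = sum(p == 1 and y == 1 for p, y in zip(preds, labels))
    fn = sum(p == 0 and y == 1 for p, y in zip(preds, labels))
    tn = sum(p == 0 and y == 0 for p, y in zip(preds, labels))
    fp = sum(p == 1 and y == 0 for p, y in zip(preds, labels))
    return tp / (tp + fn), tn / (tn + fp)

labels = [1, 1, 1, 0, 0, 0]   # 1 = sick speaker, 0 = healthy
preds = [1, 1, 0, 1, 0, 0]    # classifier decisions
sens, spec = sensitivity_specificity(preds, labels)
```

With two of three sick speakers detected and two of three healthy speakers correctly rejected, both metrics come out to 2/3 on this toy example.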
Collapse
Affiliation(s)
- Mohamed Hamidi
- Advanced Systems Engineering Laboratory, ENSA-UIT, Kenitra, Morocco; Multimedia and Arts Department, FLLA, UIT, Kenitra, Morocco
| | - Ouissam Zealouk
- LISAC, Department of Mathematics and Computer Science, FSDM, USMBA, Fez, Morocco
| | - Hassan Satori
- LISAC, Department of Mathematics and Computer Science, FSDM, USMBA, Fez, Morocco
| | - Naouar Laaidi
- LISAC, Department of Mathematics and Computer Science, FSDM, USMBA, Fez, Morocco
| | - Amine Salek
- Faculty of Medicine and Pharmacy, UMP, Oujda, Morocco
| |
Collapse
|
13
|
Reliability of crowdsourced data and patient-reported outcome measures in cough-based COVID-19 screening. Sci Rep 2022; 12:21990. [PMID: 36539519 PMCID: PMC9764298 DOI: 10.1038/s41598-022-26492-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/10/2022] [Accepted: 12/15/2022] [Indexed: 12/25/2022] Open
Abstract
Mass community testing is a critical means for monitoring the spread of the COVID-19 pandemic. Polymerase chain reaction (PCR) is the gold standard for detecting the causative severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), but the test is invasive, test centers may not be readily available, and the wait for laboratory results can take several days. Various machine learning based alternatives to PCR screening for SARS-CoV-2 have been proposed, including cough sound analysis. Cough classification models appear to be a robust means of predicting infective status, but collecting reliable PCR-confirmed data for their development is challenging, and recent work using unverified crowdsourced data is seen as a viable alternative. In this study, we report experiments that assess cough classification models trained (i) on data from PCR-confirmed COVID-19 subjects and (ii) on data from individuals self-reporting their infective status; performance is compared on PCR-confirmed data. Models trained on PCR-confirmed data perform better than those trained on patient-reported data; they also exploit more stable predictive features and converge faster. Crowdsourced cough data are less reliable than PCR-confirmed data for developing predictive models for COVID-19, which raises concerns about the utility of patient-reported outcome data in developing other clinical predictive models when better gold-standard data are available.
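The study's central finding, that models trained on noisy self-reported labels underperform those trained on PCR-confirmed labels, can be mimicked on synthetic data. The toy logistic-regression experiment below flips 30% of training labels to simulate unreliable crowdsourcing and evaluates both models against the gold-standard labels; all data, dimensions, and hyperparameters are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic "cough embeddings": two well-separated classes (stand-ins for COVID-/COVID+).
n = 2000
X = np.vstack([rng.normal(-2, 1, (n // 2, 5)), rng.normal(2, 1, (n // 2, 5))])
y = np.repeat([0, 1], n // 2)

# Simulate unreliable self-reported labels by flipping 30% of the training labels.
y_noisy = y.copy()
flip = rng.random(n) < 0.30
y_noisy[flip] = 1 - y_noisy[flip]

def train_logreg(X, y, lr=0.1, epochs=200):
    """Plain batch gradient descent on the logistic loss."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        w -= lr * X.T @ (p - y) / len(y)
        b -= lr * np.mean(p - y)
    return w, b

def accuracy(w, b, X, y):
    return float(np.mean(((X @ w + b) > 0) == (y == 1)))

w_clean, b_clean = train_logreg(X, y)
w_noisy, b_noisy = train_logreg(X, y_noisy)
# Evaluate both models against the gold-standard (clean) labels.
acc_clean = accuracy(w_clean, b_clean, X, y)
acc_noisy = accuracy(w_noisy, b_noisy, X, y)
```

Evaluation here is on the training set purely to keep the sketch short; the paper's point about feature stability and convergence requires a proper held-out comparison.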
Collapse
|
14
|
Lalouani W, Younis M, Emokpae RN, Emokpae LE. Enabling effective breathing sound analysis for automated diagnosis of lung diseases. SMART HEALTH 2022; 26:100329. [PMID: 36275046 PMCID: PMC9576264 DOI: 10.1016/j.smhl.2022.100329] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 04/27/2022] [Revised: 07/21/2022] [Accepted: 09/29/2022] [Indexed: 10/29/2022]
Abstract
With the emergence of the COVID-19 pandemic, early diagnosis of lung diseases has attracted growing attention. Monitoring breathing sounds through auscultation is the traditional means of assessing the status of a patient's respiratory health; the stethoscope is one of the clinical tools physicians use to diagnose lung diseases and anomalies. Meanwhile, recent technological advances have made telehealth systems a practical and effective option for health status assessment and remote patient monitoring, and interest in telehealth solutions has grown further with the COVID-19 pandemic. These systems aim to provide increased safety and to help cope with the massive growth in healthcare demand. In particular, employing acoustic sensors to collect breathing sounds would enable real-time assessment and instantaneous detection of anomalies. However, existing work focuses on autonomous determination of the respiratory rate, which is unsuitable for anomaly detection because it cannot deal with noisy recordings. This paper presents a novel approach for effective breathing sound analysis. We propose a new mechanism for segmenting the captured acoustic signals to identify breathing cycles in recorded sound, and a scoring scheme that qualifies each segment for the targeted respiratory illness in the overall analysis. We demonstrate the effectiveness of our approach through experiments on published COPD datasets.
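The kind of breathing-cycle segmentation described above can be sketched with a simple short-time-energy threshold; this is an assumed stand-in for illustration, not the authors' actual segmentation mechanism, and the window length and threshold ratio are arbitrary.

```python
import numpy as np

def segment_cycles(signal, sr, win=0.05, thresh_ratio=0.2):
    """Split a recording into candidate breathing segments by short-time energy."""
    w = int(win * sr)
    n = len(signal) // w
    energy = np.array([np.sum(signal[i * w:(i + 1) * w] ** 2) for i in range(n)])
    active = energy > thresh_ratio * energy.max()
    # Group consecutive active windows into (start, end) sample ranges.
    segments, start = [], None
    for i, a in enumerate(active):
        if a and start is None:
            start = i
        elif not a and start is not None:
            segments.append((start * w, i * w))
            start = None
    if start is not None:
        segments.append((start * w, n * w))
    return segments
```

Each returned segment would then be scored against the targeted respiratory illness in a full pipeline.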
Collapse
Affiliation(s)
- Wassila Lalouani
- Department of Computer and Information Science, Towson University, USA
| | - Mohamed Younis
- CSEE Dept., Univ. of Maryland, Baltimore County, Baltimore, MD, USA
| | | | - Lloyd E. Emokpae
- LASARRUS Clinic and Research Center Inc., Baltimore, MD, USA; Corresponding author
| |
Collapse
|
15
|
Xia T, Han J, Mascolo C. Exploring machine learning for audio-based respiratory condition screening: A concise review of databases, methods, and open issues. Exp Biol Med (Maywood) 2022; 247:2053-2061. [PMID: 35974706 PMCID: PMC9791302 DOI: 10.1177/15353702221115428] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/30/2022] Open
Abstract
Auscultation plays an important role in the clinic, and the research community has been exploring machine learning (ML) to enable remote and automatic auscultation for respiratory condition screening via sounds. To give the big picture of what is going on in this field, in this narrative review, we describe publicly available audio databases that can be used for experiments, illustrate the developed ML methods proposed to date, and flag some under-considered issues which still need attention. Compared to existing surveys on the topic, we cover the latest literature, especially those audio-based COVID-19 detection studies which have gained extensive attention in the last two years. This work can help to facilitate the application of artificial intelligence in the respiratory auscultation field.
Collapse
|
16
|
Suo J, Liu Y, Wu C, Chen M, Huang Q, Liu Y, Yao K, Chen Y, Pan Q, Chang X, Leung AYL, Chan H, Zhang G, Yang Z, Daoud W, Li X, Roy VAL, Shen J, Yu X, Wang J, Li WJ. Wide-Bandwidth Nanocomposite-Sensor Integrated Smart Mask for Tracking Multiphase Respiratory Activities. ADVANCED SCIENCE (WEINHEIM, BADEN-WURTTEMBERG, GERMANY) 2022; 9:e2203565. [PMID: 35999427 PMCID: PMC9631096 DOI: 10.1002/advs.202203565] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 06/20/2022] [Revised: 08/03/2022] [Indexed: 06/15/2023]
Abstract
Wearing masks has been a recommended protective measure due to the risks of coronavirus disease 2019 (COVID-19), even in its coming endemic phase. Deploying a "smart mask" to monitor human physiological signals is therefore highly beneficial for personal and public health. This work presents a smart mask integrating an ultrathin (≈400 µm) nanocomposite sponge structure-based soundwave sensor, which provides high sensitivity over a wide-bandwidth dynamic pressure range, i.e., it is capable of detecting various respiratory sounds of breathing, speaking, and coughing. Thirty-one subjects tested the smart mask by recording their respiratory activities. Machine/deep learning methods, i.e., support vector machines and convolutional neural networks, are used to recognize these activities and show average macro-recalls of ≈95% in both individual and generalized models. With rich high-frequency (≈4000 Hz) information recorded, two-/tri-phase coughs can be mapped and spoken words can be identified, demonstrating that the smart mask could serve as a daily wearable Internet of Things (IoT) device for respiratory disease identification, voice interaction, and similar future applications. This work bridges the technological gap between ultra-lightweight but high-frequency-response sensor fabrication, signal transduction and processing, and machine/deep learning to demonstrate a wearable device for continual health monitoring in daily life.
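The macro-recall metric the study reports (≈95% across breathing/speaking/coughing) averages per-class recall so that frequent activities do not dominate the score; a minimal sketch (it assumes every class appears in the ground-truth labels):

```python
import numpy as np

def macro_recall(y_true, y_pred, classes):
    """Mean of per-class recall: insensitive to class imbalance across activities."""
    recalls = []
    for c in classes:
        mask = y_true == c               # all ground-truth instances of class c
        recalls.append(np.mean(y_pred[mask] == c))
    return float(np.mean(recalls))
```

With three classes of sizes 4, 2, and 2 and per-class recalls of 0.75, 1.0, and 0.5, the macro-recall is their unweighted mean, 0.75, regardless of class sizes.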
Collapse
Affiliation(s)
- Jiao Suo
- Dept. of Mechanical Engineering, City University of Hong Kong, Hong Kong, China
| | - Yifan Liu
- Dept. of Mechanical Engineering, City University of Hong Kong, Hong Kong, China
| | - Cong Wu
- Dept. of Mechanical Engineering, City University of Hong Kong, Hong Kong, China
- Hong Kong Centre for Cerebro‐cardiovascular Health Engineering (COCHE), Hong Kong, China
| | - Meng Chen
- Dept. of Mechanical Engineering, City University of Hong Kong, Hong Kong, China
| | - Qingyun Huang
- Dept. of Mechanical Engineering, City University of Hong Kong, Hong Kong, China
| | - Yiming Liu
- Dept. of Biomedical Engineering, City University of Hong Kong, Hong Kong, China
| | - Kuanming Yao
- Dept. of Biomedical Engineering, City University of Hong Kong, Hong Kong, China
| | - Yangbin Chen
- Dept. of Computer Science, City University of Hong Kong, Hong Kong, China
| | - Qiqi Pan
- Dept. of Mechanical Engineering, City University of Hong Kong, Hong Kong, China
| | - Xiaoyu Chang
- Dept. of Mechanical Engineering, City University of Hong Kong, Hong Kong, China
| | | | - Ho‐yin Chan
- Dept. of Mechanical Engineering, City University of Hong Kong, Hong Kong, China
| | - Guanglie Zhang
- Dept. of Mechanical Engineering, City University of Hong Kong, Hong Kong, China
| | - Zhengbao Yang
- Dept. of Mechanical Engineering, City University of Hong Kong, Hong Kong, China
| | - Walid Daoud
- Dept. of Mechanical Engineering, City University of Hong Kong, Hong Kong, China
| | - Xinyue Li
- School of Data Science, City University of Hong Kong, Hong Kong, China
| | | | - Jiangang Shen
- School of Chinese Medicine, The University of Hong Kong, Hong Kong, China
| | - Xinge Yu
- Dept. of Biomedical Engineering, City University of Hong Kong, Hong Kong, China
- Hong Kong Centre for Cerebro‐cardiovascular Health Engineering (COCHE), Hong Kong, China
| | - Jianping Wang
- Dept. of Computer Science, City University of Hong Kong, Hong Kong, China
| | - Wen Jung Li
- Dept. of Mechanical Engineering, City University of Hong Kong, Hong Kong, China
- Hong Kong Centre for Cerebro‐cardiovascular Health Engineering (COCHE), Hong Kong, China
| |
Collapse
|
17
|
Cohen-McFarlane M, Xi P, Wallace B, Habashy K, Huq S, Goubran R, Knoefel F. Evaluation of Respiratory Sounds Using Image-Based Approaches for Health Measurement Applications. IEEE OPEN JOURNAL OF ENGINEERING IN MEDICINE AND BIOLOGY 2022; 3:134-141. [PMID: 36578775 PMCID: PMC9788675 DOI: 10.1109/ojemb.2022.3202435] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/07/2022] [Revised: 05/06/2022] [Accepted: 08/25/2022] [Indexed: 12/31/2022] Open
Abstract
Goal: The evaluation of respiratory events using audio sensing in an at-home setting can be indicative of worsening health conditions. This paper investigates the use of image-based transfer learning applied to five audio visualizations to evaluate three classification tasks (C1: wet vs. dry vs. whooping cough vs. restricted breathing; C2: wet vs. dry cough; C3: cough vs. restricted breathing). Methods: The five visualizations (linear spectrogram, logarithmic spectrogram, Mel-spectrogram, wavelet scalograms, and aggregate images) are applied to a pre-trained AlexNet image classifier for all tasks. Results: The aggregate image-based classifier achieved the highest overall performance across all tasks with C1, C2, and C3 having testing accuracies of 0.88, 0.88, and 0.91 respectively. However, the Mel-spectrogram method had the highest testing accuracy (0.94) for C2. Conclusions: The classification of respiratory events using aggregate image inputs to transfer learning approaches may help healthcare professionals by providing information that would otherwise be unavailable to them.
Collapse
Affiliation(s)
- Madison Cohen-McFarlane
- AGE-WELL NCE, Carleton University, Ottawa, ON K1S 5B6, Canada
- AGE-WELL SAM3 National Innovation Hub, Carleton University, Ottawa, ON K1S 5B7, Canada
| | - Pengcheng Xi
- Digital Technologies Research Centre, National Research Council Canada, Ottawa, ON K1A 0R6, Canada
| | - Bruce Wallace
- AGE-WELL SAM3 National Innovation Hub, Carleton University, Ottawa, ON K1S 5B7, Canada
- AGE-WELL NCE, Carleton University, Ottawa, ON K1S 5B7, Canada
- Bruyère Research Institute, Ottawa, ON K1N 5C8, Canada
| | - Karim Habashy
- National Research Council Canada, Ottawa, ON K1A 0R6, Canada
| | - Saiful Huq
- Department of Systems and Computer Engineering, Carleton University, Ottawa, ON K1S 5B6, Canada
| | - Rafik Goubran
- AGE-WELL SAM3 National Innovation Hub, Carleton University, Ottawa, ON K1S 5B7, Canada
- Bruyère Research Institute, Ottawa, ON K1N 5C8, Canada
| | - Frank Knoefel
- Bruyère Research Institute, Bruyère Continuing Care, Elisabeth Bruyère Hospital, Ottawa, ON K1N 5C8, Canada
- AGE-WELL NCE, Carleton University, Ottawa, ON K1S 5B6, Canada
- AGE-WELL SAM3 National Innovation Hub, Ottawa, ON K1S 5B7, Canada
| |
Collapse
|
18
|
Aly M, Alotaibi NS. A novel deep learning model to detect COVID-19 based on wavelet features extracted from Mel-scale spectrogram of patients' cough and breathing sounds. INFORMATICS IN MEDICINE UNLOCKED 2022; 32:101049. [PMID: 35989705 PMCID: PMC9375256 DOI: 10.1016/j.imu.2022.101049] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/29/2022] [Revised: 08/08/2022] [Accepted: 08/09/2022] [Indexed: 10/26/2022] Open
Abstract
The goal of this paper is to classify cough and breath sounds containing COVID-19 artefacts in signals from dynamic real-life environments. Cough and breath sounds were chosen over other common symptoms so that suspected COVID-19 patients can monitor themselves regularly from the comfort of their homes, neither overloading the healthcare system nor unwittingly spreading the disease. The presented model includes two main phases. The first phase is the sound-to-image transformation, implemented with the Mel-scale spectrogram approach. The second phase extracts features and classifies them using nine deep transfer models (ResNet18/34/50/100/101, GoogLeNet, SqueezeNet, MobileNetv2, and NasNetmobile). The dataset contains data from almost 1600 people (1185 male and 415 female) from all over the world. The classification model reaches an accuracy of 99.2% with the SGDM optimizer, good enough that a large set of labelled cough and breath data can be used to test generalization. The results demonstrate that ResNet18 is the most stable model for classifying cough and breath tones from a restricted dataset, with a sensitivity of 98.3% and a specificity of 97.8%. The presented model thus proves more trustworthy and accurate than existing models, and the accuracy achieved is promising enough to put extrapolation and generalization to the test.
Collapse
Affiliation(s)
- Mohammed Aly
- Department of Artificial Intelligence, Faculty of Computers and Artificial Intelligence, Egyptian Russian University, Badr City, 11829, Cairo, Egypt
| | - Nouf Saeed Alotaibi
- Department of Computer Science, College of Science, Shaqra University, Shaqra City, 11961, Saudi Arabia
| |
Collapse
|
19
|
Pahar M, Miranda I, Diacon A, Niesler T. Automatic Non-Invasive Cough Detection based on Accelerometer and Audio Signals. JOURNAL OF SIGNAL PROCESSING SYSTEMS 2022; 94:821-835. [PMID: 35341095 PMCID: PMC8934184 DOI: 10.1007/s11265-022-01748-5] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 08/31/2021] [Revised: 01/09/2022] [Accepted: 02/23/2022] [Indexed: 12/01/2022]
Abstract
We present an automatic non-invasive way of detecting cough events based on both accelerometer and audio signals. The acceleration signals are captured by a smartphone firmly attached to the patient's bed, using its integrated accelerometer. The audio signals are captured simultaneously by the same smartphone using an external microphone. We have compiled a manually annotated dataset containing such simultaneously captured acceleration and audio signals for approximately 6000 cough and 68000 non-cough events from 14 adult male patients. Logistic regression (LR), support vector machine (SVM) and multilayer perceptron (MLP) classifiers provide a baseline and are compared with three deep architectures, a convolutional neural network (CNN), a long short-term memory (LSTM) network, and a residual-based architecture (Resnet50), using a leave-one-out cross-validation scheme. We find that either acceleration or audio signals can distinguish coughing from other activities, including sneezing, throat-clearing, and movement on the bed, with high accuracy. In all cases, however, the deep neural networks outperform the shallow classifiers by a clear margin, and Resnet50 offers the best performance, achieving an area under the ROC curve (AUC) exceeding 0.98 for acceleration and 0.99 for audio signals. While audio-based classification consistently outperforms acceleration-based classification, the difference is very small for the best systems. Since the acceleration signal requires less processing power, sidesteps the need to record audio (inherently protecting privacy), and comes from a device attached to the bed rather than worn, an accelerometer-based, highly accurate, non-invasive cough detector may represent a more convenient and readily accepted method for long-term cough monitoring.
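The leave-one-out scheme used here holds out all events from one patient per fold, so no patient contributes to both training and testing. A minimal sketch of such a leave-one-patient-out splitter over event indices (the patient IDs in the usage example are hypothetical):

```python
def leave_one_patient_out(patient_ids):
    """Yield (held_out_patient, train_idx, test_idx), one fold per patient."""
    patients = sorted(set(patient_ids))
    for held_out in patients:
        train = [i for i, p in enumerate(patient_ids) if p != held_out]
        test = [i for i, p in enumerate(patient_ids) if p == held_out]
        yield held_out, train, test
```

Each fold's train/test index lists would index into the event feature matrix before fitting any of the compared classifiers.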
Collapse
Affiliation(s)
- Madhurananda Pahar
- Department of Electrical and Electronic Engineering, Stellenbosch University, Stellenbosch 7600, Western Cape, South Africa
| | - Igor Miranda
- Federal University of Recôncavo da Bahia, Cruz das Almas 44.380-000, Bahia, Brazil
| | - Andreas Diacon
- TASK Applied Science, Cape Town, Western Cape, South Africa
| | - Thomas Niesler
- Department of Electrical and Electronic Engineering, Stellenbosch University, Stellenbosch 7600, Western Cape, South Africa
| |
Collapse
|
20
|
Tsang KCH, Pinnock H, Wilson AM, Shah SA. Application of Machine Learning Algorithms for Asthma Management with mHealth: A Clinical Review. J Asthma Allergy 2022; 15:855-873. [PMID: 35791395 PMCID: PMC9250768 DOI: 10.2147/jaa.s285742] [Citation(s) in RCA: 18] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/27/2021] [Accepted: 06/16/2022] [Indexed: 12/21/2022] Open
Abstract
Background Asthma is a variable long-term condition. Currently, there is no cure for asthma and the focus is, therefore, on long-term management. Mobile health (mHealth) is promising for chronic disease management but, to realize its potential, it needs to go beyond simple monitoring; mHealth therefore needs to leverage machine learning to provide tailored feedback through personalized algorithms. There is a need to understand the extent to which machine learning has been leveraged in the context of mHealth for asthma management, and this review aims to fill that gap. Methods We searched PubMed for peer-reviewed studies from the last five years that applied machine learning to data derived from mHealth for asthma management. We selected studies that included some human data beyond that routinely collected in primary care and used at least one machine learning algorithm. Results Out of 90 studies, we identified 22 relevant studies for further review. Broadly, existing research efforts fall into three types: 1) technology development, 2) attack prediction, and 3) patient clustering. Using data from a variety of devices (smartphones, smartwatches, peak flow meters, electronic noses, smart inhalers, and pulse oximeters), most applications used supervised learning algorithms (logistic regression, decision trees, and related algorithms), while a few used unsupervised learning algorithms. The vast majority used traditional machine learning techniques, but a few studies investigated deep learning algorithms. Discussion In the past five years, many studies have successfully applied machine learning to asthma mHealth data. However, most have been developed on small datasets with, at best, internal validation; small sample sizes and lack of external validation limit their generalizability. Future research should collect data more representative of the wider asthma population and focus on validating the derived algorithms and technologies in real-world settings.
Collapse
Affiliation(s)
- Kevin C H Tsang
- Asthma UK Centre for Applied Research, Usher Institute, University of Edinburgh, Edinburgh, UK
| | - Hilary Pinnock
- Asthma UK Centre for Applied Research, Usher Institute, University of Edinburgh, Edinburgh, UK
| | - Andrew M Wilson
- Asthma UK Centre for Applied Research, and Norwich Medical School, University of East Anglia, Norwich, UK
| | - Syed Ahmar Shah
- Asthma UK Centre for Applied Research, Usher Institute, University of Edinburgh, Edinburgh, UK
| |
Collapse
|
21
|
Hamdi S, Oussalah M, Moussaoui A, Saidi M. Attention-based hybrid CNN-LSTM and spectral data augmentation for COVID-19 diagnosis from cough sound. J Intell Inf Syst 2022; 59:367-389. [PMID: 35498369 PMCID: PMC9034264 DOI: 10.1007/s10844-022-00707-7] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/09/2022] [Revised: 03/23/2022] [Accepted: 03/27/2022] [Indexed: 12/11/2022]
Abstract
The COVID-19 pandemic has fueled interest in artificial intelligence tools for quick diagnosis to limit virus spread. Over 60% of infected people complain of a dry cough, and cough and other respiratory sounds have been used to build diagnosis models in much recent research. In this work, we propose an augmentation pipeline applied to the pre-filtered data that uses (i) pitch-shifting to augment the raw signal and (ii) the spectral augmentation technique SpecAugment to augment the computed mel-spectrograms. A deep learning architecture that hybridizes convolutional neural networks and long short-term memory with an attention mechanism is proposed for building the classification model. The feasibility of the proposed approach is demonstrated through a set of testing scenarios on the large-scale COUGHVID cough dataset and through comparison with three baseline models. Our classification model achieved 91.13% testing accuracy, 90.93% sensitivity, and an area under the receiver operating characteristic curve of 91.13%.
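SpecAugment, the spectral augmentation technique named above, zeroes out random frequency bands and time spans of the mel-spectrogram so the model cannot rely on any single band or frame. A minimal numpy sketch; the mask counts and maximum widths are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def spec_augment(mel_spec, n_freq_masks=2, n_time_masks=2, max_f=8, max_t=16, rng=None):
    """Apply SpecAugment-style frequency and time masking to a copy of a mel-spectrogram."""
    if rng is None:
        rng = np.random.default_rng()
    spec = mel_spec.copy()
    n_mels, n_frames = spec.shape
    for _ in range(n_freq_masks):
        f = rng.integers(1, max_f + 1)            # mask width in mel bins
        f0 = rng.integers(0, n_mels - f + 1)      # mask start bin
        spec[f0:f0 + f, :] = 0.0
    for _ in range(n_time_masks):
        t = rng.integers(1, max_t + 1)            # mask width in frames
        t0 = rng.integers(0, n_frames - t + 1)    # mask start frame
        spec[:, t0:t0 + t] = 0.0
    return spec
```

In training, a fresh random masking would be drawn for every epoch, multiplying the effective dataset size alongside the pitch-shifted raw-signal copies.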
Collapse
Affiliation(s)
- Skander Hamdi
- Department of Computer Science, University of Ferhat Abbes Setif, 19000 Setif, Algeria
| | - Mourad Oussalah
- Department of Computer Science and Engineering, University of Oulu, 90570 Oulu, Finland
| | - Abdelouahab Moussaoui
- Department of Computer Science, University of Ferhat Abbes Setif, 19000 Setif, Algeria
| | - Mohamed Saidi
- Department of Computer Science, University of Ferhat Abbes Setif, 19000 Setif, Algeria
| |
Collapse
|
22
|
Serrurier A, Neuschaefer-Rube C, Röhrig R. Past and Trends in Cough Sound Acquisition, Automatic Detection and Automatic Classification: A Comparative Review. SENSORS (BASEL, SWITZERLAND) 2022; 22:2896. [PMID: 35458885 PMCID: PMC9027375 DOI: 10.3390/s22082896] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 03/10/2022] [Revised: 04/07/2022] [Accepted: 04/08/2022] [Indexed: 11/16/2022]
Abstract
Cough is a very common symptom and the most frequent reason for seeking medical advice. Optimized care inevitably requires adapted recording of this symptom and automatic processing. This study provides an updated, exhaustive quantitative review of the field of cough sound acquisition, automatic detection in longer audio sequences, and automatic classification of the cough's nature or the underlying disease. Related studies were analyzed, and metrics were extracted and processed to create a quantitative characterization of the state-of-the-art and trends. A list of objective criteria was established to select a subset of the most complete detection studies with a view to deployment in clinical practice. One hundred and forty-four studies were short-listed, and a picture of the state-of-the-art technology is drawn. The trends show an increasing number of classification studies, growing dataset sizes (in part from crowdsourcing), a rapid increase in COVID-19 studies, the prevalence of smartphones and wearable sensors for acquisition, and a rapid expansion of deep learning. Finally, a subset of 12 detection studies is identified as the most complete. An unequaled quantitative overview is presented: the field shows a remarkable dynamic, boosted by research on COVID-19 diagnosis, and a perfect adaptation to mobile health.
Collapse
Affiliation(s)
- Antoine Serrurier
- Institute of Medical Informatics, University Hospital of the RWTH Aachen, 52057 Aachen, Germany
- Clinic for Phoniatrics, Pedaudiology & Communication Disorders, University Hospital of the RWTH Aachen, 52057 Aachen, Germany
| | - Christiane Neuschaefer-Rube
- Clinic for Phoniatrics, Pedaudiology & Communication Disorders, University Hospital of the RWTH Aachen, 52057 Aachen, Germany
| | - Rainer Röhrig
- Institute of Medical Informatics, University Hospital of the RWTH Aachen, 52057 Aachen, Germany
| |
Collapse
|
23
|
Gabaldón-Figueira JC, Keen E, Giménez G, Orrillo V, Blavia I, Doré DH, Armendáriz N, Chaccour J, Fernandez-Montero A, Bartolomé J, Umashankar N, Small P, Grandjean Lapierre S, Chaccour C. Acoustic surveillance of cough for detecting respiratory disease using artificial intelligence. ERJ Open Res 2022; 8:00053-2022. [PMID: 35651361 PMCID: PMC9149391 DOI: 10.1183/23120541.00053-2022] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/03/2022] [Accepted: 03/24/2022] [Indexed: 12/12/2022] Open
Abstract
Research question Can smartphones be used to detect individual and population-level changes in cough frequency that correlate with the incidence of coronavirus disease 2019 (COVID-19) and other respiratory infections? Methods This was a prospective cohort study carried out in Pamplona (Spain) between 2020 and 2021 using artificial intelligence cough detection software. Changes in cough frequency around the time of medical consultation were evaluated using a randomisation routine; significance was tested by comparing the distribution of cough frequencies to that obtained from a model of no difference. The correlation between changes in cough frequency and COVID-19 incidence was studied using an autoregressive moving average analysis, and its strength determined by calculating its autocorrelation function (ACF). Predictors of regular use of the system were studied using a linear regression. Overall user experience was evaluated using a satisfaction questionnaire and through focus group discussions. Results We followed up 616 participants and collected >62 000 coughs. Coughs per hour surged around the time cohort subjects sought medical care (difference +0.77 coughs·h-1; p=0.00001). There was a weak temporal correlation between aggregated coughs and the incidence of COVID-19 in the local population (ACF 0.43). Technical issues affected uptake and regular use of the system. Interpretation Artificial intelligence systems can detect changes in cough frequency that temporally correlate with the onset of clinical disease at the individual level. A clearer correlation with population-level COVID-19 incidence, or other respiratory conditions, could be achieved with better penetration of and compliance with cough monitoring.
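The randomisation routine described, comparing an observed cough-frequency change to a model of no difference, is in essence a permutation test. A minimal sketch with invented coughs-per-hour data (the study's actual routine and data differ):

```python
import numpy as np

def permutation_test(pre, post, n_perm=2000, rng=None):
    """One-sided p-value for the observed rise in mean cough rate vs. a no-difference model."""
    if rng is None:
        rng = np.random.default_rng(0)
    observed = np.mean(post) - np.mean(pre)
    pooled = np.concatenate([pre, post])
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)  # under the null, pre/post labels are exchangeable
        diff = np.mean(pooled[len(pre):]) - np.mean(pooled[:len(pre)])
        if diff >= observed:
            count += 1
    # Add-one smoothing avoids reporting an impossible p = 0.
    return observed, (count + 1) / (n_perm + 1)
```

A small p-value means a rise as large as the observed one rarely arises when the pre/post labels carry no information.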
Collapse
Affiliation(s)
- Juan C. Gabaldón-Figueira
- Dept of Microbiology and Infectious Diseases, Clinica Universidad de Navarra, Pamplona, Spain
- ISGlobal, Hospital Clinic, University of Barcelona, Barcelona, Spain
| | - Eric Keen
- Research and Development Dept, Hyfe Inc, Wilmington, DE, USA
| | - Gerard Giménez
- Research and Development Dept, Hyfe Inc, Wilmington, DE, USA
| | - Virginia Orrillo
- School of Pharmacy and Nutrition, University of Navarra, Pamplona, Spain
| | - Isabel Blavia
- School of Pharmacy and Nutrition, University of Navarra, Pamplona, Spain
| | - Dominique Hélène Doré
- Immunopathology Axis, Research Center of the University of Montreal Hospital Center, Montréal, QC, Canada
| | - Nuria Armendáriz
- Primary Healthcare, Navarra Health Service-Osasunbidea, Zizur Mayor, Spain
| | - Juliane Chaccour
- Dept of Microbiology and Infectious Diseases, Clinica Universidad de Navarra, Pamplona, Spain
| | | | - Javier Bartolomé
- Primary Healthcare, Navarra Health Service-Osasunbidea, Zizur Mayor, Spain
| | - Nita Umashankar
- Fowler College of Business, San Diego State University, San Diego, CA, USA
| | - Peter Small
- Research and Development Dept, Hyfe Inc, Wilmington, DE, USA
- Dept of Global Health, University of Washington, Seattle, WA, USA
| | - Simon Grandjean Lapierre
- Immunopathology Axis, Research Center of the University of Montreal Hospital Center, Montréal, QC, Canada
- Dept of Microbiology, Infectious Diseases and Immunology, Research Center of the University of Montreal Hospital Center, Montreal, QC, Canada
- These authors contributed equally
| | - Carlos Chaccour
- Dept of Microbiology and Infectious Diseases, Clinica Universidad de Navarra, Pamplona, Spain
- ISGlobal, Hospital Clinic, University of Barcelona, Barcelona, Spain
- Centro de Investigación Biomédica en Red de Enfermedades Infecciosas, Madrid, Spain
- These authors contributed equally
| |
Collapse
|
24
|
Kruizinga MD, Zhuparris A, Dessing E, Krol FJ, Sprij AJ, Doll RJ, Stuurman FE, Exadaktylos V, Driessen GJA, Cohen AF. Development and technical validation of a smartphone-based pediatric cough detection algorithm. Pediatr Pulmonol 2022; 57:761-767. [PMID: 34964557 PMCID: PMC9306830 DOI: 10.1002/ppul.25801] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 01/09/2021] [Revised: 11/17/2021] [Accepted: 12/13/2021] [Indexed: 11/06/2022]
Abstract
INTRODUCTION Coughing is a common symptom in pediatric lung disease, and cough frequency has been shown to correlate with disease activity in several conditions. Automated cough detection could provide a noninvasive digital biomarker for pediatric clinical trials or care. The aim of this study was to develop a smartphone-based algorithm that objectively and automatically counts the cough sounds of children. METHODS The training set comprised 3228 pediatric cough sounds and 480,780 non-cough sounds from various publicly available sources and continuous sound recordings of 7 patients admitted due to respiratory disease. A Gradient Boost Classifier was fitted on the training data and subsequently validated on recordings from 14 additional patients aged 0-14 admitted to the pediatric ward due to respiratory disease. The robustness of the algorithm was investigated by repeatedly classifying a recording with the smartphone-based algorithm under various conditions. RESULTS The final algorithm obtained an accuracy of 99.7%, a sensitivity of 47.6%, a specificity of 99.96%, a positive predictive value of 82.2%, and a negative predictive value of 99.8% in the validation dataset. The correlation coefficient between manual and automated cough counts in the validation dataset was 0.97 (p < .001). The intra- and interdevice reliability of the algorithm was adequate, and the algorithm performed best at an unobstructed distance of 0.5-1 m from the audio source. CONCLUSION This novel smartphone-based pediatric cough detection application can be used for longitudinal follow-up in clinical care or as a digital endpoint in clinical trials.
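The validation metrics reported above (accuracy, sensitivity, specificity, PPV, NPV) all follow directly from confusion-matrix counts; a minimal sketch, with made-up counts in the usage test rather than the study's:

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard diagnostic metrics from confusion-matrix counts (tp/fp/tn/fn)."""
    return {
        "sensitivity": tp / (tp + fn),   # recall on true coughs
        "specificity": tn / (tn + fp),   # recall on non-coughs
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
    }
```

The study's pattern (very high accuracy and NPV but modest sensitivity) is what one expects when non-cough sounds vastly outnumber coughs: accuracy is dominated by the majority class.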
Affiliation(s)
- Matthijs D Kruizinga: Centre for Human Drug Research, Leiden, The Netherlands; Juliana Children's Hospital, HAGA Teaching Hospital, The Hague, The Netherlands; Leiden University Medical Centre, Leiden, The Netherlands
- Eva Dessing: Centre for Human Drug Research, Leiden, The Netherlands; Juliana Children's Hospital, HAGA Teaching Hospital, The Hague, The Netherlands
- Fas J Krol: Centre for Human Drug Research, Leiden, The Netherlands; Leiden University Medical Centre, Leiden, The Netherlands
- Arwen J Sprij: Juliana Children's Hospital, HAGA Teaching Hospital, The Hague, The Netherlands
- Gertjan J A Driessen: Juliana Children's Hospital, HAGA Teaching Hospital, The Hague, The Netherlands; Department of Pediatrics, Maastricht University Medical Centre, Maastricht, The Netherlands
- Adam F Cohen: Centre for Human Drug Research, Leiden, The Netherlands; Leiden University Medical Centre, Leiden, The Netherlands

25
Ijaz A, Nabeel M, Masood U, Mahmood T, Hashmi MS, Posokhova I, Rizwan A, Imran A. Towards using cough for respiratory disease diagnosis by leveraging Artificial Intelligence: A survey. Informatics in Medicine Unlocked 2022. [DOI: 10.1016/j.imu.2021.100832]
26
Ponomarchuk A, Burenko I, Malkin E, Nazarov I, Kokh V, Avetisian M, Zhukov L. Project Achoo: A Practical Model and Application for COVID-19 Detection From Recordings of Breath, Voice, and Cough. IEEE Journal of Selected Topics in Signal Processing 2022; 16:175-187. [PMID: 35582703] [PMCID: PMC9088778] [DOI: 10.1109/jstsp.2022.3142514]
Abstract
The COVID-19 pandemic created significant interest in, and demand for, infection detection and monitoring solutions. In this paper, we propose a machine learning method to quickly detect COVID-19 using audio recordings made on consumer devices. The approach combines signal processing and noise removal methods with an ensemble of fine-tuned deep learning networks and enables COVID-19 detection from cough sounds. We have also developed and deployed a mobile application that uses a symptom checker together with voice, breath, and cough signals to detect COVID-19 infection. The application showed robust performance on both openly sourced datasets and the noisy data collected during beta testing by end users.
27
Wang X, Gao N, Wen J, Li J, Ma Y, Sun M, Liang J, Shi L. Immunogenicity of a Candidate DTacP-sIPV Combined Vaccine and Its Protection Efficacy against Pertussis in a Rhesus Macaque Model. Vaccines (Basel) 2021; 10:47. [PMID: 35062708] [PMCID: PMC8779802] [DOI: 10.3390/vaccines10010047]
Abstract
The research and development of a pertussis combined vaccine using a novel inactivated poliovirus vaccine made from the Sabin strain (sIPV) is of great significance to the polio eradication project and to addressing the recent resurgence of pertussis. In the present study, we compared the immunogenicity and efficacy of a candidate DTacP-sIPV with those of a commercial DTacP-wIPV/Hib, DTaP/Hib, pertussis vaccine, and an aluminum hydroxide adjuvant control in the rhesus macaque model with a 0-, 1-, and 2-month immunization schedule. At day 28 after the third dose, rhesus macaques were challenged with aerosolized pertussis, and the antibody and cellular responses together with clinical pertussis symptoms were determined. Production of anti-PT, anti-PRN, anti-FHA, anti-DT, anti-TT, and polio type I, II, and III antibodies was induced by the candidate DTacP-sIPV, which was as potent as the commercial vaccines. In comparison with the control group, which showed pertussis symptoms typical of humans after the aerosol challenge, the DTacP-sIPV group did not exhibit obvious clinical pertussis symptoms and had higher neutralization titers of anti-PT, anti-PRN, and anti-FHA. In conclusion, the DTacP-sIPV vaccine was able to induce immunity in rhesus macaques to prevent pertussis infection after immunization, and the developed vaccine was as efficient as other commercial vaccines.
Affiliation(s)
- Xiaoyu Wang, Jiana Wen, Yan Ma, and Mingbo Sun: Laboratory of Vaccine Development, Institute of Medical Biology, Chinese Academy of Medical Science & Peking Union Medical College, Kunming 650118, China
- Na Gao, Jingyan Li, and Jiangli Liang: Key Laboratory of Vaccine Research and Development on Severe Infectious Diseases, Institute of Medical Biology, Chinese Academy of Medical Science & Peking Union Medical College, Kunming 650118, China
- Li Shi: Laboratory of Immunogenetics, Institute of Medical Biology, Chinese Academy of Medical Science & Peking Union Medical College, Kunming 650118, China

28
Sharma NK, Muguli A, Krishnan P, Kumar R, Chetupalli SR, Ganapathy S. Towards sound based testing of COVID-19 - Summary of the first Diagnostics of COVID-19 using Acoustics (DiCOVA) Challenge. Comput Speech Lang 2021; 73:101320. [PMID: 34840419] [PMCID: PMC8610834] [DOI: 10.1016/j.csl.2021.101320]
Abstract
The development of point-of-care tests (POCTs) targeting respiratory diseases has witnessed growing demand in the recent past. Investigating the presence of acoustic biomarkers in modalities such as cough, breathing, and speech sounds, and using them to build POCTs, can offer fast, contactless, and inexpensive testing. In view of this, over the past year we launched the "Coswara" project to collect cough, breathing, and speech sound recordings via worldwide crowdsourcing. With this data, a call for the development of diagnostic tools was announced at Interspeech 2021 as a special session titled "Diagnostics of COVID-19 using Acoustics (DiCOVA) Challenge". The goal was to bring together researchers and practitioners interested in developing acoustics-based COVID-19 POCTs by enabling them to work on the same development and test datasets. As part of the challenge, datasets with breathing, cough, and speech sound samples from COVID-19 and non-COVID-19 individuals were released to the participants. The challenge consisted of two tracks. Track-1 focused only on cough sounds, and participants competed in a leaderboard setting. In Track-2, breathing and speech samples were provided to the participants, without a competitive leaderboard. The challenge attracted more than 85 registrations, with 29 final submissions for Track-1. This paper describes the challenge (datasets, tasks, baseline system) and presents a focused summary of the systems submitted by the participating teams. An analysis of the results from the top four teams showed that a fusion of their scores yields an area under the receiver operating curve (AUC-ROC) of 95.1% on the blind test data. By summarizing the lessons learned, we expect this challenge overview to help accelerate the technological development of acoustics-based POCTs.
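The score fusion of the top teams can be illustrated with a toy late-fusion experiment: averaging several noisy but informative per-sample scores typically raises the AUC above any single system. All numbers below are synthetic and do not reproduce the challenge results.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
y = rng.integers(0, 2, size=400)  # ground-truth labels (1 = COVID-19 positive)
# Four synthetic "team" scores: the true label plus independent noise.
team_scores = [y + rng.normal(0.0, 1.2, size=400) for _ in range(4)]
fused = np.mean(team_scores, axis=0)  # simple mean late fusion

auc_each = [roc_auc_score(y, s) for s in team_scores]
auc_fused = roc_auc_score(y, fused)
print(max(auc_each), auc_fused)
```

Because each team's noise is independent, averaging shrinks it, which is why fused systems often beat every individual submission.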
Affiliation(s)
- Neeraj Kumar Sharma, Ananya Muguli, Prashant Krishnan, Rohit Kumar, Srikanth Raj Chetupalli, and Sriram Ganapathy: Learning and Extraction of Acoustic Patterns (LEAP) Lab, Electrical Engineering, Indian Institute of Science, Bangalore, India

29
Pahar M, Klopper M, Reeve B, Warren R, Theron G, Niesler T. Automatic cough classification for tuberculosis screening in a real-world environment. Physiol Meas 2021; 42. [PMID: 34649231] [DOI: 10.1088/1361-6579/ac2fb8]
Abstract
Objective. The automatic discrimination between coughing sounds produced by patients with tuberculosis (TB) and those produced by patients with other lung ailments. Approach. We present experiments based on a dataset of 1358 forced cough recordings obtained in a developing-world clinic from 16 patients with confirmed active pulmonary TB and 35 patients suffering from respiratory conditions suggestive of TB but confirmed to be TB negative. Using nested cross-validation, we trained and evaluated five machine learning classifiers: logistic regression (LR), support vector machines, k-nearest neighbour, multilayer perceptrons, and convolutional neural networks. Main Results. Although classification is possible in all cases, the best performance is achieved using LR. In combination with feature selection by sequential forward selection, our best LR system achieves an area under the ROC curve (AUC) of 0.94 using 23 features selected from a set of 78 high-resolution mel-frequency cepstral coefficients. This system achieves a sensitivity of 93% at a specificity of 95% and thus exceeds the 90% sensitivity at 70% specificity considered by the World Health Organisation (WHO) as a minimal requirement for a community-based TB triage test. Significance. The automatic classification of cough sounds, when applied to symptomatic patients requiring investigation for TB, can meet the WHO triage specifications for identifying patients who should undergo expensive molecular downstream testing. This makes it a promising and viable means of low-cost, easily deployable frontline screening for TB, which can especially benefit developing countries with a heavy TB burden.
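The feature-selection step (sequential forward selection feeding a logistic-regression classifier) can be sketched with scikit-learn. The MFCC inputs are replaced here by synthetic features, so the selected indices and AUC are illustrative only.

```python
import numpy as np
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
n = 300
y = rng.integers(0, 2, size=n)
informative = y[:, None] + rng.normal(0.0, 1.0, size=(n, 5))  # 5 useful features
noise = rng.normal(0.0, 1.0, size=(n, 15))                    # 15 irrelevant ones
X = np.hstack([informative, noise])

lr = LogisticRegression(max_iter=1000)
sfs = SequentialFeatureSelector(lr, n_features_to_select=5, direction="forward", cv=3)
X_sel = sfs.fit_transform(X, y)  # keep only the 5 forward-selected features

lr.fit(X_sel, y)
auc = roc_auc_score(y, lr.predict_proba(X_sel)[:, 1])
print(np.flatnonzero(sfs.get_support()), auc)
```

In the paper the candidate pool is 78 high-resolution MFCCs and 23 survive selection; the mechanics are the same, only the feature source differs.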
Affiliation(s)
- Madhurananda Pahar and Thomas Niesler: Department of Electrical and Electronic Engineering, Stellenbosch University, South Africa
- Marisa Klopper, Byron Reeve, Rob Warren, and Grant Theron: SAMRC Centre for Tuberculosis Research, Division of Molecular Biology and Human Genetics, DSI/NRF Centre of Excellence for Biomedical Tuberculosis Research, Faculty of Medicine and Health Sciences, Stellenbosch University, South Africa

30
Loey M, Mirjalili S. COVID-19 cough sound symptoms classification from scalogram image representation using deep learning models. Comput Biol Med 2021; 139:105020. [PMID: 34775155] [PMCID: PMC8628520] [DOI: 10.1016/j.compbiomed.2021.105020]
Abstract
Deep learning shows promising performance in diverse fields and has become an emerging technology in artificial intelligence, with recent visual recognition built on ranking images and finding artefacts within them. The aim of this research is to classify COVID-19 cough sounds recorded in altered real-life environments. The introduced model involves two major steps. The first is a sound-to-image transformation optimized by the scalogram technique. The second involves feature extraction and classification based on six deep transfer models (GoogleNet, ResNet18, ResNet50, ResNet101, MobileNetv2, and NasNetmobile). The dataset used contains 1457 cough sound recordings (755 COVID-19 and 702 healthy). The best recognition model reaches an accuracy of 94.9% with the SGDM optimizer, which is promising enough to warrant testing generalization on a wider set of labeled cough data. The outcomes show that ResNet18 is the most stable model for classifying cough sounds from a limited dataset, with a sensitivity of 94.44% and a specificity of 95.37%. Finally, a comparison with similar analyses suggests that the proposed model is more reliable and accurate than current models, and that its precision is promising enough to test extrapolation and generalization.
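The sound-to-image step can be approximated with a toy scalogram built from NumPy alone; real implementations use a continuous wavelet transform, so treat this as a sketch of the idea rather than the paper's pipeline. The wavelet shape and scale range are assumptions.

```python
import numpy as np

def scalogram(x, widths):
    """Convolve the signal with Morlet-like wavelets of increasing width,
    producing one row of the time-frequency image per scale."""
    out = np.empty((len(widths), len(x)))
    for i, w in enumerate(widths):
        n = min(10 * w, len(x))
        k = np.arange(n) - n // 2
        wavelet = np.cos(5 * k / w) * np.exp(-0.5 * (k / w) ** 2)
        out[i] = np.abs(np.convolve(x, wavelet, mode="same"))
    return out

fs = 8000
t = np.arange(0, 0.5, 1 / fs)
burst = np.sin(2 * np.pi * 440 * t) * np.exp(-5 * t)  # synthetic cough-like burst
img = scalogram(burst, widths=np.arange(1, 31))
print(img.shape)  # (30, 4000): 30 scales by 4000 samples
```

The resulting 2-D array is what would be saved as an image and fed to the pretrained networks listed above.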
Affiliation(s)
- Mohamed Loey: Department of Computer Science, Faculty of Computers and Artificial Intelligence, Benha University, Benha 13518, Egypt; Information Technology Program, New Cairo Technological University, New Cairo, Egypt
- Seyedali Mirjalili: Center for Artificial Intelligence Research and Optimization, Torrens University Australia, Fortitude Valley, Brisbane, QLD 4006, Australia; Yonsei Frontier Lab, Yonsei University, Seoul, South Korea

31
Chung Y, Jin J, Jo HI, Lee H, Kim SH, Chung SJ, Yoon HJ, Park J, Jeon JY. Diagnosis of Pneumonia by Cough Sounds Analyzed with Statistical Features and AI. Sensors (Basel) 2021; 21:7036. [PMID: 34770341] [PMCID: PMC8586978] [DOI: 10.3390/s21217036]
Abstract
Pneumonia is a serious disease, often accompanied by complications and sometimes leading to death. Unfortunately, diagnosis of pneumonia is frequently delayed until physical and radiologic examinations are performed. Diagnosing pneumonia from cough sounds would be advantageous as a non-invasive test that could be performed outside a hospital. We aimed to develop an artificial intelligence (AI)-based pneumonia diagnostic algorithm. We collected cough sounds from thirty adult patients with pneumonia or other cough-causing diseases. To quantify the cough sounds, loudness and energy ratio were used to represent the sound level and its spectral variations, and these two features were used to construct the diagnostic algorithm. To estimate the performance of the developed algorithm, we assessed its diagnostic accuracy against pulmonologists' diagnoses based on cough sound alone. The algorithm showed 90.0% sensitivity, 78.6% specificity, and 84.9% overall accuracy for 70 cough sound cases in the pneumonia group and 56 cases in the non-pneumonia group. For the same cases, pulmonologists correctly diagnosed the cough sounds with 56.4% accuracy. These findings show that the proposed AI algorithm has value as an effective assistive technology for diagnosing adult pneumonia with significant reliability.
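The two features named in the abstract, loudness and a spectral energy ratio, might be computed along these lines; the 1 kHz split frequency and the test signal are assumptions for illustration, not the authors' exact definitions.

```python
import numpy as np

def loudness_db(x):
    """Overall sound level as RMS in decibels."""
    return 20 * np.log10(np.sqrt(np.mean(x ** 2)) + 1e-12)

def energy_ratio(x, fs, split_hz=1000.0):
    """Fraction of spectral energy above split_hz (a spectral-variation cue)."""
    power = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    return power[freqs >= split_hz].sum() / (power.sum() + 1e-12)

fs = 8000
t = np.arange(0, 0.2, 1 / fs)
# Test tone: 300 Hz (amplitude 1) plus 2 kHz (amplitude 0.5).
x = np.sin(2 * np.pi * 300 * t) + 0.5 * np.sin(2 * np.pi * 2000 * t)
print(loudness_db(x), energy_ratio(x, fs))  # ratio is 0.125/0.625 = 0.2
```

Feeding such scalar features per cough into any simple classifier gives a pipeline of the kind the abstract describes.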
Affiliation(s)
- Youngbeen Chung and Junhong Park: Department of Mechanical Engineering, Hanyang University, 222 Wangsimri-ro, Seongdong-gu, Seoul 04763, Korea
- Jie Jin: School of Electromechanical and Automotive Engineering, Yantai University, 30 Qingquan Road, Laishan District, Yantai 264005, China
- Hyun In Jo: Department of Architectural Engineering, Hanyang University, 222 Wangsimri-ro, Seongdong-gu, Seoul 04763, Korea
- Hyun Lee, Sang-Heon Kim, Sung Jun Chung, and Ho Joo Yoon: Department of Internal Medicine, Hanyang University Hospital, Hanyang University College of Medicine, 222 Wangsimri-ro, Seongdong-gu, Seoul 04763, Korea
- Jin Yong Jeon: Department of Medical and Digital Engineering, Hanyang University, 222 Wangsimri-ro, Seongdong-gu, Seoul 04763, Korea
- Correspondence: S.-H. Kim (Tel.: +82-02-2220-8336) and J. Park (Tel.: +82-02-2220-0424)

32
Tena A, Clarià F, Solsona F. Automated detection of COVID-19 cough. Biomed Signal Process Control 2021; 71:103175. [PMID: 34539811] [PMCID: PMC8435366] [DOI: 10.1016/j.bspc.2021.103175]
Abstract
Easy detection of COVID-19 is a challenge, as quick biological tests do not offer sufficient accuracy. Success in the fight against new outbreaks depends not only on the efficiency of the tests used, but also on their cost, the time elapsed, and the number of tests that can be performed at scale. Our proposal provides a solution to this challenge. The main objective is to design a freely available, quick, and efficient methodology for the automatic detection of COVID-19 in raw audio files. Our proposal is based on the automated extraction of time-frequency cough features and the selection of the most significant ones, which are then used to diagnose COVID-19 with a supervised machine-learning algorithm. Random Forest performed better than the other models analysed in this study, achieving an accuracy close to 90%. This study demonstrates the feasibility of automatically diagnosing COVID-19 from coughs and its applicability to detecting new outbreaks.
Affiliation(s)
- Alberto Tena: CIMNE, Building C1, North Campus, UPC, Gran Capità, 08034 Barcelona, Spain
- Francesc Clarià and Francesc Solsona: Dept. of Computer Science & INSPIRES, University of Lleida, Jaume II 69, E-25001 Lleida, Spain

33
Melek M. Diagnosis of COVID-19 and non-COVID-19 patients by classifying only a single cough sound. Neural Comput Appl 2021; 33:17621-17632. [PMID: 34345119] [PMCID: PMC8323961] [DOI: 10.1007/s00521-021-06346-3]
Abstract
In the last month of 2019, a new virus emerged in China, spreading rapidly and affecting the whole world. This coronavirus is among the most contagious viruses humanity has encountered, causing a worldwide crisis by leading to severe infections and, in some cases, death. On March 11, 2020, the World Health Organization declared the COVID-19 outbreak a pandemic. Computer-aided digital technologies, which solve many problems and provide convenience in people's lives, were quickly brought to bear on this crisis, and one important area where they can be effective is diagnosis of the disease. Reverse transcription-polymerase chain reaction (RT-PCR), the standard and precise technique for diagnosing the disease, is expensive and time-consuming, and its availability varies across the world. For this reason, distinguishing COVID-19 from a cold or flu through cough sound analysis on smartphones, which have entered the lives of many people in recent years, is both attractive and important. In this study, we propose a machine learning-based system that distinguishes patients with COVID-19 from non-COVID-19 patients by analyzing only a single cough sound. Two data sets were used, one publicly accessible and the other available on request. After combining the data sets, features were extracted from the cough sounds using the mel-frequency cepstral coefficients (MFCCs) method and then classified with seven different machine learning classifiers. To determine the optimum hyperparameter values for the MFCCs and the classifiers, the leave-one-out cross-validation (LOO-CV) strategy was implemented. Based on the results, the k-nearest neighbors classifier with the Euclidean distance (kNN Euclidean) was the most successful, achieving an accuracy of 0.9833, COVID-19 sensitivity of 1.0000, non-COVID-19 sensitivity of 0.9720, F-measure of 0.9799, and area under the ROC curve (AUC) of 0.9860. Finally, the best and most effective features were determined for each classifier using the sequential forward selection (SFS) method. The proposed system compares favourably with similar studies in the literature, can easily be deployed on smartphones, and may facilitate the diagnosis of COVID-19 patients. In addition, since the data set includes reflex and unconscious coughs, the results show that whether coughing is conscious or unconscious has no effect on diagnosing COVID-19 from the cough sound.
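The winning configuration, kNN with Euclidean distance evaluated by leave-one-out cross-validation, can be sketched with scikit-learn. The 13-dimensional MFCC features are replaced by synthetic data, so the accuracy is illustrative only.

```python
import numpy as np
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(3)
# Synthetic 13-dimensional stand-ins for per-cough MFCC feature vectors.
X = np.vstack([rng.normal(0.0, 1.0, (40, 13)),   # non-COVID-19 coughs
               rng.normal(1.5, 1.0, (40, 13))])  # COVID-19 coughs
y = np.array([0] * 40 + [1] * 40)

knn = KNeighborsClassifier(n_neighbors=3, metric="euclidean")
acc = cross_val_score(knn, X, y, cv=LeaveOneOut()).mean()  # LOO-CV accuracy
print(acc)
```

LOO-CV is a natural choice at this dataset size: every recording serves once as the held-out test case, so no data is wasted on a fixed split.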
Affiliation(s)
- Mesut Melek: Department of Electronics and Automation, Gumushane University, 29100 Gumushane, Turkey

34
Alqudaihi KS, Aslam N, Khan IU, Almuhaideb AM, Alsunaidi SJ, Ibrahim NMAR, Alhaidari FA, Shaikh FS, Alsenbel YM, Alalharith DM, Alharthi HM, Alghamdi WM, Alshahrani MS. Cough Sound Detection and Diagnosis Using Artificial Intelligence Techniques: Challenges and Opportunities. IEEE Access 2021; 9:102327-102344. [PMID: 34786317] [PMCID: PMC8545201] [DOI: 10.1109/access.2021.3097559]
Abstract
Coughing is a common symptom of several respiratory diseases, and the sound and type of a cough are useful features to consider when diagnosing a disease. Respiratory infections pose a significant risk to human lives worldwide and cause significant economic losses, particularly in countries with limited therapeutic resources. In this study we reviewed the latest proposed technologies used to control the impact of respiratory diseases. Artificial Intelligence (AI) is a promising technology that aids in data analysis and outcome prediction, thereby supporting people's well-being. We found that the cough symptom can be reliably used by AI algorithms to detect and diagnose different types of known diseases, including pneumonia, pulmonary edema, asthma, tuberculosis (TB), COVID-19, pertussis, and other respiratory diseases. We also identified the techniques that produced the best results for diagnosing respiratory disease from cough samples. This study presents the most recent challenges, solutions, and opportunities in respiratory disease detection and diagnosis, allowing practitioners and researchers to develop better techniques.
Affiliation(s)
- Kawther S. Alqudaihi, Nida Aslam, Irfan Ullah Khan, Shikah J. Alsunaidi, Nehad M. Abdel Rahman Ibrahim, Yasmine M. Alsenbel, Dima M. Alalharith, Hajar M. Alharthi, and Wejdan M. Alghamdi: Department of Computer Science, College of Computer Science and Information Technology, Imam Abdulrahman Bin Faisal University, Dammam 31441, Saudi Arabia
- Abdullah M. Almuhaideb and Fahd A. Alhaidari: Department of Networks and Communications, College of Computer Science and Information Technology, Imam Abdulrahman Bin Faisal University, Dammam 31441, Saudi Arabia
- Fatema S. Shaikh: Department of Computer Information Systems, College of Computer Science and Information Technology, Imam Abdulrahman Bin Faisal University, Dammam 31441, Saudi Arabia
- Mohammed S. Alshahrani: Department of Emergency Medicine, College of Medicine, Imam Abdulrahman Bin Faisal University, Dammam 31441, Saudi Arabia

35
Detecting pertussis in the pediatric population using respiratory sound events and CNN. Biomed Signal Process Control 2021. [DOI: 10.1016/j.bspc.2021.102722]
36
Orlandic L, Teijeiro T, Atienza D. The COUGHVID crowdsourcing dataset, a corpus for the study of large-scale cough analysis algorithms. Sci Data 2021; 8:156. [PMID: 34162883] [PMCID: PMC8222356] [DOI: 10.1038/s41597-021-00937-4]
Abstract
Cough audio signal classification has been successfully used to diagnose a variety of respiratory conditions, and there has been significant interest in leveraging Machine Learning (ML) to provide widespread COVID-19 screening. The COUGHVID dataset provides over 25,000 crowdsourced cough recordings representing a wide range of participant ages, genders, geographic locations, and COVID-19 statuses. First, we contribute our open-sourced cough detection algorithm to the research community to assist in data robustness assessment. Second, four experienced physicians labeled more than 2,800 recordings to diagnose medical abnormalities present in the coughs, thereby contributing one of the largest expert-labeled cough datasets in existence that can be used for a plethora of cough audio classification tasks. Finally, we ensured that coughs labeled as symptomatic and COVID-19 originate from countries with high infection rates. As a result, the COUGHVID dataset contributes a wealth of cough recordings for training ML models to address the world's most urgent health crises.
Affiliation(s)
- Lara Orlandic, Tomas Teijeiro, and David Atienza: Embedded Systems Laboratory (ESL), EPFL, Lausanne, 1015, Switzerland

37
Pahar M, Klopper M, Warren R, Niesler T. COVID-19 cough classification using machine learning and global smartphone recordings. Comput Biol Med 2021; 135:104572. [PMID: 34182331] [PMCID: PMC8213969] [DOI: 10.1016/j.compbiomed.2021.104572]
Abstract
We present a machine learning based COVID-19 cough classifier which can discriminate COVID-19 positive coughs from both COVID-19 negative and healthy coughs recorded on a smartphone. This type of screening is non-contact, easy to apply, and can reduce the workload in testing centres as well as limit transmission by recommending early self-isolation to those who have a cough suggestive of COVID-19. The datasets used in this study include subjects from all six continents and contain both forced and natural coughs, indicating that the approach is widely applicable. The publicly available Coswara dataset contains 92 COVID-19 positive and 1079 healthy subjects, while the second smaller dataset was collected mostly in South Africa and contains 18 COVID-19 positive and 26 COVID-19 negative subjects who have undergone a SARS-CoV laboratory test. Both datasets indicate that COVID-19 positive coughs are 15%–20% shorter than non-COVID coughs. Dataset skew was addressed by applying the synthetic minority oversampling technique (SMOTE). A leave-p-out cross-validation scheme was used to train and evaluate seven machine learning classifiers: logistic regression (LR), k-nearest neighbour (KNN), support vector machine (SVM), multilayer perceptron (MLP), convolutional neural network (CNN), long short-term memory (LSTM) and a residual-based neural network architecture (Resnet50). Our results show that although all classifiers were able to identify COVID-19 coughs, the best performance was exhibited by the Resnet50 classifier, which was best able to discriminate between the COVID-19 positive and the healthy coughs with an area under the ROC curve (AUC) of 0.98. An LSTM classifier was best able to discriminate between the COVID-19 positive and COVID-19 negative coughs, with an AUC of 0.94 after selecting the best 13 features from a sequential forward selection (SFS). 
Since this type of cough audio classification is cost-effective and easy to deploy, it is potentially a useful and viable means of non-contact COVID-19 screening.
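The class-balancing step described above (oversampling the minority COVID-19-positive class with SMOTE before training) can be sketched with the core interpolation rule. This is a toy plain-Python illustration, not the authors' code; the feature vectors, parameter names, and k value are invented for the example.

```python
import random

def smote(minority, n_new, k=2, seed=0):
    """Generate synthetic minority-class samples by interpolating between a
    sample and one of its k nearest neighbours (the basic SMOTE idea used to
    address dataset skew). Feature vectors are plain lists of floats."""
    rng = random.Random(seed)

    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    synthetic = []
    for _ in range(n_new):
        base = rng.choice(minority)
        # k nearest neighbours of the chosen base sample (excluding itself)
        neighbours = sorted((p for p in minority if p is not base),
                            key=lambda p: dist(base, p))[:k]
        nb = rng.choice(neighbours)
        lam = rng.random()  # interpolation factor in [0, 1)
        synthetic.append([b + lam * (n - b) for b, n in zip(base, nb)])
    return synthetic

covid = [[0.9, 1.2], [1.1, 1.0], [0.8, 1.1]]  # toy minority-class features
new = smote(covid, n_new=5)
print(len(new))  # 5 synthetic cough feature vectors
```

Because each synthetic point is a convex combination of two real points, it stays inside the minority class's feature range, which is what makes the technique safer than naive duplication.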
Collapse
Affiliation(s)
- Madhurananda Pahar
- Department of Electrical and Electronic Engineering, Stellenbosch University, South Africa.
| | - Marisa Klopper
- SAMRC Centre for Tuberculosis Research, DSI-NRF Centre of Excellence for Biomedical Tuberculosis Research, Division of Molecular Biology and Human Genetics, Faculty of Medicine and Health Sciences, Stellenbosch University, South Africa.
| | - Robin Warren
- SAMRC Centre for Tuberculosis Research, DSI-NRF Centre of Excellence for Biomedical Tuberculosis Research, Division of Molecular Biology and Human Genetics, Faculty of Medicine and Health Sciences, Stellenbosch University, South Africa.
| | - Thomas Niesler
- Department of Electrical and Electronic Engineering, Stellenbosch University, South Africa.
| |
Collapse
|
38
|
Laguarta J, Subirana B. Longitudinal Speech Biomarkers for Automated Alzheimer's Detection. FRONTIERS IN COMPUTER SCIENCE 2021. [DOI: 10.3389/fcomp.2021.624694] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/13/2022] Open
Abstract
We introduce a novel audio processing architecture, the Open Voice Brain Model (OVBM), improving detection accuracy for longitudinal discrimination of Alzheimer's disease (AD) from spontaneous speech. We also outline the OVBM design methodology leading us to such architecture, which in general can incorporate multimodal biomarkers and simultaneously target several diseases and other AI tasks. Key in our methodology is the use of multiple biomarkers complementing each other, and when two of them uniquely identify different subjects in a target disease we say they are orthogonal. We illustrate the OVBM design methodology by introducing sixteen biomarkers, three of which are orthogonal, demonstrating simultaneous above-state-of-the-art discrimination for two apparently unrelated diseases such as AD and COVID-19. Depending on the context, throughout the paper we use OVBM indistinctly to refer to the specific architecture or to the broader design methodology. Inspired by research conducted at the MIT Center for Brains, Minds and Machines (CBMM), OVBM combines biomarker implementations of the four modules of intelligence: the brain OS chunks and overlaps audio samples and aggregates biomarker features from the sensory stream and cognitive core, creating a multimodal graph neural network of symbolic compositional models for the target task. In this paper we apply the OVBM design methodology to the automated diagnosis of AD patients, achieving above-state-of-the-art accuracy of 93.8% using only raw audio, while extracting a personalized subject saliency map designed to longitudinally track relative disease progression using multiple biomarkers, 16 in the reported AD task. The ultimate aim is to help medical practice by detecting onset and treatment impact so that intervention options can be longitudinally tested.
Using the OVBM design methodology, we introduce a novel lung and respiratory tract biomarker created using 200,000+ cough samples to pre-train a model discriminating cough cultural origin. Transfer learning is subsequently used to incorporate features from this model into various other biomarker-based OVBM architectures. This biomarker yields consistent improvements in AD detection across all the starting OVBM biomarker architecture combinations we tried. This cough dataset sets a new benchmark as the largest audio health dataset, with 30,000+ subjects participating as of April 2020, demonstrating cough cultural bias for the first time.
Collapse
|
39
|
Belkacem AN, Ouhbi S, Lakas A, Benkhelifa E, Chen C. End-to-End AI-Based Point-of-Care Diagnosis System for Classifying Respiratory Illnesses and Early Detection of COVID-19: A Theoretical Framework. Front Med (Lausanne) 2021; 8:585578. [PMID: 33869239 PMCID: PMC8044874 DOI: 10.3389/fmed.2021.585578] [Citation(s) in RCA: 26] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/21/2020] [Accepted: 03/08/2021] [Indexed: 01/10/2023] Open
Abstract
Respiratory symptoms can be caused by different underlying conditions and are often due to viral infections, such as influenza-like illnesses or other emerging viruses like the coronavirus. These respiratory viruses often share common symptoms: coughing, high temperature, congested nose, and difficulty breathing. However, early diagnosis of the type of virus can be crucial, especially during events such as the COVID-19 pandemic. Among the factors that contributed to the spread of the COVID-19 pandemic were the late diagnosis or misinterpretation of COVID-19 symptoms as regular flu-like symptoms. Research has shown that one possible differentiator of the underlying causes of different respiratory diseases is the cough sound, which comes in different types and forms. A reliable lab-free tool for early and accurate diagnosis that can differentiate between different respiratory diseases is therefore much needed, particularly during the current pandemic. This concept paper discusses a medical hypothesis of an end-to-end portable system that can record data from patients with symptoms, including coughs (voluntary or involuntary), translate them into health data for diagnosis, and, with the aid of machine learning, classify them into different respiratory illnesses, including COVID-19. With the ongoing efforts to stop the spread of COVID-19 everywhere today, and against similar diseases in the future, our proposed low-cost and user-friendly theoretical solution could play an important part in early diagnosis.
Collapse
Affiliation(s)
- Abdelkader Nasreddine Belkacem
- Department of Computer and Network Engineering, College of Information Technology, UAE University, Al Ain, United Arab Emirates
| | - Sofia Ouhbi
- Department of Computer Science and Software Engineering, College of Information Technology, UAE University, Al Ain, United Arab Emirates
| | - Abderrahmane Lakas
- Department of Computer and Network Engineering, College of Information Technology, UAE University, Al Ain, United Arab Emirates
| | - Elhadj Benkhelifa
- Cloud Computing and Applications Research Lab, Staffordshire University, Stoke-on-Trent, United Kingdom
| | - Chao Chen
- Key Laboratory of Complex System Control Theory and Application, Tianjin University of Technology, Tianjin, China
| |
Collapse
|
40
|
Lonini L, Shawen N, Botonis O, Fanton M, Jayaraman C, Mummidisetty CK, Shin SY, Rushin C, Jenz S, Xu S, Rogers JA, Jayaraman A. Rapid Screening of Physiological Changes Associated With COVID-19 Using Soft-Wearables and Structured Activities: A Pilot Study. IEEE JOURNAL OF TRANSLATIONAL ENGINEERING IN HEALTH AND MEDICINE 2021; 9:4900311. [PMID: 33665044 PMCID: PMC7924653 DOI: 10.1109/jtehm.2021.3058841] [Citation(s) in RCA: 20] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 12/03/2020] [Revised: 01/15/2021] [Accepted: 02/06/2021] [Indexed: 12/20/2022]
Abstract
OBJECTIVE Controlling the spread of the COVID-19 pandemic largely depends on scaling up the testing infrastructure for identifying infected individuals. Consumer-grade wearables may present a solution to detect the presence of infections in the population, but the current paradigm requires collecting physiological data continuously and for long periods of time on each individual, which poses limitations in the context of rapid screening. TECHNOLOGY Here, we propose a novel paradigm based on recording the physiological responses elicited by a short (~2-minute) sequence of activities (i.e., a "snapshot") to detect symptoms associated with COVID-19. We employed a novel body-conforming soft wearable sensor placed on the suprasternal notch to capture data on physical activity, cardiorespiratory function, and cough sounds. RESULTS We performed a pilot study in a cohort of individuals (n = 14) who tested positive for COVID-19 and detected altered heart rate, respiration rate, and heart rate variability relative to a group of healthy individuals (n = 14) with no known exposure. Logistic regression classifiers were trained on individual and combined sets of physiological features (heartbeat and respiration dynamics, walking cadence, and cough frequency spectrum) to discriminate COVID-positive participants from the healthy group. Combining features yielded an AUC of 0.94 (95% CI = [0.92, 0.96]) using a leave-one-subject-out cross-validation scheme. CONCLUSIONS AND CLINICAL IMPACT These results, although preliminary, suggest that a sensor-based snapshot paradigm may be a promising approach for non-invasive and repeatable testing to alert individuals who need further screening.
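The leave-one-subject-out cross-validation scheme used in the pilot study holds out every recording from one subject per fold, so a classifier is never tested on a person it has seen during training. A minimal sketch of the split generator; the subject labels below are invented for illustration:

```python
def leave_one_subject_out(subject_ids):
    """Yield (train, test) index splits where each fold holds out all
    samples belonging to a single subject."""
    subjects = sorted(set(subject_ids))
    for held_out in subjects:
        test = [i for i, s in enumerate(subject_ids) if s == held_out]
        train = [i for i, s in enumerate(subject_ids) if s != held_out]
        yield train, test

# toy cohort: 6 recordings from 3 subjects
ids = ["s1", "s1", "s2", "s3", "s3", "s3"]
folds = list(leave_one_subject_out(ids))
print(len(folds))  # one fold per subject -> 3
```

Grouping splits by subject rather than by sample is what prevents the optimistic bias of having the same person's physiology in both train and test sets.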
Collapse
Affiliation(s)
- Luca Lonini
- Shirley Ryan AbilityLab, Chicago, IL 60611, USA
- Department of Physical Medicine and Rehabilitation, Feinberg School of Medicine, Northwestern University, Chicago, IL 60611, USA
| | - Nicholas Shawen
- Shirley Ryan AbilityLab, Chicago, IL 60611, USA
- Department of Physical Medicine and Rehabilitation, Feinberg School of Medicine, Northwestern University, Chicago, IL 60611, USA
| | | | - Michael Fanton
- Shirley Ryan AbilityLab, Chicago, IL 60611, USA
- Department of Biomedical Engineering, McCormick School of Engineering, Northwestern University, Chicago, IL 60611, USA
| | - Chadrasekaran Jayaraman
- Shirley Ryan AbilityLab, Chicago, IL 60611, USA
- Department of Physical Medicine and Rehabilitation, Feinberg School of Medicine, Northwestern University, Chicago, IL 60611, USA
| | | | - Sung Yul Shin
- Shirley Ryan AbilityLab, Chicago, IL 60611, USA
- Department of Physical Medicine and Rehabilitation, Feinberg School of Medicine, Northwestern University, Chicago, IL 60611, USA
| | | | | | - Shuai Xu
- Simpson Querrey Institute, Northwestern University, Chicago, IL 60611, USA
| | - John A. Rogers
- Simpson Querrey Institute, Northwestern University, Chicago, IL 60611, USA
| | - Arun Jayaraman
- Shirley Ryan AbilityLab, Chicago, IL 60611, USA
- Department of Physical Medicine and Rehabilitation, Feinberg School of Medicine, Northwestern University, Chicago, IL 60611, USA
| |
Collapse
|
41
|
Lee KK, Davenport PW, Smith JA, Irwin RS, McGarvey L, Mazzone SB, Birring SS. Global Physiology and Pathophysiology of Cough: Part 1: Cough Phenomenology - CHEST Guideline and Expert Panel Report. Chest 2021; 159:282-293. [PMID: 32888932 PMCID: PMC8640837 DOI: 10.1016/j.chest.2020.08.2086] [Citation(s) in RCA: 38] [Impact Index Per Article: 9.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/24/2020] [Revised: 08/17/2020] [Accepted: 08/20/2020] [Indexed: 12/12/2022] Open
Abstract
The purpose of this state-of-the-art review is to update the American College of Chest Physicians 2006 guideline on global physiology and pathophysiology of cough. A review of the literature was conducted using PubMed and MEDLINE databases from 1951 to 2019 and using prespecified search terms. We describe the basic phenomenology of cough patterns, behaviors, and morphological features. We update the understanding of mechanical and physiological characteristics of cough, adding a contemporary view of the types of cough and their associated behaviors and sensations. New information about acoustic characteristics is presented, and recent insights into cough triggers and the patient cough hypersensitivity phenotype are explored. Lastly, because the clinical assessment of patients largely focuses on the duration rather than morphological features of cough, we review the morphological features of cough that can be measured in the clinic. This is the first of a two-part update to the American College of Chest Physicians 2006 cough guideline; it provides a more global consideration of cough phenomenology, beyond simply the mechanical aspects of a cough. A greater understanding of the typical features of cough, and their variations, may allow a more informed interpretation of cough measurements and the clinical relevance for patients.
Collapse
Affiliation(s)
- Kai K Lee
- School of Immunology and Microbial Sciences, Faculty of Life Sciences and Medicine, King's College London, London, England
| | - Paul W Davenport
- Department of Physiological Sciences, University of Florida, Gainesville, FL
| | - Jaclyn A Smith
- Division of Infection, Immunity and Respiratory Medicine, School of Biological Sciences, Faculty of Biology, Medicine and Health, University of Manchester, Manchester, England
| | - Richard S Irwin
- Division of Pulmonary, Allergy, and Critical Care Medicine, Department of Medicine, UMass Memorial Medical Center, Worcester, MA
| | - Lorcan McGarvey
- Centre for Experimental Medicine, Department of Medicine, Queen's University Belfast, Belfast, Northern Ireland.
| | - Stuart B Mazzone
- Department of Anatomy and Neuroscience, School of Biomedical Sciences, The University of Melbourne, Melbourne, VIC, Australia.
| | - Surinder S Birring
- Centre for Human and Applied Physiological Sciences, Faculty of Life Sciences and Medicine, King's College London, London, England
| |
Collapse
|
42
|
Ramesh V, Vatanparvar K, Nemati E, Nathan V, Rahman MM, Kuang J. CoughGAN: Generating Synthetic Coughs that Improve Respiratory Disease Classification. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2020; 2020:5682-5688. [PMID: 33019266 DOI: 10.1109/embc44109.2020.9175597] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Despite the prevalence of respiratory diseases, their diagnosis by clinicians is challenging. Accurately assessing airway sounds requires extensive clinical training and equipment that may not be easily available. Current methods that automate this diagnosis are hindered by their use of features that require pulmonary function tests. We leverage the audio characteristics of coughs to create classifiers that can distinguish common respiratory diseases in adults. Moreover, we build on recent advances in generative adversarial networks to augment our dataset with cleverly engineered synthetic cough samples for each class of major respiratory disease, to balance and increase our dataset size. We experimented on cough samples collected with a smartphone from 45 subjects in a clinic. Our CoughGAN-improved Support Vector Machine and Random Forest models show up to 76% test accuracy and 83% F1 score in classifying subjects' conditions among healthy and three major respiratory diseases. Adding our synthetic coughs improves the performance we can obtain from a relatively small, unbalanced healthcare dataset, boosting accuracy by over 30%. Our data augmentation reduces overfitting and discourages the prediction of a single, dominant class. These results highlight the feasibility of automatic, cough-based respiratory disease diagnosis using smartphones or wearables in the wild.
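The balancing idea above — padding an under-represented disease class with synthetic coughs before training — can be illustrated at toy scale. A GAN is beyond a short sketch, so simple Gaussian jitter of feature vectors stands in here purely to show the augmentation step, not the GAN itself; all names and values are invented.

```python
import random

def augment(coughs, per_sample, noise=0.05, seed=1):
    """Balance a small cough-feature dataset by appending jittered copies
    of each sample. A stand-in for GAN-generated samples: the point is only
    that the minority class grows before the classifier is trained."""
    rng = random.Random(seed)
    out = list(coughs)  # keep the originals
    for c in coughs:
        for _ in range(per_sample):
            out.append([x + rng.gauss(0, noise) for x in c])
    return out

copd = [[0.2, 0.7], [0.3, 0.6]]  # toy under-represented class
balanced = augment(copd, per_sample=3)
print(len(balanced))  # 2 originals + 6 synthetic = 8
```

As in the paper, the augmented class discourages the classifier from collapsing onto the dominant class, though jitter adds far less diversity than learned synthesis.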
Collapse
|
43
|
Laguarta J, Hueto F, Subirana B. COVID-19 Artificial Intelligence Diagnosis Using Only Cough Recordings. IEEE OPEN JOURNAL OF ENGINEERING IN MEDICINE AND BIOLOGY 2020; 1:275-281. [PMID: 34812418 PMCID: PMC8545024 DOI: 10.1109/ojemb.2020.3026928] [Citation(s) in RCA: 196] [Impact Index Per Article: 39.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/03/2020] [Revised: 08/31/2020] [Accepted: 09/21/2020] [Indexed: 11/18/2022] Open
Abstract
Goal: We hypothesized that COVID-19 subjects, especially including asymptomatics, could be accurately discriminated only from a forced-cough cell phone recording using Artificial Intelligence. To train our MIT Open Voice model we built a data collection pipeline of COVID-19 cough recordings through our website (opensigma.mit.edu) between April and May 2020 and created the largest audio COVID-19 cough balanced dataset reported to date with 5,320 subjects. Methods: We developed an AI speech processing framework that leverages acoustic biomarker feature extractors to pre-screen for COVID-19 from cough recordings, and provide a personalized patient saliency map to longitudinally monitor patients in real-time, non-invasively, and at essentially zero variable cost. Cough recordings are transformed with Mel Frequency Cepstral Coefficients and input into a Convolutional Neural Network (CNN)-based architecture made up of one Poisson biomarker layer and 3 pre-trained ResNet50s in parallel, outputting a binary pre-screening diagnostic. Our CNN-based models have been trained on 4256 subjects and tested on the remaining 1064 subjects of our dataset. Transfer learning was used to learn biomarker features on larger datasets, previously successfully tested in our Lab on Alzheimer's, which significantly improves the COVID-19 discrimination accuracy of our architecture. Results: When validated with subjects diagnosed using an official test, the model achieves COVID-19 sensitivity of 98.5% with a specificity of 94.2% (AUC: 0.97). For asymptomatic subjects it achieves sensitivity of 100% with a specificity of 83.2%. Conclusions: AI techniques can produce a free, non-invasive, real-time, any-time, instantly distributable, large-scale COVID-19 asymptomatic screening tool to augment current approaches in containing the spread of COVID-19.
Practical use cases could be for daily screening of students, workers, and public as schools, jobs, and transport reopen, or for pool testing to quickly alert of outbreaks in groups. General speech biomarkers may exist that cover several disease categories, as we demonstrated using the same ones for COVID-19 and Alzheimer's.
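The parallel-biomarker design above — several pre-trained models whose outputs are pooled into one binary pre-screening decision — can be sketched at toy scale. The lambda "models", the averaging rule, and the 0.5 threshold are assumptions for illustration, not the paper's actual architecture or fitted parameters.

```python
def prescreen(cough_features, models):
    """Combine the risk scores of parallel biomarker models into a single
    binary pre-screening decision (mean score against a fixed threshold)."""
    scores = [m(cough_features) for m in models]
    return sum(scores) / len(scores) >= 0.5

# toy "biomarker" models: each maps a feature vector to a risk score in [0, 1]
models = [
    lambda f: min(1.0, f[0]),              # stand-in for one biomarker branch
    lambda f: min(1.0, f[1] * 0.8),        # stand-in for a second branch
    lambda f: min(1.0, (f[0] + f[1]) / 2), # stand-in for a third branch
]
print(prescreen([0.9, 0.7], models))  # True: pooled risk >= 0.5
```

Running several specialized branches in parallel and pooling them is what lets each biomarker stay simple while the ensemble covers complementary signal.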
Collapse
Affiliation(s)
| | - Ferran Hueto
- MIT AutoID Laboratory, Cambridge, MA 02139, USA
- Harvard University, Cambridge, MA 02138, USA
| | - Brian Subirana
- MIT AutoID Laboratory, Cambridge, MA 02139, USA
- Harvard University, Cambridge, MA 02138, USA
| |
Collapse
|
44
|
Hall JI, Lozano M, Estrada-Petrocelli L, Birring S, Turner R. The present and future of cough counting tools. J Thorac Dis 2020; 12:5207-5223. [PMID: 33145097 PMCID: PMC7578475 DOI: 10.21037/jtd-2020-icc-003] [Citation(s) in RCA: 27] [Impact Index Per Article: 5.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/27/2022]
Abstract
The widespread use of cough counting tools has, to date, been limited by a reliance on human input to determine cough frequency. However, over the last two decades advances in digital technology and audio capture have reduced this dependence. As a result, cough frequency is increasingly recognised as a measurable parameter of respiratory disease. Cough frequency is now the gold standard primary endpoint for trials of new treatments for chronic cough, has been investigated as a marker of infectiousness in tuberculosis (TB), and used to demonstrate recovery in exacerbations of chronic obstructive pulmonary disease (COPD). This review discusses the principles of automatic cough detection and summarises key currently and recently used cough counting technology in clinical research. It additionally makes some predictions on future directions in the field based on recent developments. It seems likely that newer approaches to signal processing, the adoption of techniques from automatic speech recognition, and the widespread ownership of mobile devices will help drive forward the development of real-time fully automated ambulatory cough frequency monitoring over the coming years. These changes should allow cough counting systems to transition from their current status as a niche research tool in chronic cough to a much more widely applicable method for assessing, investigating and understanding respiratory disease.
Collapse
Affiliation(s)
- Jocelin Isabel Hall
- Centre for Human and Applied Physiological Sciences, King's College London, London, UK
| | - Manuel Lozano
- Institute for Bioengineering of Catalonia (IBEC), The Barcelona Institute of Science and Technology (BIST), Barcelona, Spain; Biomedical Research Networking Centre in Bioengineering, Biomaterials and Nanomedicine (CIBER-BBN), Barcelona, Spain; Department of Automatic Control (ESAII), Universitat Politècnica de Catalunya (UPC)-Barcelona Tech, Barcelona, Spain
| | - Luis Estrada-Petrocelli
- Institute for Bioengineering of Catalonia (IBEC), The Barcelona Institute of Science and Technology (BIST), Barcelona, Spain; Biomedical Research Networking Centre in Bioengineering, Biomaterials and Nanomedicine (CIBER-BBN), Barcelona, Spain; Facultad de Ingeniería, Universidad Latina de Panamá, Panama City, Panama
| | - Surinder Birring
- Centre for Human and Applied Physiological Sciences, King's College London, London, UK; Department of Respiratory Medicine, King's College Hospital NHS Foundation Trust, London, UK
| | - Richard Turner
- Department of Respiratory Medicine, Charing Cross Hospital, Imperial College Healthcare NHS Trust, London, UK
| |
Collapse
|
45
|
Abstract
BACKGROUND Contactless symptom tracking is essential for the diagnosis of COVID-19 cases that need hospitalization. Indications from sensors and user descriptions have to be combined in order to make the right decisions. METHODS The proposed multipurpose platform Coronario combines sensory information from different sources for a valid diagnosis following a dynamically adaptable protocol. The information exchanged can also be exploited for the advancement of research on COVID-19. The platform consists of mobile and desktop applications, sensor infrastructure, and cloud services. It may be used by patients in pre- and post-hospitalization stages, vulnerable populations, medical practitioners, and researchers. RESULTS The supported audio processing is used to demonstrate how the Coronario platform can assist research on the nature of COVID-19. Cough sounds are classified as a case study, with 90% accuracy. DISCUSSION/CONCLUSIONS The dynamic adaptation to new medical protocols is one of the main advantages of the developed platform, making it particularly useful for several target groups of patients that require different screening methods. A medical protocol determines the structure of the questionnaires, the medical sensor sampling strategy, and the alert rules.
Collapse
Affiliation(s)
- Nikos Petrellis
- Electrical and Computer Engineering Department, University of Peloponnese, Patras, Greece
| |
Collapse
|
46
|
Cohen-McFarlane M, Goubran R, Knoefel F. Novel Coronavirus Cough Database: NoCoCoDa. IEEE ACCESS : PRACTICAL INNOVATIONS, OPEN SOLUTIONS 2020; 8:154087-154094. [PMID: 34786285 PMCID: PMC8545298 DOI: 10.1109/access.2020.3018028] [Citation(s) in RCA: 30] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/13/2020] [Accepted: 08/16/2020] [Indexed: 05/14/2023]
Abstract
The current pandemic associated with the novel coronavirus (COVID-19) presents a new area of research with its own set of challenges. Creating unobtrusive remote monitoring tools for medical professionals that may aid in diagnosis, monitoring and contact tracing could lead to more efficient and accurate treatments, especially in this time of physical distancing. Audio based sensing methods can address this by measuring the frequency, severity and characteristics of the COVID-19 cough. However, the feasibility of accumulating coughs directly from patients is low in the short term. This article introduces a novel database (NoCoCoDa), which contains COVID-19 cough events obtained through public media interviews with COVID-19 patients, as an interim solution. After manual segmentation of the interviews, a total of 73 individual cough events were extracted and cough phase annotation was performed. Furthermore, the COVID-19 cough is typically dry but can present as a more productive cough in severe cases. Therefore, an investigation of cough sub-type (productive vs. dry) of the NoCoCoDa was performed using methods previously published by our research group. Most of the NoCoCoDa cough events were recorded either during or after a severe period of the disease, which is supported by the fact that 77% of the COVID-19 coughs were classified as productive based on our previous work. The NoCoCoDa is designed to be used for rapid exploration and algorithm development, which can then be applied to more extensive datasets and potentially real time applications. The NoCoCoDa is available for free to the research community upon request.
Collapse
Affiliation(s)
| | - Rafik Goubran
- Department of Systems and Computer Engineering, Carleton University, Ottawa, ON K1S 5B6, Canada
- Bruyère Research Institute, Ottawa, ON K1R 6M1, Canada
| | - Frank Knoefel
- Department of Systems and Computer Engineering, Carleton University, Ottawa, ON K1S 5B6, Canada
- Bruyère Research Institute, Ottawa, ON K1R 6M1, Canada
- Bruyère Continuing Care, Ottawa, ON K1N 5C8, Canada
- Elisabeth Bruyère Hospital, Ottawa, ON K1N 5C8, Canada
| |
Collapse
|
47
|
Bisballe-Müller N, Chang AB, Plumb EJ, Oguoma VM, Halken S, McCallum GB. Can Acute Cough Characteristics From Sound Recordings Differentiate Common Respiratory Illnesses in Children?: A Comparative Prospective Study. Chest 2020; 159:259-269. [PMID: 32653569 DOI: 10.1016/j.chest.2020.06.067] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/13/2020] [Revised: 06/21/2020] [Accepted: 06/24/2020] [Indexed: 11/29/2022] Open
Abstract
BACKGROUND Acute respiratory illnesses cause substantial morbidity worldwide. Cough is a common symptom in these childhood respiratory illnesses, but no large cohort data are available on whether various cough characteristics can differentiate between these etiologies. RESEARCH QUESTION Can various clinically based cough characteristics (frequency [daytime/nighttime], the sound itself, or type [wet/dry]) be used to differentiate common etiologies (asthma, bronchiolitis, pneumonia, other acute respiratory infections) of acute cough in children? STUDY DESIGN AND METHODS Between 2017 and 2019, children aged 2 weeks to ≤16 years hospitalized with asthma, bronchiolitis, pneumonia, or other acute respiratory infections, as well as control subjects, were enrolled. Spontaneous coughs were digitally recorded over 24 hours, except for the control subjects, who provided three voluntary coughs. Coughs were extracted and frequency defined (coughs/hour). Cough sounds and type were assessed independently by two observers blinded to the clinical data. Cough scored by a respiratory specialist was compared with discharge diagnosis using agreement (Cohen's kappa coefficient [κ]), sensitivity, and specificity. Caregiver-reported cough scores were related to objective cough frequency using the Spearman coefficient (rs). RESULTS A cohort of 148 children (n = 118 with respiratory illnesses, n = 30 control subjects), median age 2.0 years (interquartile range, 0.7-3.9), 58% male, and 50% First Nations children was enrolled. In those with respiratory illnesses, caregiver-reported cough scores and wet cough (range, 42%-63%) were similar. Overall agreement in diagnosis between the respiratory specialist and discharge diagnosis was slight (κ = 0.13; 95% CI, 0.03 to 0.22). Among diagnoses, specificity (8%-74%) and sensitivity (53%-100%) varied. Interrater agreement in cough type (wet/dry) between blinded observers was almost perfect (κ = 0.89; 95% CI, 0.81 to 0.97).
Objective cough frequency was significantly correlated with reported cough scores using visual analog scale (rs = 0.43; bias-corrected 95% CI, 0.25 to 0.56) and verbal categorical description daytime score (rs = 0.39; bias-corrected 95% CI, 0.22 to 0.54). INTERPRETATION Cough characteristics alone are not distinct enough to accurately differentiate between common acute respiratory illnesses in children.
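Cohen's kappa, the agreement statistic this study leans on, corrects raw observer agreement for the agreement expected by chance. A minimal implementation; the wet/dry ratings below are invented, not the study's data:

```python
def cohens_kappa(a, b):
    """Chance-corrected agreement between two raters over the same items:
    kappa = (p_observed - p_expected) / (1 - p_expected)."""
    n = len(a)
    cats = set(a) | set(b)
    po = sum(x == y for x, y in zip(a, b)) / n  # observed agreement
    # expected agreement from each rater's marginal category frequencies
    pe = sum((a.count(c) / n) * (b.count(c) / n) for c in cats)
    return (po - pe) / (1 - pe)

rater1 = ["wet", "wet", "dry", "dry", "wet", "dry"]
rater2 = ["wet", "wet", "dry", "wet", "wet", "dry"]
print(round(cohens_kappa(rater1, rater2), 2))  # 0.67
```

This is why a raw-agreement figure can look impressive while κ is only "slight", as with the specialist-vs-discharge comparison above: chance agreement is subtracted out.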
Collapse
Affiliation(s)
- Nina Bisballe-Müller
- Child Health Division, Menzies School of Health Research, Charles Darwin University, Darwin, NT, Australia; Department for Clinical Research, University of Southern Denmark, Odense, Denmark.
| | - Anne B Chang
- Child Health Division, Menzies School of Health Research, Charles Darwin University, Darwin, NT, Australia; Centre for Children's Health Research, Queensland University of Technology, Brisbane, QLD, Australia; Department of Respiratory and Sleep Medicine, Queensland Children's Hospital, Brisbane, QLD, Australia
| | - Erin J Plumb
- Child Health Division, Menzies School of Health Research, Charles Darwin University, Darwin, NT, Australia
| | - Victor M Oguoma
- Child Health Division, Menzies School of Health Research, Charles Darwin University, Darwin, NT, Australia
| | - Susanne Halken
- Hans Christian Andersen Children's Hospital, Odense University Hospital, Odense, Denmark
| | - Gabrielle B McCallum
- Child Health Division, Menzies School of Health Research, Charles Darwin University, Darwin, NT, Australia
| |
Collapse
|
48
|
Imran A, Posokhova I, Qureshi HN, Masood U, Riaz MS, Ali K, John CN, Hussain MI, Nabeel M. AI4COVID-19: AI enabled preliminary diagnosis for COVID-19 from cough samples via an app. INFORMATICS IN MEDICINE UNLOCKED 2020; 20:100378. [PMID: 32839734 PMCID: PMC7318970 DOI: 10.1016/j.imu.2020.100378] [Citation(s) in RCA: 224] [Impact Index Per Article: 44.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/04/2020] [Revised: 06/19/2020] [Accepted: 06/19/2020] [Indexed: 02/06/2023] Open
Abstract
BACKGROUND The inability to test at scale has become humanity's Achilles' heel in the ongoing war against the COVID-19 pandemic. A scalable screening tool would be a game changer. Building on prior work on cough-based diagnosis of respiratory diseases, we propose, develop and test an Artificial Intelligence (AI)-powered screening solution for COVID-19 infection that is deployable via a smartphone app. The app, named AI4COVID-19, records and sends three 3-s cough sounds to an AI engine running in the cloud, and returns a result within 2 min. METHODS Cough is a symptom of over thirty non-COVID-19-related medical conditions. This makes the diagnosis of a COVID-19 infection by cough alone an extremely challenging multidisciplinary problem. We address this problem by investigating the distinctness of pathomorphological alterations in the respiratory system induced by COVID-19 infection when compared with other respiratory infections. To overcome the COVID-19 cough training data shortage, we exploit transfer learning. To reduce the misdiagnosis risk stemming from the complex dimensionality of the problem, we leverage a multi-pronged, mediator-centered, risk-averse AI architecture. RESULTS Results show AI4COVID-19 can distinguish among COVID-19 coughs and several types of non-COVID-19 coughs. The accuracy is promising enough to encourage a large-scale collection of labeled cough data to gauge the generalization capability of AI4COVID-19. AI4COVID-19 is not a clinical-grade testing tool. Instead, it offers a screening tool deployable anytime, anywhere, by anyone. It can also be a clinical decision assistance tool used to channel clinical testing and treatment to those who need it the most, thereby saving more lives.
Affiliation(s)
- Ali Imran, AI4Networks Research Center, Dept. of Electrical & Computer Engineering, University of Oklahoma, USA; AI4Lyf LLC, USA
- Haneya N Qureshi, AI4Networks Research Center, Dept. of Electrical & Computer Engineering, University of Oklahoma, USA
- Usama Masood, AI4Networks Research Center, Dept. of Electrical & Computer Engineering, University of Oklahoma, USA
- Muhammad Sajid Riaz, AI4Networks Research Center, Dept. of Electrical & Computer Engineering, University of Oklahoma, USA
- Kamran Ali, Dept. of Computer Science & Engineering, Michigan State University, USA
- Charles N John, AI4Networks Research Center, Dept. of Electrical & Computer Engineering, University of Oklahoma, USA
- Muhammad Nabeel, AI4Networks Research Center, Dept. of Electrical & Computer Engineering, University of Oklahoma, USA
49
Adhi Pramono RX, Anas Imtiaz S, Rodriguez-Villegas E. Automatic Cough Detection in Acoustic Signal using Spectral Features. Annu Int Conf IEEE Eng Med Biol Soc 2020; 2019:7153-7156. [PMID: 31947484 DOI: 10.1109/embc.2019.8857792] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
Abstract
Cough is a common symptom that manifests in numerous respiratory diseases. In chronic respiratory diseases such as asthma and COPD, monitoring of cough is an integral part of managing the disease. This paper presents an algorithm for automatic detection of cough events from acoustic signals. The algorithm uses only three spectral features with a logistic regression model to separate sound segments into cough and non-cough events. The spectral features were derived by simple calculations from two frequency bands of the sound spectrum; the frequency bands of interest were chosen based on their characteristics in the spectrum. The algorithm achieved a high sensitivity of 90.31%, specificity of 98.14%, and F1-score of 88.70%. Its low complexity and high detection performance demonstrate its potential for use in remote patient monitoring systems for real-time, automatic cough detection.
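The pipeline this abstract describes — a few band-limited spectral features fed to a logistic regression — can be sketched with NumPy alone. The band edges, feature choice (two band powers and their ratio), and weights below are assumptions for illustration; the paper's actual bands and trained coefficients are not reproduced here:

```python
import numpy as np

def band_power(x, fs, lo, hi):
    # Power of sound segment x (sampled at fs Hz) in the [lo, hi) Hz band,
    # computed from the magnitude spectrum of a real FFT.
    spec = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    return spec[(freqs >= lo) & (freqs < hi)].sum()

def cough_features(x, fs):
    # Three spectral features: two band powers plus their ratio.
    # (The band edges here are illustrative, not the paper's values.)
    p_low = band_power(x, fs, 300, 1500)
    p_high = band_power(x, fs, 1500, 4000)
    return np.array([p_low, p_high, p_low / (p_high + 1e-12)])

def predict_cough(x, fs, w, b):
    # Logistic regression: label as cough when sigmoid(w.features + b) >= 0.5.
    z = w @ cough_features(x, fs) + b
    return 1.0 / (1.0 + np.exp(-z)) >= 0.5
```

In practice `w` and `b` would be fitted on labeled cough/non-cough segments; the appeal of the approach is that the whole feature extractor is a single FFT plus two band sums, cheap enough for always-on monitoring hardware.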
50
Cohen-McFarlane M, Goubran R, Knoefel F. Comparison of Silence Removal Methods for the Identification of Audio Cough Events. Annu Int Conf IEEE Eng Med Biol Soc 2020; 2019:1263-1268. [PMID: 31946122 DOI: 10.1109/embc.2019.8857889] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Sensing technologies are embedded in our everyday lives. Smart homes typically use an Audio Virtual Assistant (AVA) interface (e.g., Alexa, Siri, and Google Home) that collects sensor information, which can provide security, assist in everyday activities, and monitor health-related information. One such measure is cough, changes in which can be a marker of worsening conditions for many respiratory diseases. A reliable monitoring system built on technology that may already be present in the home (i.e., an AVA) may provide an opportunity for early intervention and reductions in the number of long-term hospitalizations. This paper focuses on optimizing the silence-removal and segmentation step for identifying cough events in an at-home setting with low to moderate background noise. Three commonly used methods, standard deviation (SD), short-term energy (SE), and zero-crossing rate (ZCR), were compared against manual segmentations. Each method was applied to 209 audio files manually verified to contain at least one cough event, and the average segmentation accuracy, over-segmentation, and under-segmentation results were compared. The ZCR method had the highest accuracy (89%); however, it failed completely under moderate noise conditions. The SD method had the best combination of accuracy (86%), ability to perform under noisy conditions, and low prevalence of over- and under-segmentation (22% and 15%, respectively). We therefore recommend an adaptive approach to silence removal among cough events based on the level of background noise (i.e., use the ZCR method when background noise is low and the SD method when it is higher) prior to implementing a cough classification system.
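The adaptive recommendation above — ZCR-based segmentation under low noise, SD-based otherwise — can be sketched as a per-frame activity mask. The frame length, hop, noise estimate, and the half-of-maximum threshold are illustrative assumptions, not the authors' parameters:

```python
import numpy as np

def frame_signal(x, frame_len, hop):
    # Split signal x into overlapping frames of length frame_len.
    n = 1 + max(0, (len(x) - frame_len) // hop)
    return np.stack([x[i * hop : i * hop + frame_len] for i in range(n)])

def zcr(frames):
    # Zero-crossing rate per frame: fraction of sign changes.
    return np.mean(np.abs(np.diff(np.sign(frames), axis=1)) > 0, axis=1)

def sd(frames):
    # Standard deviation per frame.
    return frames.std(axis=1)

def active_frames(x, frame_len=256, hop=128, noise_level=0.01):
    """Adaptive silence-removal sketch: score frames with ZCR when the
    background noise is low and with SD otherwise, then keep frames whose
    score exceeds half the maximum. All thresholds and the crude
    median-amplitude noise estimate are illustrative assumptions."""
    frames = frame_signal(x, frame_len, hop)
    if np.median(np.abs(x)) < noise_level:   # crude background-noise estimate
        score = zcr(frames)
    else:
        score = sd(frames)
    return score > 0.5 * score.max()         # boolean mask of non-silent frames
```

Segments of consecutive `True` frames would then be passed on to a cough classifier, while the silent remainder is discarded.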