1
Chen Y, Attri P, Barahona J, Hernandez ML, Carpenter D, Bozkurt A, Lobaton E. Robust Cough Detection With Out-of-Distribution Detection. IEEE J Biomed Health Inform 2023; 27:3210-3221. [PMID: 37018102] [DOI: 10.1109/jbhi.2023.3264783]
Abstract
Cough is an important defense mechanism of the respiratory system and is also a symptom of lung diseases such as asthma. Acoustic cough detection using portable recording devices is a convenient way to track potential condition worsening in patients with asthma. However, the data used to build current cough detection models are often clean and contain a limited set of sound categories, so the models perform poorly when exposed to the variety of real-world sounds that portable recording devices may pick up. Sounds not learned by the model are referred to as Out-of-Distribution (OOD) data. In this work, we propose two robust cough detection methods combined with an OOD detection module that removes OOD data without sacrificing the cough detection performance of the original system. These methods include adding a learning confidence parameter and maximizing entropy loss. Our experiments show that 1) the OOD system produces dependable In-Distribution (ID) and OOD results at sampling rates above 750 Hz; 2) OOD sample detection tends to perform better for larger audio window sizes; 3) the model's overall accuracy and precision improve as the proportion of OOD samples in the acoustic signals increases; and 4) a higher percentage of OOD data is needed to realize performance gains at lower sampling rates. Incorporating OOD detection techniques improves cough detection performance by a significant margin and provides a valuable solution to real-world acoustic cough detection problems.
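The two proposed methods (a learned confidence parameter and entropy-maximization loss) are detailed in the paper itself; as a minimal, hypothetical sketch of the underlying idea, the entropy of a classifier's softmax output can serve as an OOD score, rejecting inputs whose predictions are too uncertain:

```python
import numpy as np

def softmax(logits):
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def predictive_entropy(logits):
    """Entropy of the softmax distribution; high entropy suggests an OOD input."""
    p = softmax(logits)
    return -(p * np.log(p + 1e-12)).sum(axis=-1)

# A confident in-distribution prediction vs. a near-uniform (OOD-like) one.
id_logits = np.array([8.0, 0.5, 0.2])    # one class (e.g. "cough") dominates
ood_logits = np.array([1.1, 1.0, 0.9])   # no class stands out

threshold = 0.5  # in practice tuned on held-out ID/OOD data
print(predictive_entropy(id_logits) < threshold)   # True: accepted as ID
print(predictive_entropy(ood_logits) > threshold)  # True: rejected as OOD
```

The threshold and logit values here are illustrative only; the paper's methods learn the rejection behavior during training rather than thresholding raw entropy.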
2
Lagua EB, Mun HS, Ampode KMB, Chem V, Kim YH, Yang CJ. Artificial Intelligence for Automatic Monitoring of Respiratory Health Conditions in Smart Swine Farming. Animals (Basel) 2023; 13:1860. [PMID: 37889795] [PMCID: PMC10251864] [DOI: 10.3390/ani13111860]
Abstract
Porcine respiratory disease complex is an economically important disease in the swine industry. Early detection is crucial for an immediate response at the farm level to prevent and minimize the potential damage the disease may cause. In this paper, recent studies on the application of artificial intelligence (AI) to the early detection and monitoring of respiratory disease in swine are reviewed. Most of the studies used coughing sounds as a feature of respiratory disease. The performance of different models and the methodologies used for cough recognition with AI were reviewed and compared, along with an AI technology already available on the market. That device uses audio technology to monitor and evaluate a herd's respiratory health status through cough-sound recognition and quantification, carries temperature and humidity sensors to monitor environmental conditions, and raises an alarm on variations in coughing patterns or abrupt temperature changes. However, several limitations of the existing technology were identified, and substantial effort is still needed to overcome them and deliver smarter AI technology for monitoring respiratory health status in swine.
Affiliation(s)
- Eddiemar B. Lagua: Animal Nutrition and Feed Science Laboratory, Department of Animal Science and Technology, Sunchon National University, Suncheon 57922, Republic of Korea; Interdisciplinary Program in IT-Bio Convergence System (BK21 Plus), Sunchon National University, 255 Jungangno, Suncheon 57922, Republic of Korea
- Hong-Seok Mun: Animal Nutrition and Feed Science Laboratory, Department of Animal Science and Technology, Sunchon National University, Suncheon 57922, Republic of Korea; Department of Multimedia Engineering, Sunchon National University, Suncheon 57922, Republic of Korea
- Keiven Mark B. Ampode: Animal Nutrition and Feed Science Laboratory, Department of Animal Science and Technology, Sunchon National University, Suncheon 57922, Republic of Korea; Department of Animal Science, College of Agriculture, Sultan Kudarat State University, Tacurong City 9800, Philippines
- Veasna Chem: Animal Nutrition and Feed Science Laboratory, Department of Animal Science and Technology, Sunchon National University, Suncheon 57922, Republic of Korea
- Young-Hwa Kim: Interdisciplinary Program in IT-Bio Convergence System (BK21 Plus), Chonnam National University, Gwangju 61186, Republic of Korea
- Chul-Ju Yang: Animal Nutrition and Feed Science Laboratory, Department of Animal Science and Technology, Sunchon National University, Suncheon 57922, Republic of Korea; Interdisciplinary Program in IT-Bio Convergence System (BK21 Plus), Sunchon National University, 255 Jungangno, Suncheon 57922, Republic of Korea
3
Peruzzi G, Pozzebon A, Van Der Meer M. Fight Fire with Fire: Detecting Forest Fires with Embedded Machine Learning Models Dealing with Audio and Images on Low Power IoT Devices. Sensors (Basel) 2023; 23:783. [PMID: 36679579] [PMCID: PMC9863941] [DOI: 10.3390/s23020783]
Abstract
Forest fires are the main cause of desertification, and they have a disastrous impact on agricultural and forest ecosystems. Modern fire detection and warning systems rely on several techniques: satellite monitoring, sensor networks, image processing, data fusion, etc. Recently, Artificial Intelligence (AI) algorithms have been applied to fire recognition systems, enhancing their efficiency and reliability. However, these devices usually require constant data transmission and considerable computing power, entailing high costs and energy consumption. This paper presents the prototype of a Video Surveillance Unit (VSU) for recognising and signalling the presence of forest fires by exploiting two embedded Machine Learning (ML) algorithms running on a low-power device. The ML models take audio samples and images as their respective inputs, allowing for timely fire detection. The main result is that while the performances of the two models are comparable when they work independently, their joint usage according to the proposed methodology provides higher accuracy, precision, recall, and F1 score (96.15%, 92.30%, 100.00%, and 96.00%, respectively). Finally, each event is remotely signalled using the Long Range Wide Area Network (LoRaWAN) protocol so that the personnel in charge can respond promptly.
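As a quick consistency check on the reported figures, the F1 score is the harmonic mean of precision and recall:

```python
def f1_score(precision, recall):
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Figures reported for the joint audio+image system above:
p, r = 0.9230, 1.0
print(round(f1_score(p, r), 2))  # 0.96, matching the reported F1 of 96.00%
```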
Affiliation(s)
- Giacomo Peruzzi: Department of Information Engineering, University of Padova, 35131 Padova, Italy
- Alessandro Pozzebon: Department of Information Engineering, University of Padova, 35131 Padova, Italy
- Mattia Van Der Meer: Department of Information Engineering and Mathematics, University of Siena, 53100 Siena, Italy
4
Askari Nasab K, Mirzaei J, Zali A, Gholizadeh S, Akhlaghdoust M. Coronavirus diagnosis using cough sounds: Artificial intelligence approaches. Front Artif Intell 2023; 6:1100112. [PMID: 36872932] [PMCID: PMC9975504] [DOI: 10.3389/frai.2023.1100112]
Abstract
Introduction: The Coronavirus disease 2019 (COVID-19) pandemic has caused irreparable damage to the world. To prevent the spread of the pathogen, it is necessary to identify infected people for quarantine and treatment. Artificial intelligence and data mining approaches can contribute to prevention and reduce treatment costs. The purpose of this study is to create data mining models that diagnose COVID-19 from the sound of coughing. Method: Supervised learning classification algorithms were used, including Support Vector Machines (SVM), random forests, and Artificial Neural Networks; the latter comprised a standard fully connected network, Convolutional Neural Networks (CNN), and Long Short-Term Memory (LSTM) recurrent neural networks. The data were taken from the online site sorfeh.com/sendcough/en, which collected recordings during the spread of COVID-19. Result: With the collected data (about 40,000 people), the different networks reached acceptable accuracies: the average accuracy was 83%, and the best model reached 95%. Conclusion: These findings show the reliability of this method for developing a screening and early-diagnosis tool for COVID-19. The method can also be used with simple artificial intelligence networks while still yielding acceptable results.
Affiliation(s)
- Kazem Askari Nasab: Materials Science and Engineering Department, Sharif University of Technology, Tehran, Iran
- Jamal Mirzaei: Infectious Disease Research Center, Department of Infectious Diseases, Aja University of Medical Sciences, Tehran, Iran; Infectious Disease Research Center, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Alireza Zali: Functional Neurosurgery Research Center, Shohada Tajrish Comprehensive Neurosurgical Center of Excellence, Shahid Beheshti University of Medical Sciences, Tehran, Iran; USERN Office, Functional Neurosurgery Research Center, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Meisam Akhlaghdoust: Functional Neurosurgery Research Center, Shohada Tajrish Comprehensive Neurosurgical Center of Excellence, Shahid Beheshti University of Medical Sciences, Tehran, Iran; USERN Office, Functional Neurosurgery Research Center, Shahid Beheshti University of Medical Sciences, Tehran, Iran
5
Lee GT, Nam H, Kim SH, Choi SM, Kim Y, Park YH. Deep learning based cough detection camera using enhanced features. Expert Systems with Applications 2022; 206:117811. [PMID: 35712056] [PMCID: PMC9181707] [DOI: 10.1016/j.eswa.2022.117811]
Abstract
Coughing is a typical symptom of COVID-19. To detect and localize coughing sounds remotely, a convolutional neural network (CNN) based deep learning model was developed in this work and integrated with a sound camera for the visualization of cough sounds. The cough detection model is a binary classifier whose input is a two-second acoustic feature and whose output is one of two inferences (Cough or Others). Data augmentation was performed on the collected audio files to alleviate class imbalance and reflect the various background noises of practical environments. To feature the cough sound effectively, conventional features such as spectrograms, mel-scaled spectrograms, and mel-frequency cepstral coefficients (MFCC) were reinforced with their velocity (V) and acceleration (A) maps. VGGNet, GoogLeNet, and ResNet were simplified to binary classifiers, named V-net, G-net, and R-net, respectively. To find the best combination of features and networks, training was performed for a total of 39 cases, and performance was confirmed using the test F1 score. Finally, a test F1 score of 91.9% (test accuracy of 97.2%) was achieved by G-net with the MFCC-V-A feature (named Spectroflow), an acoustic feature effective for cough detection. The trained cough detection model was integrated with a sound camera (i.e., one that visualizes sound sources using a beamforming microphone array). In a pilot test, the cough detection camera detected coughing sounds with an F1 score of 90.0% (accuracy of 96.0%), and the cough location in the camera image was tracked in real time.
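The V and A maps are the first and second temporal derivatives of the base feature. As an illustrative numpy sketch (not the authors' exact Spectroflow pipeline), stacking an MFCC matrix with its velocity and acceleration maps might look like:

```python
import numpy as np

def add_velocity_acceleration(feature_map):
    """Stack a 2-D feature map (n_coeffs x n_frames) with its velocity (V)
    and acceleration (A) maps, i.e. first and second temporal derivatives."""
    v = np.gradient(feature_map, axis=1)   # first derivative along time
    a = np.gradient(v, axis=1)             # second derivative along time
    return np.stack([feature_map, v, a])   # shape: (3, n_coeffs, n_frames)

mfcc = np.random.randn(13, 100)            # stand-in for a 2-s MFCC matrix
spectroflow_like = add_velocity_acceleration(mfcc)
print(spectroflow_like.shape)              # (3, 13, 100)
```

The three channels can then feed a CNN the same way an RGB image would, which is the intuition behind reinforcing static features with their dynamics.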
Affiliation(s)
- Gyeong-Tae Lee: Department of Mechanical Engineering, Korea Advanced Institute of Science and Technology, Daejeon 34141, South Korea
- Hyeonuk Nam: Department of Mechanical Engineering, Korea Advanced Institute of Science and Technology, Daejeon 34141, South Korea
- Seong-Hu Kim: Department of Mechanical Engineering, Korea Advanced Institute of Science and Technology, Daejeon 34141, South Korea
- Sang-Min Choi: Department of Mechanical Engineering, Korea Advanced Institute of Science and Technology, Daejeon 34141, South Korea
- Yong-Hwa Park: Department of Mechanical Engineering, Korea Advanced Institute of Science and Technology, Daejeon 34141, South Korea
6
Pahar M, Miranda I, Diacon A, Niesler T. Automatic Non-Invasive Cough Detection based on Accelerometer and Audio Signals. Journal of Signal Processing Systems 2022; 94:821-835. [PMID: 35341095] [PMCID: PMC8934184] [DOI: 10.1007/s11265-022-01748-5]
Abstract
We present an automatic non-invasive way of detecting cough events based on both accelerometer and audio signals. The acceleration signals are captured by a smartphone firmly attached to the patient's bed, using its integrated accelerometer. The audio signals are captured simultaneously by the same smartphone using an external microphone. We have compiled a manually annotated dataset of such simultaneously captured acceleration and audio signals, containing approximately 6000 cough and 68000 non-cough events from 14 adult male patients. Logistic regression (LR), support vector machine (SVM), and multilayer perceptron (MLP) classifiers provide a baseline and are compared with three deep architectures, a convolutional neural network (CNN), a long short-term memory (LSTM) network, and a residual architecture (Resnet50), using a leave-one-out cross-validation scheme. We find that either acceleration or audio signals can distinguish coughing from other activities, including sneezing, throat-clearing, and movement on the bed, with high accuracy. In all cases, however, the deep neural networks outperform the shallow classifiers by a clear margin, and the Resnet50 offers the best performance, achieving an area under the ROC curve (AUC) exceeding 0.98 for acceleration signals and 0.99 for audio signals. While audio-based classification consistently outperforms acceleration-based classification, the difference is very small for the best systems. Since the acceleration signal requires less processing power, sidesteps the need to record audio (so privacy is inherently protected), and comes from a device attached to the bed rather than worn, a highly accurate accelerometer-based non-invasive cough detector may be a more convenient and readily accepted option for long-term cough monitoring.
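AUC, the headline metric here, is the probability that a classifier scores a randomly chosen cough above a randomly chosen non-cough. A small self-contained sketch (tie handling omitted for brevity) computes it from raw scores via the rank-sum formulation:

```python
import numpy as np

def roc_auc(scores, labels):
    """AUC via the Mann-Whitney U statistic: the probability that a random
    positive example outscores a random negative one (ties ignored)."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=bool)
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    n_pos, n_neg = labels.sum(), (~labels).sum()
    return (ranks[labels].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

# Perfectly separated scores give AUC = 1.0; total overlap gives 0.5.
print(roc_auc([0.1, 0.2, 0.8, 0.9], [0, 0, 1, 1]))  # 1.0
```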
Affiliation(s)
- Madhurananda Pahar: Department of Electrical and Electronic Engineering, Stellenbosch University, Stellenbosch 7600, Western Cape, South Africa
- Igor Miranda: Federal University of Recôncavo da Bahia, Cruz das Almas 44.380-000, Bahia, Brazil
- Andreas Diacon: TASK Applied Science, Cape Town, Western Cape, South Africa
- Thomas Niesler: Department of Electrical and Electronic Engineering, Stellenbosch University, Stellenbosch 7600, Western Cape, South Africa
7
Dang T, Han J, Xia T, Spathis D, Bondareva E, Siegele-Brown C, Chauhan J, Grammenos A, Hasthanasombat A, Floto RA, Cicuta P, Mascolo C. Exploring Longitudinal Cough, Breath, and Voice Data for COVID-19 Progression Prediction via Sequential Deep Learning: Model Development and Validation. J Med Internet Res 2022; 24:e37004. [PMID: 35653606] [PMCID: PMC9217153] [DOI: 10.2196/37004]
Abstract
BACKGROUND Recent work has shown the potential of using audio data (eg, cough, breathing, and voice) in screening for COVID-19. However, these approaches focus on one-off detection, inferring infection from the current audio sample, and do not monitor disease progression. Little exploration has been devoted to continuously monitoring COVID-19 progression, especially recovery, through longitudinal audio data. Tracking disease progression characteristics and patterns of recovery could bring insights and lead to more timely treatment or treatment adjustment, as well as better resource management in health care systems. OBJECTIVE The primary objective of this study is to explore the potential of longitudinal audio samples over time for COVID-19 progression prediction and, especially, recovery trend prediction using sequential deep learning techniques. METHODS Crowdsourced respiratory audio data, including breathing, cough, and voice samples, from 212 individuals over 5-385 days were analyzed, alongside their self-reported COVID-19 test results. We developed and validated a deep learning-enabled tracking tool using gated recurrent units (GRUs) to detect COVID-19 progression by exploring the audio dynamics of the individuals' historical audio biomarkers. The investigation comprised 2 parts: (1) COVID-19 detection in terms of positive and negative (healthy) tests using sequential audio signals, assessed primarily by the area under the receiver operating characteristic curve (AUROC), sensitivity, and specificity, with 95% CIs, and (2) longitudinal disease progression prediction over time in terms of probability of positive tests, evaluated using the correlation between the predicted probability trajectory and self-reported labels. RESULTS We first explored the benefits of capturing longitudinal dynamics of audio biomarkers for COVID-19 detection. The strong performance, yielding an AUROC of 0.79, a sensitivity of 0.75, and a specificity of 0.71, supported the effectiveness of the approach compared to methods that do not leverage longitudinal dynamics. We further examined the predicted disease progression trajectory, which displayed high consistency with longitudinal test results, with a correlation of 0.75 in the test cohort and 0.86 in a subset of the test cohort with 12 (57.1%) of 21 COVID-19-positive participants who reported disease recovery. Our findings suggest that monitoring COVID-19 evolution via longitudinal audio data has potential for tracking individuals' disease progression and recovery. CONCLUSIONS An audio-based COVID-19 progression monitoring system was developed using deep learning techniques, with strong performance showing high consistency between the predicted trajectory and the test results over time, especially for recovery trend predictions. This has good potential in the postpeak and postpandemic era and can help guide medical treatment and optimize hospital resource allocation. The changes in longitudinal audio samples, referred to as audio dynamics, are associated with COVID-19 progression; modeling the audio dynamics can thus capture the underlying disease progression process and further aid COVID-19 progression prediction. This framework provides a flexible, affordable, and timely tool for COVID-19 tracking and, more importantly, a proof of concept of how telemonitoring could be applied to respiratory disease monitoring in general.
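The GRU at the core of the tracking tool maintains a hidden state that is selectively updated with each new audio sample in the sequence. A toy numpy version of a single GRU step (one common formulation, with biases omitted; not the authors' model) illustrates the mechanism:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_cell(x, h, params):
    """One GRU step: gates decide how much of the running state to keep,
    letting the model accumulate evidence across a patient's audio history."""
    Wz, Uz, Wr, Ur, Wh, Uh = params
    z = sigmoid(x @ Wz + h @ Uz)              # update gate
    r = sigmoid(x @ Wr + h @ Ur)              # reset gate
    h_tilde = np.tanh(x @ Wh + (r * h) @ Uh)  # candidate state
    return (1 - z) * h + z * h_tilde

rng = np.random.default_rng(0)
d_in, d_h = 8, 4                              # toy sizes for illustration
params = [rng.normal(size=s) for s in [(d_in, d_h), (d_h, d_h)] * 3]
h = np.zeros(d_h)
for x in rng.normal(size=(5, d_in)):          # 5 sequential audio samples
    h = gru_cell(x, h, params)
print(h.shape)                                # (4,)
```

The final state summarizes the whole sequence and would feed a small output layer predicting the probability of a positive test.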
Affiliation(s)
- Ting Dang: Department of Computer Science and Technology, University of Cambridge, Cambridge, United Kingdom
- Jing Han: Department of Computer Science and Technology, University of Cambridge, Cambridge, United Kingdom
- Tong Xia: Department of Computer Science and Technology, University of Cambridge, Cambridge, United Kingdom
- Dimitris Spathis: Department of Computer Science and Technology, University of Cambridge, Cambridge, United Kingdom
- Erika Bondareva: Department of Computer Science and Technology, University of Cambridge, Cambridge, United Kingdom
- Chloë Siegele-Brown: Department of Computer Science and Technology, University of Cambridge, Cambridge, United Kingdom
- Jagmohan Chauhan: Department of Computer Science and Technology, University of Cambridge, Cambridge, United Kingdom; Electronics and Computer Science, University of Southampton, Southampton, United Kingdom
- Andreas Grammenos: Department of Computer Science and Technology, University of Cambridge, Cambridge, United Kingdom
- Apinan Hasthanasombat: Department of Computer Science and Technology, University of Cambridge, Cambridge, United Kingdom
- R Andres Floto: Department of Computer Science and Technology, University of Cambridge, Cambridge, United Kingdom
- Pietro Cicuta: Department of Computer Science and Technology, University of Cambridge, Cambridge, United Kingdom
- Cecilia Mascolo: Department of Computer Science and Technology, University of Cambridge, Cambridge, United Kingdom
8
Santosh KC, Rasmussen N, Mamun M, Aryal S. A systematic review on cough sound analysis for Covid-19 diagnosis and screening: is my cough sound COVID-19? PeerJ Comput Sci 2022; 8:e958. [PMID: 35634112] [PMCID: PMC9138020] [DOI: 10.7717/peerj-cs.958]
Abstract
For COVID-19, the need for robust, inexpensive, and accessible screening becomes critical. Even though symptoms present differently, cough is still taken as one of the primary symptoms in severe and non-severe infections alike. For mass screening in resource-constrained regions, artificial intelligence (AI)-guided tools have progressively contributed to detecting and screening COVID-19 infections using cough sounds. In this article, we therefore review state-of-the-art works from 2020 and 2021 that consider AI-guided tools analyzing cough sounds for COVID-19 screening, primarily based on machine learning algorithms. We used the PubMed Central repository and Web of Science with the keywords: (Cough OR Cough Sounds OR Speech) AND (Machine learning OR Deep learning OR Artificial intelligence) AND (COVID-19 OR Coronavirus). For better meta-analysis, we screened for appropriate datasets (size and source), algorithmic factors (both shallow learning and deep learning models), and corresponding performance scores. Further, in order not to miss up-to-date experimental research, we also included articles outside of PubMed and Web of Science, but preprint articles were strictly avoided as they are not peer-reviewed.
Affiliation(s)
- KC Santosh: 2AI: Applied Artificial Intelligence Lab, Computer Science, University of South Dakota, Vermillion, South Dakota, United States
- Nicholas Rasmussen: 2AI: Applied Artificial Intelligence Lab, Computer Science, University of South Dakota, Vermillion, South Dakota, United States
- Muntasir Mamun: 2AI: Applied Artificial Intelligence Lab, Computer Science, University of South Dakota, Vermillion, South Dakota, United States
- Sunil Aryal: School of Information Technology, Deakin University, Victoria, Australia
9
Eni M, Mordoh V, Zigel Y. Cough detection using a non-contact microphone: A nocturnal cough study. PLoS One 2022; 17:e0262240. [PMID: 35045111] [PMCID: PMC8769326] [DOI: 10.1371/journal.pone.0262240]
Abstract
An automatic non-contact cough detector designed especially for night audio recordings that can distinguish coughs from snores and other sounds is presented. Two different classifiers were implemented and tested: a Gaussian Mixture Model (GMM) and a Deep Neural Network (DNN). The detected coughs were analyzed and compared in different sleep stages and in terms of severity of Obstructive Sleep Apnea (OSA), along with age, Body Mass Index (BMI), and gender. The database was composed of nocturnal audio signals from 89 subjects recorded during a polysomnography study. The DNN-based system outperformed the GMM-based system, at 99.8% accuracy, with a sensitivity and specificity of 86.1% and 99.9%, respectively (Positive Predictive Value (PPV) of 78.4%). Cough events were significantly more frequent during wakefulness than in the sleep stages (p < 0.0001) and were significantly less frequent during deep sleep than in other sleep stages (p < 0.0001). A positive correlation was found between BMI and the number of nocturnal coughs (R = 0.232, p < 0.05), and between the number of nocturnal coughs and OSA severity in men (R = 0.278, p < 0.05). This non-contact cough detection system may thus be implemented to track the progression of respiratory illnesses and test reactions to different medications even at night when a contact sensor is uncomfortable or infeasible.
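The gap between the reported sensitivity (86.1%) and PPV (78.4%) despite 99.9% specificity reflects how rare cough events are in a night of audio. A small sketch with illustrative counts (not the study's actual confusion matrix) shows how the three metrics are derived:

```python
def diagnostic_metrics(tp, fn, fp, tn):
    """Sensitivity, specificity and PPV from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)   # fraction of true cough events detected
    specificity = tn / (tn + fp)   # fraction of non-cough sounds rejected
    ppv = tp / (tp + fp)           # fraction of detections that are real coughs
    return sensitivity, specificity, ppv

# Toy counts: when positives are rare, even ~99.98% specificity leaves
# enough false positives to pull PPV well below sensitivity.
sens, spec, ppv = diagnostic_metrics(tp=86, fn=14, fp=24, tn=99_976)
print(round(sens, 3), round(spec, 5), round(ppv, 3))  # 0.86 0.99976 0.782
```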
Affiliation(s)
- Marina Eni: Department of Biomedical Engineering, Ben-Gurion University of the Negev, Beer-Sheva, Israel
- Valeria Mordoh: Department of Biomedical Engineering, Ben-Gurion University of the Negev, Beer-Sheva, Israel
- Yaniv Zigel: Department of Biomedical Engineering, Ben-Gurion University of the Negev, Beer-Sheva, Israel
10
Soltanian M, Borna K. Covid-19 recognition from cough sounds using lightweight separable-quadratic convolutional network. Biomed Signal Process Control 2021; 72:103333. [PMID: 34804190] [PMCID: PMC8590951] [DOI: 10.1016/j.bspc.2021.103333]
Abstract
Automatic classification of cough data can play a vital role in early detection of Covid-19. Many Covid-19 symptoms are related to the human respiratory system and affect the sound-production organs, so anomalies in cough sounds can be expected in Covid-19 patients as a sign of infection. This drives research toward detecting potential Covid-19 cases by inspecting cough sounds. While several well-performing deep networks can classify sound with high accuracy, they are unsuitable for early detection of Covid-19 because they are large and demand substantial power and memory. In practice, cough recognition algorithms need to run on hand-held or wearable devices so that early Covid-19 warnings can be generated without referring individuals to health centers. Accurate and at the same time lightweight classifiers are therefore needed: either complicated models must be compressed, or lightweight models suitable for embedded devices must be designed from the start. In this paper, we follow the second approach. We investigate a new lightweight deep learning model to distinguish Covid from non-Covid cough data. This model not only achieves the state of the art on the well-known, publicly available Virufy dataset but is also shown to be a good candidate for implementation on low-power devices suitable for hand-held applications.
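The paper's exact separable-quadratic architecture is not detailed in this abstract, but the parameter savings that make such models embeddable can be seen from a standard depthwise-separable decomposition (as used in MobileNet-style networks):

```python
def standard_conv_params(c_in, c_out, k):
    """Weights in a standard k x k convolution (biases ignored)."""
    return c_in * c_out * k * k

def separable_conv_params(c_in, c_out, k):
    """Depthwise k x k filters plus a 1 x 1 pointwise projection."""
    return c_in * k * k + c_in * c_out

c_in, c_out, k = 64, 128, 3                  # a typical mid-network layer
std = standard_conv_params(c_in, c_out, k)   # 73728
sep = separable_conv_params(c_in, c_out, k)  # 8768
print(f"separable uses {sep / std:.1%} of the standard parameters")
```

For this layer the separable form needs roughly 12% of the weights, which is the kind of reduction that makes on-device inference feasible.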
Affiliation(s)
- Mohammad Soltanian: Faculty of Mathematics and Computer Science, Kharazmi University, Tehran, Iran
- Keivan Borna: Faculty of Mathematics and Computer Science, Kharazmi University, Tehran, Iran
11
Mootassim-Billah S, Van Nuffelen G, Schoentgen J, De Bodt M, Dragan T, Digonnet A, Roper N, Van Gestel D. Assessment of cough in head and neck cancer patients at risk for dysphagia: An overview. Cancer Rep (Hoboken) 2021; 4:e1395. [PMID: 33932152] [PMCID: PMC8551981] [DOI: 10.1002/cnr2.1395]
Abstract
BACKGROUND This literature review explores the terminology, the neurophysiology, and the assessment of cough in general, in the framework of dysphagia, and with regard to head and neck cancer patients at risk for dysphagia. In the dysphagic population, cough is currently assessed perceptually during a clinical swallowing evaluation or aerodynamically. RECENT FINDINGS Recent findings have shown intra- and inter-rater disagreement in perceptual scoring of cough. Also, aerodynamic measurements are impractical in a routine bedside assessment. Coughing, however, is considered a clinically relevant sign of aspiration and dysphagia in head and neck cancer patients treated with concurrent chemoradiotherapy. CONCLUSION This article surveys the literature on established cough assessment and stresses the need for innovative methods of assessing cough in head and neck cancer patients treated with concurrent chemoradiotherapy who are at risk for dysphagia.
Affiliation(s)
- Sofiana Mootassim-Billah: Department of Radiation Oncology, Speech Therapy, Institut Jules Bordet, Université Libre de Bruxelles, Brussels, Belgium
- Gwen Van Nuffelen: Department of Otolaryngology and Head and Neck Surgery, University Rehabilitation Center for Communication Disorders, Antwerp University Hospital, Antwerp, Belgium; Department of Translational Neurosciences, Faculty of Medicine and Health Sciences, University of Antwerp, Antwerp, Belgium; Department of Logopaedics and Audiological Sciences, Faculty of Medicine and Health Sciences, University of Ghent, Ghent, Belgium
- Jean Schoentgen: BEAMS (Bio-, Electro- And Mechanical Systems), Université Libre de Bruxelles, Brussels, Belgium
- Marc De Bodt: Department of Otolaryngology and Head and Neck Surgery, University Rehabilitation Center for Communication Disorders, Antwerp University Hospital, Antwerp, Belgium; Department of Translational Neurosciences, Faculty of Medicine and Health Sciences, University of Antwerp, Antwerp, Belgium; Department of Logopaedics and Audiological Sciences, Faculty of Medicine and Health Sciences, University of Ghent, Ghent, Belgium
- Tatiana Dragan: Department of Radiation Oncology, Head and Neck Unit, Institut Jules Bordet, Université Libre de Bruxelles, Brussels, Belgium
- Antoine Digonnet: Department of Surgical Oncology, Head and Neck Surgery Unit, Institut Jules Bordet, Université Libre de Bruxelles, Brussels, Belgium
- Nicolas Roper: Department of Oto-Rhino-Laryngology and Head & Neck Surgery, Erasme Hospital, Université Libre de Bruxelles, Brussels, Belgium
- Dirk Van Gestel: Department of Radiation Oncology, Head and Neck Unit, Institut Jules Bordet, Université Libre de Bruxelles, Brussels, Belgium
12
|
Asada K, Komatsu M, Shimoyama R, Takasawa K, Shinkai N, Sakai A, Bolatkan A, Yamada M, Takahashi S, Machino H, Kobayashi K, Kaneko S, Hamamoto R. Application of Artificial Intelligence in COVID-19 Diagnosis and Therapeutics. J Pers Med 2021; 11:886. [PMID: 34575663 PMCID: PMC8471764 DOI: 10.3390/jpm11090886] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/19/2021] [Revised: 09/01/2021] [Accepted: 09/02/2021] [Indexed: 12/12/2022] Open
Abstract
The coronavirus disease 2019 (COVID-19) pandemic began at the end of December 2019, giving rise to a high rate of infections and causing COVID-19-associated deaths worldwide. It was first reported in Wuhan, China, and since then not only global leaders, organizations, and pharmaceutical/biotech companies, but also researchers have directed their efforts toward overcoming this threat. The use of artificial intelligence (AI) has recently surged internationally and has been applied to diverse aspects of many problems. The benefits of using AI are now widely accepted, and many studies have shown great success in medical research on tasks such as the classification, detection, and prediction of disease, and even of patient outcomes. In fact, AI technology has been actively employed in various ways in COVID-19 research, and several clinical applications of AI-equipped medical devices for the diagnosis of COVID-19 have already been reported. Hence, in this review, we summarize the latest studies that focus on medical imaging analysis, drug discovery, therapeutics such as vaccine development, and public health decision-making using AI. This survey clarifies the advantages of using AI in the fight against COVID-19 and provides future directions for tackling the COVID-19 pandemic using AI techniques.
Affiliation(s)
- Ken Asada, Masaaki Komatsu, Ryo Shimoyama, Ken Takasawa
  - Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan
  - Division of Medical AI Research and Development, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
- Norio Shinkai
  - Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan
  - Division of Medical AI Research and Development, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
  - Department of NCC Cancer Science, Graduate School of Medical and Dental Sciences, Tokyo Medical and Dental University, 1-5-45 Yushima, Bunkyo-ku, Tokyo 113-8510, Japan
- Akira Sakai
  - Division of Medical AI Research and Development, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
  - Department of NCC Cancer Science, Graduate School of Medical and Dental Sciences, Tokyo Medical and Dental University, 1-5-45 Yushima, Bunkyo-ku, Tokyo 113-8510, Japan
- Amina Bolatkan
  - Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan
  - Division of Medical AI Research and Development, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
- Masayoshi Yamada
  - Division of Medical AI Research and Development, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
  - Department of Endoscopy, National Cancer Center Hospital, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
- Satoshi Takahashi, Hidenori Machino, Kazuma Kobayashi, Syuzo Kaneko
  - Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan
  - Division of Medical AI Research and Development, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
- Ryuji Hamamoto
  - Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan
  - Division of Medical AI Research and Development, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
  - Department of NCC Cancer Science, Graduate School of Medical and Dental Sciences, Tokyo Medical and Dental University, 1-5-45 Yushima, Bunkyo-ku, Tokyo 113-8510, Japan

13
Alves AAC, Andrietta LT, Lopes RZ, Bussiman FO, Silva FFE, Carvalheiro R, Brito LF, Balieiro JCDC, Albuquerque LG, Ventura RV. Integrating Audio Signal Processing and Deep Learning Algorithms for Gait Pattern Classification in Brazilian Gaited Horses. FRONTIERS IN ANIMAL SCIENCE 2021. [DOI: 10.3389/fanim.2021.681557] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/13/2022] Open
Abstract
This study focused on assessing the usefulness of audio signal processing in the gaited horse industry. A total of 196 short-time audio files (4 s) were collected from video recordings of Brazilian gaited horses. These files were converted into waveform signals (196 samples by 80,000 columns) and divided into training (N = 164) and validation (N = 32) datasets. Twelve single-valued audio features were initially extracted to summarize the training data according to the gait patterns (Marcha Batida [MB] and Marcha Picada [MP]). After preliminary analyses, high-dimensional arrays of the Mel Frequency Cepstral Coefficients (MFCC), Onset Strength (OS), and Tempogram (TEMP) were extracted and used as input information in the classification algorithms. A principal component analysis (PCA) was performed using the 12 single-valued features set and each audio-feature dataset (AFD: MFCC, OS, and TEMP) for prior data visualization. Machine learning (random forest, RF; support vector machine, SVM) and deep learning (multilayer perceptron neural networks, MLP; convolutional neural networks, CNN) algorithms were used to classify the gait types. A five-fold cross-validation scheme with 10 repetitions was employed for assessing the models' predictive performance. The classification performance across models and AFD was also validated with independent observations. The models and AFD were compared based on classification accuracy (ACC), specificity (SPEC), sensitivity (SEN), and area under the curve (AUC). In the logistic regression analysis, five of the 12 extracted audio features differed significantly (p < 0.05) between the gait types. ACC averages ranged from 0.806 to 0.932 for MFCC, from 0.758 to 0.948 for OS, and from 0.936 to 0.968 for TEMP. Overall, the TEMP dataset provided the best classification accuracies for all models, and the most suitable method for audio-based horse gait pattern classification was the CNN. Both cross- and independent-validation schemes confirmed that high values of ACC, SPEC, SEN, and AUC can be expected for yet-to-be-observed labels, except for the MFCC-based models, in which clear overfitting was observed. Using audio-generated data to describe gait phenotypes in Brazilian horses is a promising approach, as the two gait patterns were correctly distinguished. The highest classification performance was achieved by combining the CNN and the rhythmic-descriptive AFD.
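The Onset Strength (OS) feature used in this study can be illustrated with a short, self-contained sketch in plain NumPy rather than a dedicated audio library; the frame and hop sizes and the synthetic "hoofbeat" signal below are invented for the example, not the study's settings. The idea: take the magnitude spectrum of each short frame and sum the frame-to-frame spectral increases (positive spectral flux); peaks in the resulting envelope mark rhythmic events.

```python
import numpy as np

def onset_strength(y, frame=512, hop=256):
    """Onset-strength envelope via positive spectral flux.

    Slices the signal into overlapping Hann-windowed frames, takes the
    magnitude spectrum of each, and sums only the spectral increases
    between consecutive frames.
    """
    n = 1 + (len(y) - frame) // hop
    win = np.hanning(frame)
    frames = np.stack([y[i * hop:i * hop + frame] * win for i in range(n)])
    mag = np.abs(np.fft.rfft(frames, axis=1))
    flux = np.diff(mag, axis=0)                   # change between frames
    return np.clip(flux, 0.0, None).sum(axis=1)   # keep only increases

# Toy "hoofbeat" train: short noise bursts every 0.25 s at 8 kHz.
sr = 8000
y = np.zeros(sr * 2)
rng = np.random.default_rng(0)
for k in range(0, len(y), sr // 4):
    y[k:k + 200] += rng.standard_normal(200)
env = onset_strength(y)
```

A tempogram (the TEMP feature) is then obtained by analyzing the periodicity of this envelope over local windows, e.g. via short-time autocorrelation.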
14
Ni X, Ouyang W, Jeong H, Kim JT, Tzavelis A, Mirzazadeh A, Wu C, Lee JY, Keller M, Mummidisetty CK, Patel M, Shawen N, Huang J, Chen H, Ravi S, Chang JK, Lee K, Wu Y, Lie F, Kang YJ, Kim JU, Chamorro LP, Banks AR, Bharat A, Jayaraman A, Xu S, Rogers JA. Automated, multiparametric monitoring of respiratory biomarkers and vital signs in clinical and home settings for COVID-19 patients. Proc Natl Acad Sci U S A 2021; 118:e2026610118. [PMID: 33893178 PMCID: PMC8126790 DOI: 10.1073/pnas.2026610118] [Citation(s) in RCA: 37] [Impact Index Per Article: 12.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/11/2021] [Accepted: 03/22/2021] [Indexed: 11/18/2022] Open
Abstract
Capabilities in continuous monitoring of key physiological parameters of disease have never been more important than in the context of the global COVID-19 pandemic. Soft, skin-mounted electronics that incorporate high-bandwidth, miniaturized motion sensors enable digital, wireless measurements of mechanoacoustic (MA) signatures of both core vital signs (heart rate, respiratory rate, and temperature) and underexplored biomarkers (coughing count) with high fidelity and immunity to ambient noise. This paper summarizes an effort that integrates such MA sensors with a cloud data infrastructure and a set of analytics approaches based on digital filtering and convolutional neural networks for monitoring of COVID-19 infections in sick and healthy individuals in the hospital and the home. Unique features include quantitative measurements of coughing and other vocal events as indicators of both disease and infectiousness. Systematic imaging studies demonstrate correlations between the time and intensity of coughing, speaking, and laughing and the total droplet production, as an approximate indicator of the probability for disease spread. The sensors, deployed on COVID-19 patients along with healthy controls in both inpatient and home settings, record coughing frequency and intensity continuously, along with a collection of other biometrics. The results indicate a decaying trend of coughing frequency and intensity through the course of disease recovery, but with wide variations across patient populations. The methodology creates opportunities to study patterns in biometrics across individuals and among different demographic groups.
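The analytics pipeline above pairs digital filtering with convolutional neural networks. As a hedged illustration of the filtering stage only (the sampling rate, cutoff frequencies, and synthetic signal below are invented for the example and are not taken from the paper), a zero-phase Butterworth band-pass can separate a higher-frequency mechanoacoustic component from slow body-motion drift:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass(x, fs, lo, hi, order=4):
    """Zero-phase Butterworth band-pass; filtfilt avoids phase distortion."""
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

# Synthetic accelerometer-like trace: 2 Hz body motion plus a 60 Hz component.
fs = 500
t = np.arange(0, 4, 1 / fs)
motion = np.sin(2 * np.pi * 2 * t)
vibration = 0.5 * np.sin(2 * np.pi * 60 * t)
filtered = bandpass(motion + vibration, fs, lo=20, hi=120)
```

After filtering, `filtered` tracks the 60 Hz component while the 2 Hz motion is strongly attenuated; in a real pipeline, segments of such filtered signals would then be fed to the classifier.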
Affiliation(s)
- Xiaoyue Ni
  - Querrey Simpson Institute for Bioelectronics, Northwestern University, Evanston, IL 60208
  - Department of Mechanical Engineering and Materials Science, Duke University, Durham, NC 27708
- Wei Ouyang, Hyoyoung Jeong, Jin-Tae Kim
  - Querrey Simpson Institute for Bioelectronics, Northwestern University, Evanston, IL 60208
- Andreas Tzavelis
  - Querrey Simpson Institute for Bioelectronics, Northwestern University, Evanston, IL 60208
  - Department of Biomedical Engineering, Northwestern University, Evanston, IL 60208
  - Medical Scientist Training Program, Feinberg School of Medicine, Northwestern University, Chicago, IL 60611
- Ali Mirzazadeh
  - College of Computing, Georgia Institute of Technology, Atlanta, GA 30332
- Changsheng Wu
  - Querrey Simpson Institute for Bioelectronics, Northwestern University, Evanston, IL 60208
- Chaithanya K Mummidisetty
  - Max Nader Lab for Rehabilitation Technologies and Outcomes Research, Center for Bionic Medicine, Shirley Ryan AbilityLab, Chicago, IL 60611
- Manish Patel
  - Querrey Simpson Institute for Bioelectronics, Northwestern University, Evanston, IL 60208
  - College of Medicine, University of Illinois at Chicago, Chicago, IL 60612
- Nicholas Shawen
  - Max Nader Lab for Rehabilitation Technologies and Outcomes Research, Center for Bionic Medicine, Shirley Ryan AbilityLab, Chicago, IL 60611
- Joy Huang, Hope Chen
  - Feinberg School of Medicine, Northwestern University, Chicago, IL 60611
- Sowmya Ravi
  - Division of Thoracic Surgery, Feinberg School of Medicine, Northwestern University, Chicago, IL 60611
- Jan-Kai Chang
  - Querrey Simpson Institute for Bioelectronics, Northwestern University, Evanston, IL 60208
  - Wearifi Inc., Evanston, IL 60201
- KunHyuck Lee, Yixin Wu
  - Querrey Simpson Institute for Bioelectronics, Northwestern University, Evanston, IL 60208
  - Department of Materials Science and Engineering, Northwestern University, Evanston, IL 60208
- Ferrona Lie, Youn J Kang
  - Querrey Simpson Institute for Bioelectronics, Northwestern University, Evanston, IL 60208
- Jong Uk Kim
  - School of Chemical Engineering, Sungkyunkwan University, Suwon, 16419, Republic of Korea
- Leonardo P Chamorro
  - Department of Mechanical Science and Engineering, University of Illinois at Urbana-Champaign, Champaign, IL 61801
- Anthony R Banks
  - Querrey Simpson Institute for Bioelectronics, Northwestern University, Evanston, IL 60208
- Ankit Bharat
  - Department of Surgery, Feinberg School of Medicine, Northwestern University, Chicago, IL 60611
- Arun Jayaraman
  - Max Nader Lab for Rehabilitation Technologies and Outcomes Research, Center for Bionic Medicine, Shirley Ryan AbilityLab, Chicago, IL 60611
- Shuai Xu
  - Querrey Simpson Institute for Bioelectronics, Northwestern University, Evanston, IL 60208
  - Department of Dermatology, Feinberg School of Medicine, Northwestern University, Chicago, IL 60611
- John A Rogers
  - Querrey Simpson Institute for Bioelectronics, Northwestern University, Evanston, IL 60208
  - Department of Biomedical Engineering, Northwestern University, Evanston, IL 60208
  - Department of Materials Science and Engineering, Northwestern University, Evanston, IL 60208
  - Department of Mechanical Engineering, Northwestern University, Evanston, IL 60208
  - Department of Chemistry, Northwestern University, Evanston, IL 60208
  - Department of Electrical Engineering and Computer Science, Northwestern University, Evanston, IL 60208
  - Department of Neurological Surgery, Northwestern University, Evanston, IL 60208

15
Alafif T, Tehame AM, Bajaba S, Barnawi A, Zia S. Machine and Deep Learning towards COVID-19 Diagnosis and Treatment: Survey, Challenges, and Future Directions. INTERNATIONAL JOURNAL OF ENVIRONMENTAL RESEARCH AND PUBLIC HEALTH 2021; 18:1117. [PMID: 33513984 PMCID: PMC7908539 DOI: 10.3390/ijerph18031117] [Citation(s) in RCA: 57] [Impact Index Per Article: 19.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 12/12/2020] [Revised: 01/16/2021] [Accepted: 01/17/2021] [Indexed: 12/13/2022]
Abstract
Machine learning (ML) and deep learning (DL) have been applied successfully in many areas of everyday life, and they have also been instrumental in tackling the global outbreak of Coronavirus disease (COVID-19). The SARS-CoV-2-induced COVID-19 epidemic has spread rapidly across the world, and the fight to curb the disease involves most states, companies, and scientific research institutions. In this research, we review Artificial Intelligence (AI)-based ML and DL methods for COVID-19 diagnosis and treatment, and summarize the available datasets, tools, and reported performance. This survey offers a detailed overview of the existing state-of-the-art methodologies for ML and DL researchers and the wider health community, with descriptions of how ML, DL, and data can improve our understanding and management of COVID-19 and help avoid further outbreaks. Challenges and future directions are also discussed.
Affiliation(s)
- Tarik Alafif
  - Computer Science Department, Jamoum University College, Umm Al-Qura University, Jamoum 25375, Saudi Arabia
- Abdul Muneeim Tehame
  - Department of Software Engineering, Sir Syed University of Engineering and Technology, Karachi 75300, Pakistan
- Saleh Bajaba
  - Business Administration Department, King Abdulaziz University, Jeddah 21589, Saudi Arabia
- Ahmed Barnawi
  - Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah 21589, Saudi Arabia
- Saad Zia
  - IT Department, Jeddah Cable Company, Jeddah 31248, Saudi Arabia

16
Mitrofanova A, Mikhaylov D, Shaznaev I, Chumanskaia V, Saveliev V. Acoustery System for Differential Diagnosing of Coronavirus COVID-19 Disease. IEEE OPEN JOURNAL OF ENGINEERING IN MEDICINE AND BIOLOGY 2021; 2:299-303. [PMID: 35402972 PMCID: PMC8940188 DOI: 10.1109/ojemb.2021.3127078] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/11/2021] [Revised: 10/12/2021] [Accepted: 11/05/2021] [Indexed: 11/08/2022] Open
Abstract
Goal: Because of the outbreak of coronavirus infection, healthcare systems face a shortage of medical professionals. We present a system for the differential diagnosis of coronavirus disease, based on deep learning techniques, that can be deployed in clinics. Methods: A recurrent network with a convolutional neural network encoder and an attention mechanism is used. A database of about 3000 cough recordings was collected through the Acoustery mobile application in hospitals in Russia, Belarus, and Kazakhstan from April 2020 to October 2020. Results: The model's classification accuracy reaches 85%, with precision and recall of 78.5% and 73%, respectively. Conclusions: The results are satisfactory, and the proposed model is already being tested by doctors to identify avenues for improvement. Other architectures should also be considered, along with a larger training sample and the use of all available patient information.
Affiliation(s)
- Dmitry Mikhaylov
  - Lebedev Physical Institute, Russian Academy of Sciences, Moscow 119991, Russia
- Vera Chumanskaia
  - Immanuel Kant Baltic Federal University, Kaliningrad 236041, Russia
- Valeri Saveliev
  - Huazhong University of Science and Technology, Wuhan 430074, Hubei, China

17
Hall JI, Lozano M, Estrada-Petrocelli L, Birring S, Turner R. The present and future of cough counting tools. J Thorac Dis 2020; 12:5207-5223. [PMID: 33145097 PMCID: PMC7578475 DOI: 10.21037/jtd-2020-icc-003] [Citation(s) in RCA: 27] [Impact Index Per Article: 6.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/27/2022]
Abstract
The widespread use of cough counting tools has, to date, been limited by a reliance on human input to determine cough frequency. However, over the last two decades advances in digital technology and audio capture have reduced this dependence. As a result, cough frequency is increasingly recognised as a measurable parameter of respiratory disease. Cough frequency is now the gold standard primary endpoint for trials of new treatments for chronic cough, has been investigated as a marker of infectiousness in tuberculosis (TB), and used to demonstrate recovery in exacerbations of chronic obstructive pulmonary disease (COPD). This review discusses the principles of automatic cough detection and summarises key currently and recently used cough counting technology in clinical research. It additionally makes some predictions on future directions in the field based on recent developments. It seems likely that newer approaches to signal processing, the adoption of techniques from automatic speech recognition, and the widespread ownership of mobile devices will help drive forward the development of real-time fully automated ambulatory cough frequency monitoring over the coming years. These changes should allow cough counting systems to transition from their current status as a niche research tool in chronic cough to a much more widely applicable method for assessing, investigating and understanding respiratory disease.
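As a toy illustration of the signal-processing principle behind automatic cough counters surveyed above (this sketch is not any specific system discussed in the review; the frame length, energy threshold, and refractory gap are invented for the example), a short-time energy envelope can be thresholded and nearby super-threshold frames merged into discrete counted events:

```python
import numpy as np

def count_events(x, fs, frame_s=0.05, thresh=0.1, min_gap_s=0.25):
    """Count burst-like events (e.g. cough candidates) in an audio signal.

    Computes short-time RMS energy per frame, marks frames above `thresh`,
    and merges marked frames closer than `min_gap_s` into a single event.
    """
    frame = int(frame_s * fs)
    n = len(x) // frame
    rms = np.sqrt(np.mean(x[:n * frame].reshape(n, frame) ** 2, axis=1))
    active = np.flatnonzero(rms > thresh)
    if active.size == 0:
        return 0
    gaps = np.diff(active) * frame_s          # spacing of active frames, s
    return int(1 + np.sum(gaps > min_gap_s))  # new event per large gap

# Three 0.1 s noise bursts, one second apart, on a quiet background.
fs = 8000
rng = np.random.default_rng(1)
x = 0.01 * rng.standard_normal(fs * 3)
for start in (0.2, 1.2, 2.2):
    i = int(start * fs)
    x[i:i + int(0.1 * fs)] += rng.standard_normal(int(0.1 * fs))
n_events = count_events(x, fs)
```

Real systems replace the energy threshold with a trained classifier over spectral features, but the event-merging step remains essentially the same.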
Affiliation(s)
- Jocelin Isabel Hall
  - Centre for Human and Applied Physiological Sciences, King's College London, London, UK
- Manuel Lozano
  - Institute for Bioengineering of Catalonia (IBEC), The Barcelona Institute of Science and Technology (BIST), Barcelona, Spain
  - Biomedical Research Networking Centre in Bioengineering, Biomaterials and Nanomedicine (CIBER-BBN), Barcelona, Spain
  - Department of Automatic Control (ESAII), Universitat Politècnica de Catalunya (UPC)-Barcelona Tech, Barcelona, Spain
- Luis Estrada-Petrocelli
  - Institute for Bioengineering of Catalonia (IBEC), The Barcelona Institute of Science and Technology (BIST), Barcelona, Spain
  - Biomedical Research Networking Centre in Bioengineering, Biomaterials and Nanomedicine (CIBER-BBN), Barcelona, Spain
  - Facultad de Ingeniería, Universidad Latina de Panamá, Panama City, Panama
- Surinder Birring
  - Centre for Human and Applied Physiological Sciences, King's College London, London, UK
  - Department of Respiratory Medicine, King's College Hospital NHS Foundation Trust, London, UK
- Richard Turner
  - Department of Respiratory Medicine, Charing Cross Hospital, Imperial College Healthcare NHS Trust, London, UK

18
Barata F, Tinschert P, Rassouli F, Steurer-Stey C, Fleisch E, Puhan MA, Brutsche M, Kotz D, Kowatsch T. Automatic Recognition, Segmentation, and Sex Assignment of Nocturnal Asthmatic Coughs and Cough Epochs in Smartphone Audio Recordings: Observational Field Study. J Med Internet Res 2020; 22:e18082. [PMID: 32459641 PMCID: PMC7388043 DOI: 10.2196/18082] [Citation(s) in RCA: 20] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/03/2020] [Revised: 04/27/2020] [Accepted: 04/30/2020] [Indexed: 01/22/2023] Open
Abstract
Background Asthma is one of the most prevalent chronic respiratory diseases. Despite increased investment in treatment, little progress has been made in the early recognition and treatment of asthma exacerbations over the last decade. Nocturnal cough monitoring may provide an opportunity to identify patients at risk for imminent exacerbations. Recently developed approaches enable smartphone-based cough monitoring. These approaches, however, have not undergone longitudinal overnight testing, nor have they been specifically evaluated in the context of asthma. Also, the problem of distinguishing partner coughs from patient coughs when two or more people are sleeping in the same room using contact-free audio recordings remains unsolved. Objective The objective of this study was to evaluate the automatic recognition and segmentation of nocturnal asthmatic coughs and cough epochs in smartphone-based audio recordings that were collected in the field. We also aimed to distinguish partner coughs from patient coughs in contact-free audio recordings by classifying coughs based on sex. Methods We used a convolutional neural network model that we had developed in previous work for automated cough recognition. We further used techniques (such as ensemble learning, minibatch balancing, and thresholding) to address the imbalance in the data set. We evaluated the classifier in a classification task and a segmentation task. The cough-recognition classifier served as the basis for the cough-segmentation classifier from continuous audio recordings. We compared automated cough and cough-epoch counts to human-annotated cough and cough-epoch counts. We employed Gaussian mixture models to build a classifier for cough and cough-epoch signals based on sex. Results We recorded audio data from 94 adults with asthma (overall: mean 43 years; SD 16 years; female: 54/94, 57%; male: 40/94, 43%). Audio data were recorded by each participant in their everyday environment using a smartphone placed next to their bed; recordings were made over a period of 28 nights. Out of 704,697 sounds, we identified 30,304 sounds as coughs. A total of 26,166 coughs occurred without a 2-second pause between coughs, yielding 8238 cough epochs. The ensemble classifier performed well with a Matthews correlation coefficient of 92% in a pure classification task and achieved cough counts comparable to those of human annotators in the segmentation of coughing. The count difference between automated and human-annotated coughs was a mean –0.1 (95% CI –12.11, 11.91) coughs. The count difference between automated and human-annotated cough epochs was a mean 0.24 (95% CI –3.67, 4.15) cough epochs. The Gaussian mixture model cough epoch–based sex classification performed best, yielding an accuracy of 83%. Conclusions Our study showed longitudinal nocturnal cough and cough-epoch recognition from nightly recorded smartphone-based audio from adults with asthma. The model distinguishes partner cough from patient cough in contact-free recordings by identifying cough and cough-epoch signals that correspond to the sex of the patient. This research represents a step towards enabling passive and scalable cough monitoring for adults with asthma.
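The sex-assignment step above relies on likelihood comparison between per-class Gaussian mixture models. A minimal, hedged sketch of the idea follows, using a single-component Gaussian per class as a simplification of the study's full mixtures; the 2-D synthetic "cough feature" vectors, class means, and all names are invented for the example:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic stand-ins for per-cough acoustic feature vectors.
female = rng.normal([0.0, 0.0], 0.5, size=(200, 2))
male = rng.normal([2.0, 2.0], 0.5, size=(200, 2))

def fit_gaussian(X):
    """Single-Gaussian class model: mean vector and covariance matrix."""
    return X.mean(axis=0), np.cov(X, rowvar=False)

def log_likelihood(X, mean, cov):
    """Per-sample multivariate normal log-density under the class model."""
    diff = X - mean
    inv = np.linalg.inv(cov)
    quad = np.einsum("ij,jk,ik->i", diff, inv, diff)
    _, logdet = np.linalg.slogdet(cov)
    return -0.5 * (X.shape[1] * np.log(2 * np.pi) + logdet + quad)

model_f, model_m = fit_gaussian(female), fit_gaussian(male)

def predict_male(X):
    """1 where the male-class model assigns higher likelihood, else 0."""
    return (log_likelihood(X, *model_m) > log_likelihood(X, *model_f)).astype(int)

test_f = rng.normal([0.0, 0.0], 0.5, size=(50, 2))
test_m = rng.normal([2.0, 2.0], 0.5, size=(50, 2))
acc = np.concatenate([predict_male(test_f) == 0,
                      predict_male(test_m) == 1]).mean()
```

Replacing `fit_gaussian` with a multi-component mixture (e.g. fitted by EM) recovers the GMM formulation; the decision rule, comparing per-class log-likelihoods, is unchanged.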
Affiliation(s)
- Filipe Barata
  - Center for Digital Health Interventions, Department of Management, Technology, and Economics, ETH Zurich, Zurich, Switzerland
- Peter Tinschert
  - Center for Digital Health Interventions, Institute of Technology Management, University of St. Gallen, St. Gallen, Switzerland
- Frank Rassouli
  - Lung Center, Cantonal Hospital St. Gallen, St. Gallen, Switzerland
- Claudia Steurer-Stey
  - Institute of Epidemiology, Biostatistics and Prevention, University of Zurich, Zurich, Switzerland
  - mediX Group Practice, Zurich, Switzerland
- Elgar Fleisch
  - Center for Digital Health Interventions, Department of Management, Technology, and Economics, ETH Zurich, Zurich, Switzerland
  - Center for Digital Health Interventions, Institute of Technology Management, University of St. Gallen, St. Gallen, Switzerland
- Milo Alan Puhan
  - Institute of Epidemiology, Biostatistics and Prevention, University of Zurich, Zurich, Switzerland
- Martin Brutsche
  - Lung Center, Cantonal Hospital St. Gallen, St. Gallen, Switzerland
- David Kotz
  - Center for Digital Health Interventions, Department of Management, Technology, and Economics, ETH Zurich, Zurich, Switzerland
  - Department of Computer Science, Dartmouth College, Hanover, NH, United States
  - Center for Technology and Digital Health, Dartmouth College, Hanover, NH, United States
- Tobias Kowatsch
  - Center for Digital Health Interventions, Department of Management, Technology, and Economics, ETH Zurich, Zurich, Switzerland
  - Center for Digital Health Interventions, Institute of Technology Management, University of St. Gallen, St. Gallen, Switzerland

19
Imran A, Posokhova I, Qureshi HN, Masood U, Riaz MS, Ali K, John CN, Hussain MI, Nabeel M. AI4COVID-19: AI enabled preliminary diagnosis for COVID-19 from cough samples via an app. INFORMATICS IN MEDICINE UNLOCKED 2020; 20:100378. [PMID: 32839734 PMCID: PMC7318970 DOI: 10.1016/j.imu.2020.100378] [Citation(s) in RCA: 224] [Impact Index Per Article: 56.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/04/2020] [Revised: 06/19/2020] [Accepted: 06/19/2020] [Indexed: 02/06/2023] Open
Abstract
BACKGROUND The inability to test at scale has become humanity's Achilles' heel in the ongoing war against the COVID-19 pandemic. A scalable screening tool would be a game changer. Building on prior work on cough-based diagnosis of respiratory diseases, we propose, develop and test an Artificial Intelligence (AI)-powered screening solution for COVID-19 infection that is deployable via a smartphone app. The app, named AI4COVID-19, records and sends three 3-s cough sounds to an AI engine running in the cloud, and returns a result within 2 min. METHODS Cough is a symptom of over thirty non-COVID-19-related medical conditions. This makes the diagnosis of a COVID-19 infection by cough alone an extremely challenging multidisciplinary problem. We address this problem by investigating the distinctness of pathomorphological alterations in the respiratory system induced by COVID-19 infection when compared to other respiratory infections. To overcome the COVID-19 cough training data shortage, we exploit transfer learning. To reduce the misdiagnosis risk stemming from the complex dimensionality of the problem, we leverage a multi-pronged, mediator-centered, risk-averse AI architecture. RESULTS Results show AI4COVID-19 can distinguish between COVID-19 coughs and several types of non-COVID-19 coughs. The accuracy is promising enough to encourage a large-scale collection of labeled cough data to gauge the generalization capability of AI4COVID-19. AI4COVID-19 is not a clinical-grade testing tool. Instead, it offers a screening tool deployable anytime, anywhere, by anyone. It can also be a clinical decision assistance tool used to channel clinical testing and treatment to those who need it the most, thereby saving more lives.
Affiliation(s)
- Ali Imran
  - AI4Networks Research Center, Dept. of Electrical & Computer Engineering, University of Oklahoma, USA
  - AI4Lyf LLC, USA
- Haneya N Qureshi, Usama Masood, Muhammad Sajid Riaz
  - AI4Networks Research Center, Dept. of Electrical & Computer Engineering, University of Oklahoma, USA
- Kamran Ali
  - Dept. of Computer Science & Engineering, Michigan State University, USA
- Charles N John
  - AI4Networks Research Center, Dept. of Electrical & Computer Engineering, University of Oklahoma, USA
- Muhammad Nabeel
  - AI4Networks Research Center, Dept. of Electrical & Computer Engineering, University of Oklahoma, USA