1. Tzavelis A, Palla J, Mathur R, Bedford B, Wu YH, Trueb J, Shin HS, Arafa H, Jeong H, Ouyang W, Kwak JY, Chiang J, Schulz S, Carter TM, Rangaraj V, Katsaggelos AK, McColley SA, Rogers JA. Development of a Miniaturized Mechanoacoustic Sensor for Continuous, Objective Cough Detection, Characterization and Physiologic Monitoring in Children With Cystic Fibrosis. IEEE J Biomed Health Inform 2024; 28:5941-5952. [PMID: 38885105] [DOI: 10.1109/jbhi.2024.3415479]
Abstract
Cough is an important symptom in children with acute and chronic respiratory disease. Daily cough is common in Cystic Fibrosis (CF), and increased cough is a symptom of pulmonary exacerbation. To date, cough assessment has been primarily subjective in clinical practice and research. Attempts to develop objective, automatic cough counting tools have faced reliability issues in noisy environments and practical barriers limiting long-term use. This single-center pilot study evaluated the usability, acceptability, and performance of a mechanoacoustic sensor (MAS), previously used for cough classification in adults, in 36 children with CF over brief and multi-day periods in four cohorts. Both children at their baseline health and children with symptoms of pulmonary exacerbation were included. We trained, validated, and deployed custom deep learning algorithms that accurately detect cough and distinguish it from other vocalizations or artifacts, with an overall area under the receiver operating characteristic curve (AUROC) of 0.96 and average precision (AP) of 0.93. Child and parent feedback led to a redesign of the MAS towards a smaller, more discreet device acceptable for daily use in children. Additional improvements optimized power efficiency and data management. The MAS's ability to objectively measure cough and other physiologic signals across clinic, hospital, and home settings is demonstrated, aided in particular by an AUROC of 0.97 and AP of 0.96 for motion artifact rejection. Examples of cough frequency and physiologic parameter correlations with participant-reported outcomes and clinical measurements for individual patients are presented. The MAS is a promising tool for objective longitudinal evaluation of cough in children with CF.
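The entry above reports detection performance as AUROC and average precision (AP). As a point of reference only (this is not the authors' pipeline, and the labels and scores below are placeholders), these two metrics are typically computed from per-segment labels and model scores as follows:

```python
# Minimal sketch: computing AUROC and average precision (AP) for a binary
# cough-vs-other classifier. y_true/y_score are placeholder values, not study data.
import numpy as np
from sklearn.metrics import roc_auc_score, average_precision_score

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])                            # 1 = cough, 0 = other sound/artifact
y_score = np.array([0.92, 0.10, 0.75, 0.66, 0.30, 0.05, 0.81, 0.40])   # model-assigned probabilities

auroc = roc_auc_score(y_true, y_score)         # area under the receiver operating characteristic curve
ap = average_precision_score(y_true, y_score)  # area under the precision-recall curve
print(f"AUROC={auroc:.2f}  AP={ap:.2f}")
```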
2. Wang Y, Yang K, Xu S, Rui S, Xie J, Wang J, Wang X. An automatic cough counting method and system construction for portable devices. Front Bioeng Biotechnol 2024; 12:1477694. [PMID: 39398643] [PMCID: PMC11466865] [DOI: 10.3389/fbioe.2024.1477694]
Abstract
Introduction Cough is a common symptom of respiratory diseases, and prolonged cough monitoring can help doctors assess patients' conditions; cough frequency in particular is an indicator of the state of the patient's lungs. The aim of this paper is therefore to design an automatic cough counting system that monitors the number of coughs per minute over long periods. Methods This paper proposes a complete cough counting process, including denoising, segment extraction, feature calculation, recognition, and counting, together with a wearable automatic cough counting device and its acquisition and reception software. The algorithm was designed and built from realistically captured cough-containing audio from 50 patients, combining short-time features and Mel cepstrum coefficients as features characterizing the cough. Results The accuracy, sensitivity, specificity, and F1 score of the method were 93.24%, 97.58%, 86.97%, and 94.47%, respectively, with a Kappa value of 0.9209, an average counting error of 0.46 counts for a 60-s speech segment, and an average runtime of 2.80 ± 2.27 s. Discussion The method improves on the double-threshold approach in the sensitivity of its thresholds and segment features, and performs better in accuracy, real-time operation, and computing speed, so it can be applied to real-time cough counting and monitoring on small portable devices with limited computing power. The developed wearable automatic cough counting device and the accompanying host computer software application enable long-term monitoring of patients' coughing.
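As a rough illustration of the pipeline this abstract outlines (double-threshold energy segmentation followed by Mel-cepstral features), the sketch below uses assumed frame sizes and thresholds and a synthetic signal; it is not the authors' implementation or their parameter values.

```python
# Hedged sketch of a generic double-threshold cough-segment detector plus MFCC
# features per segment; parameters and the input signal are illustrative assumptions.
import numpy as np
import librosa

def candidate_segments(y, frame_len=1024, hop=512, high_ratio=0.5, low_ratio=0.2):
    """Double-threshold rule on short-time energy: open a segment when energy
    exceeds the high threshold, close it when energy falls below the low one."""
    energy = np.array([np.sum(y[i:i + frame_len] ** 2)
                       for i in range(0, len(y) - frame_len, hop)])
    high, low = high_ratio * energy.max(), low_ratio * energy.max()
    segments, start = [], None
    for i, e in enumerate(energy):
        if start is None and e >= high:
            start = i
        elif start is not None and e < low:
            segments.append((start * hop, i * hop + frame_len))
            start = None
    return segments

sr = 16000
rng = np.random.default_rng(0)
y = 0.01 * rng.standard_normal(5 * sr)                              # synthetic stand-in for a recording
y[sr:sr + 4000] += np.sin(2 * np.pi * 440 * np.arange(4000) / sr)   # one loud cough-like burst

for s, e in candidate_segments(y):
    mfcc = librosa.feature.mfcc(y=y[s:e], sr=sr, n_mfcc=13)   # Mel-cepstral features per segment
    feature_vec = mfcc.mean(axis=1)                           # simple summary for a classifier/counter
```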
Affiliation(s)
- Yixuan Wang: Engineering Training Centre, Beihang University, Beijing, China
- Kehaoyu Yang: School of Automation Science and Electrical Engineering, Beihang University, Beijing, China; Liupanshan Laboratory, Yinchuan, China
- Shaofeng Xu: School of Automation Science and Electrical Engineering, Beihang University, Beijing, China; Liupanshan Laboratory, Yinchuan, China
- Shuwang Rui: School of Automation Science and Electrical Engineering, Beihang University, Beijing, China; Liupanshan Laboratory, Yinchuan, China
- Jiaxing Xie: State Key Laboratory of Respiratory Disease, Guangzhou, China
- Juncheng Wang: Institute of Stomatology, First Medical Center, Chinese PLA General Hospital, Beijing, China
- Xin Wang: Ninth Medical Center of PLA General Hospital, Gynaecology and Obstetrics, Chinese PLA General Hospital, Beijing, China
3. Truong T, Lenga M, Serrurier A, Mohammadi S. Fused Audio Instance and Representation for Respiratory Disease Detection. Sensors (Basel) 2024; 24:6176. [PMID: 39409216] [PMCID: PMC11479208] [DOI: 10.3390/s24196176]
Abstract
Audio-based classification techniques for body sounds have long been studied to aid in the diagnosis of respiratory diseases. While most research is centered on the use of coughs as the main acoustic biomarker, other body sounds also have the potential to detect respiratory diseases. Recent studies on the coronavirus disease 2019 (COVID-19) have suggested that breath and speech sounds, in addition to cough, correlate with the disease. Our study proposes fused audio instance and representation (FAIR) as a method for respiratory disease detection. FAIR relies on constructing a joint feature vector from various body sounds represented in waveform and spectrogram form. We conduct experiments on the use case of COVID-19 detection by combining waveform and spectrogram representation of body sounds. Our findings show that the use of self-attention to combine extracted features from cough, breath, and speech sounds leads to the best performance with an area under the receiver operating characteristic curve (AUC) score of 0.8658, a sensitivity of 0.8057, and a specificity of 0.7958. Compared to models trained solely on spectrograms or waveforms, the use of both representations results in an improved AUC score, demonstrating that combining spectrogram and waveform representation helps to enrich the extracted features and outperforms the models that use only one representation. While this study focuses on COVID-19, FAIR's flexibility allows it to combine various multi-modal and multi-instance features in many other diagnostic applications, potentially leading to more accurate diagnoses across a wider range of diseases.
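The fusion idea described above, per-instance features from several body sounds combined with self-attention before classification, can be sketched as below. This is not the authors' FAIR code; the encoders, dimensions, and instance count are placeholder assumptions.

```python
# Hedged sketch of self-attention fusion over per-instance embeddings
# (e.g., cough/breath/speech x waveform/spectrogram). Dimensions are assumptions.
import torch
import torch.nn as nn

class FusionClassifier(nn.Module):
    def __init__(self, embed_dim=128, n_heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(embed_dim, n_heads, batch_first=True)
        self.head = nn.Sequential(nn.Linear(embed_dim, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, tokens):                        # tokens: (batch, n_instances, embed_dim)
        fused, _ = self.attn(tokens, tokens, tokens)  # let each instance attend to the others
        pooled = fused.mean(dim=1)                    # pool the attended instance embeddings
        return torch.sigmoid(self.head(pooled))       # probability of disease

# Example: a batch of 2 recordings, each contributing 6 embeddings from upstream encoders.
model = FusionClassifier()
scores = model(torch.randn(2, 6, 128))
```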
Affiliation(s)
- Tuan Truong (with M. Lenga and S. Mohammadi): Bayer AG, 13353 Berlin, Germany
- Antoine Serrurier: Clinic for Phoniatrics, Pedaudiology and Communication Disorders, University Hospital of RWTH Aachen, 52074 Aachen, Germany
4. Isangula KG, Haule RJ. Leveraging AI and Machine Learning to Develop and Evaluate a Contextualized User-Friendly Cough Audio Classifier for Detecting Respiratory Diseases: Protocol for a Diagnostic Study in Rural Tanzania. JMIR Res Protoc 2024; 13:e54388. [PMID: 38652526] [PMCID: PMC11077412] [DOI: 10.2196/54388]
Abstract
BACKGROUND Respiratory diseases, including active tuberculosis (TB), asthma, and chronic obstructive pulmonary disease (COPD), constitute substantial global health challenges, necessitating timely and accurate diagnosis for effective treatment and management. OBJECTIVE This research seeks to develop and evaluate a noninvasive, user-friendly, artificial intelligence (AI)-powered cough audio classifier for detecting these respiratory conditions in rural Tanzania. METHODS This is a nonexperimental cross-sectional study whose primary objective is the collection and analysis of cough sounds from patients with active TB, asthma, and COPD in outpatient clinics to generate and evaluate a noninvasive cough audio classifier. Specialized cough sound recording devices, designed to be nonintrusive and user-friendly, will facilitate the collection of diverse cough sound samples from patients attending outpatient clinics in 20 health care facilities in the Shinyanga region. The collected cough sound data will undergo rigorous analysis, using advanced AI signal processing and machine learning techniques. By comparing acoustic features and patterns associated with TB, asthma, and COPD, a robust algorithm capable of automated disease discrimination will be generated, facilitating the development of a smartphone-based cough sound classifier. The classifier will be evaluated against reference standards, including clinical assessments, sputum smear, GeneXpert, chest x-ray, culture and sensitivity, and spirometry and peak expiratory flow, with sensitivity and predictive values calculated. RESULTS This research represents a vital step toward enhancing the diagnostic capabilities available in outpatient clinics, with the potential to revolutionize the field of respiratory disease diagnosis. Findings from the 4 phases of the study will be presented as descriptions supported by relevant images, tables, and figures. The anticipated outcome of this research is the creation of a reliable, noninvasive diagnostic cough classifier that empowers health care professionals and patients themselves to identify and differentiate these respiratory diseases based on cough sound patterns. CONCLUSIONS Cough sound classifiers use advanced technology for early detection and management of respiratory conditions, offering a less invasive and more efficient alternative to traditional diagnostics. This technology promises to ease public health burdens, improve patient outcomes, and enhance health care access in under-resourced areas, potentially transforming respiratory disease management globally. INTERNATIONAL REGISTERED REPORT IDENTIFIER (IRRID) PRR1-10.2196/54388.
Affiliation(s)
- Kahabi Ganka Isangula: School of Nursing and Midwifery, Aga Khan University, Dar Es Salaam, United Republic of Tanzania
- Rogers John Haule: School of Nursing and Midwifery, Aga Khan University, Dar Es Salaam, United Republic of Tanzania
5. Hegde S, Sreeram S, Alter IL, Shor C, Valdez TA, Meister KD, Rameau A. Cough Sounds in Screening and Diagnostics: A Scoping Review. Laryngoscope 2024; 134:1023-1031. [PMID: 37672667] [PMCID: PMC10915103] [DOI: 10.1002/lary.31042]
Abstract
OBJECTIVE The aim of the study was to examine applications of cough sounds towards screening tools and diagnostics in the biomedical and engineering literature, with particular focus on disease types, acoustic data collection protocols, data processing and analytics, accuracy, and limitations. DATA SOURCES PubMed, EMBASE, Web of Science, Scopus, Cochrane Library, IEEE Xplore, Engineering Village, and ACM Digital Library were searched from inception to August 2021. REVIEW METHODS A scoping review was conducted on screening and diagnostic uses of cough sounds in adults, children, and animals, in English peer-reviewed and gray literature of any design. RESULTS From a total of 438 abstracts screened, 108 articles met inclusion criteria. Human studies were most common (77.8%); the majority focused on adults (57.3%). Single-modality acoustic data collection was most common (71.2%), with few multimodal studies, including plethysmography (15.7%) and clinico-demographic data (7.4%). Data analytics methods were highly variable, with 61.1% using machine learning, the majority of which (78.8%) were published after 2010. Studies commonly focused on cough detection (41.7%) and screening of COVID-19 (11.1%); among pediatric studies, the most common focus was diagnosis of asthma (52.6%). CONCLUSION Though the use of cough sounds in diagnostics is not new, academic interest has accelerated in the past decade. Cough sound offers the possibility of an accessible, noninvasive, and low-cost disease biomarker, particularly in the era of rapid development of machine learning capabilities in combination with the ubiquity of cellular technology with high-quality recording capability. However, most cough sound literature hinges on nonstandardized data collection protocols and small, nondiverse, single-modality datasets, with limited external validity. Laryngoscope, 134:1023-1031, 2024.
Affiliation(s)
- Siddhi Hegde: KVG Medical College and Hospital, Sullia, Karnataka, India
- Shreya Sreeram: KVG Medical College and Hospital, Sullia, Karnataka, India
- Isaac L. Alter: Columbia University Vagelos College of Physicians and Surgeons, New York, NY, U.S.A.
- Chaya Shor: Weill Cornell Medicine, Sean Parker Institute for the Voice, New York, NY, U.S.A.
- Tulio A. Valdez: Division of Pediatric Otolaryngology, Otolaryngology-Head & Neck Surgery, Stanford University, Stanford, California, U.S.A.
- Kara D. Meister: Division of Pediatric Otolaryngology, Otolaryngology-Head & Neck Surgery, Stanford University, Stanford, California, U.S.A.
- Anaïs Rameau: Weill Cornell Medicine, Sean Parker Institute for the Voice, New York, NY, U.S.A.
6. Geronimo A. Turning up the volume on neuromuscular cough medicine. Muscle Nerve 2024; 69:129-130. [PMID: 38037436] [DOI: 10.1002/mus.28016]
Abstract
See article on pages 213–217 in this issue.
Affiliation(s)
- Andrew Geronimo: Neurology, Penn State College of Medicine, Hershey, Pennsylvania, USA
7. Ghrabli S, Elgendi M, Menon C. Identifying unique spectral fingerprints in cough sounds for diagnosing respiratory ailments. Sci Rep 2024; 14:593. [PMID: 38182601] [PMCID: PMC10770161] [DOI: 10.1038/s41598-023-50371-2]
Abstract
Coughing, a prevalent symptom of many illnesses, including COVID-19, has led researchers to explore the potential of cough sound signals for cost-effective disease diagnosis. Traditional diagnostic methods, which can be expensive and require specialized personnel, contrast with the more accessible smartphone analysis of coughs. Typically, coughs are classified as wet or dry based on their phase duration. However, the utilization of acoustic analysis for diagnostic purposes is not widespread. Our study examined cough sounds from 1183 COVID-19-positive patients and compared them with 341 non-COVID-19 cough samples, as well as analyzing distinctions between pneumonia and asthma-related coughs. After rigorous optimization across frequency ranges, specific frequency bands were found to correlate with each respiratory ailment. Statistical separability tests validated these findings, and machine learning algorithms, including linear discriminant analysis and k-nearest neighbors classifiers, were employed to confirm the presence of distinct frequency bands in the cough signal power spectrum associated with particular diseases. The identification of these acoustic signatures in cough sounds holds the potential to transform the classification and diagnosis of respiratory diseases, offering an affordable and widely accessible healthcare tool.
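To make the reported approach concrete, the sketch below computes band-power features from a cough's power spectrum and feeds them to the two classifiers named in the abstract (linear discriminant analysis and k-nearest neighbors). The band edges and data are assumptions for illustration, not the bands identified in the study.

```python
# Illustrative only: band powers from a Welch power spectrum, classified with LDA and k-NN.
import numpy as np
from scipy.signal import welch
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier

BANDS = [(0, 500), (500, 1000), (1000, 2000), (2000, 4000)]   # Hz; hypothetical band edges

def band_powers(cough, sr=16000):
    f, pxx = welch(cough, fs=sr, nperseg=1024)
    return np.array([pxx[(f >= lo) & (f < hi)].sum() for lo, hi in BANDS])

rng = np.random.default_rng(0)
X = np.stack([band_powers(rng.standard_normal(16000)) for _ in range(40)])  # placeholder coughs
y = rng.integers(0, 2, size=40)                                             # placeholder labels

for clf in (LinearDiscriminantAnalysis(), KNeighborsClassifier(n_neighbors=5)):
    print(type(clf).__name__, clf.fit(X, y).score(X, y))
```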
Affiliation(s)
- Syrine Ghrabli: Biomedical and Mobile Health Technology Lab, ETH Zurich, 8008 Zurich, Switzerland; Department of Physics, ETH Zurich, 8093 Zurich, Switzerland
- Mohamed Elgendi: Biomedical and Mobile Health Technology Lab, ETH Zurich, 8008 Zurich, Switzerland
- Carlo Menon: Biomedical and Mobile Health Technology Lab, ETH Zurich, 8008 Zurich, Switzerland
8. Alvarado E, Grágeda N, Luzanto A, Mahu R, Wuth J, Mendoza L, Stern RM, Yoma NB. Automatic Detection of Dyspnea in Real Human-Robot Interaction Scenarios. Sensors (Basel) 2023; 23:7590. [PMID: 37688044] [PMCID: PMC10490721] [DOI: 10.3390/s23177590]
Abstract
A respiratory distress estimation technique for telephony previously proposed by the authors is adapted and evaluated in real static and dynamic HRI scenarios. The system is evaluated with a telephone dataset re-recorded using the robotic platform designed and implemented for this study. In addition, the original telephone training data are modified using an environmental model that incorporates natural robot-generated and external noise sources and reverberant effects using room impulse responses (RIRs). The results indicate that the average accuracy and AUC are just 0.4% less than those obtained with matched training/testing conditions with simulated data. Quite surprisingly, there is not much difference in accuracy and AUC between static and dynamic HRI conditions. Moreover, the beamforming methods delay-and-sum and MVDR lead to average improvement in accuracy and AUC equal to 8% and 2%, respectively, when applied to training and testing data. Regarding the complementarity of time-dependent and time-independent features, the combination of both types of classifiers provides the best joint accuracy and AUC score.
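Of the two beamformers credited with the gains above, delay-and-sum is the simpler; a minimal sketch follows, with an assumed microphone geometry and integer steering delays (a real implementation would use fractional-delay filters, and MVDR additionally needs a noise covariance estimate). This is not the authors' code.

```python
# Minimal delay-and-sum beamformer sketch: align each channel toward the assumed
# source direction and average. Geometry and delays are illustrative assumptions.
import numpy as np

def delay_and_sum(frames, delays_samples):
    """frames: (n_mics, n_samples); delays_samples: integer per-mic delays that
    time-align the target direction. Returns one enhanced channel."""
    n_mics, n = frames.shape
    out = np.zeros(n)
    for m in range(n_mics):
        out += np.roll(frames[m], -delays_samples[m])   # advance each channel (np.roll wraps at edges)
    return out / n_mics

mics = np.random.randn(4, 16000)                        # placeholder 4-channel capture
enhanced = delay_and_sum(mics, delays_samples=[0, 2, 4, 6])
```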
Affiliation(s)
- Eduardo Alvarado: Speech Processing and Transmission Laboratory, Electrical Engineering Department, University of Chile, Santiago 8370451, Chile
- Nicolás Grágeda: Speech Processing and Transmission Laboratory, Electrical Engineering Department, University of Chile, Santiago 8370451, Chile
- Alejandro Luzanto: Speech Processing and Transmission Laboratory, Electrical Engineering Department, University of Chile, Santiago 8370451, Chile
- Rodrigo Mahu: Speech Processing and Transmission Laboratory, Electrical Engineering Department, University of Chile, Santiago 8370451, Chile
- Jorge Wuth: Speech Processing and Transmission Laboratory, Electrical Engineering Department, University of Chile, Santiago 8370451, Chile
- Laura Mendoza: Hospital Clínico Universidad de Chile, Santiago 8380420, Chile; Clínica Alemana, Santiago 7630000, Chile
- Richard M. Stern: Department of Electrical and Computer Engineering, Carnegie Mellon University, Pittsburgh, PA 15213, USA
- Néstor Becerra Yoma: Speech Processing and Transmission Laboratory, Electrical Engineering Department, University of Chile, Santiago 8370451, Chile
9. Sfayyih AH, Sulaiman N, Sabry AH. A review on lung disease recognition by acoustic signal analysis with deep learning networks. J Big Data 2023; 10:101. [PMID: 37333945] [PMCID: PMC10259357] [DOI: 10.1186/s40537-023-00762-z]
Abstract
Recently, computer-assisted diagnosis in healthcare has become increasingly viable thanks in large part to technologies such as deep learning and machine learning. Using auditory analysis and medical imaging, these technologies also increase predictive accuracy for prompt and early disease detection. Such technological support helps medical professionals manage more patients amid a shortage of skilled human resources. In addition to serious illnesses such as lung cancer and respiratory disease, the prevalence of breathing difficulties is gradually rising and endangering society. Because early prediction and immediate treatment are crucial for respiratory disorders, chest X-rays and respiratory sound audio are proving to be quite helpful together. In contrast to the many review studies on lung disease classification and detection using deep learning algorithms, only two review studies based on signal analysis for lung disease diagnosis have been published, in 2011 and 2018. This work provides a review of lung disease recognition with acoustic signal analysis using deep learning networks. We anticipate that physicians and researchers working with sound-signal-based machine learning will find this material beneficial.
Affiliation(s)
- Alyaa Hamel Sfayyih: Department of Electrical and Electronic Engineering, Faculty of Engineering, Universiti Putra Malaysia, 43400 Serdang, Malaysia
- Nasri Sulaiman: Department of Electrical and Electronic Engineering, Faculty of Engineering, Universiti Putra Malaysia, 43400 Serdang, Malaysia
- Ahmad H. Sabry: Department of Computer Engineering, Al-Nahrain University, Al Jadriyah Bridge, 64074 Baghdad, Iraq
10. Kraman SS, Pasterkamp H, Wodicka GR. Smart Devices Are Poised to Revolutionize the Usefulness of Respiratory Sounds. Chest 2023; 163:1519-1528. [PMID: 36706908] [PMCID: PMC10925548] [DOI: 10.1016/j.chest.2023.01.024]
Abstract
The association between breathing sounds and respiratory health or disease has been exceptionally useful in the practice of medicine since the advent of the stethoscope. Remote patient monitoring technology and artificial intelligence offer the potential to develop practical means of assessing respiratory function or dysfunction through continuous assessment of breathing sounds when patients are at home, at work, or even asleep. Automated reports such as cough counts or the percentage of the breathing cycles containing wheezes can be delivered to a practitioner via secure electronic means or returned to the clinical office at the first opportunity. This has not previously been possible. The four respiratory sounds that most lend themselves to this technology are wheezes, to detect breakthrough asthma at night and even occupational asthma when a patient is at work; snoring as an indicator of OSA or adequacy of CPAP settings; cough in which long-term recording can objectively assess treatment adequacy; and crackles, which, although subtle and often overlooked, can contain important clinical information when appearing in a home recording. In recent years, a flurry of publications in the engineering literature described construction, usage, and testing outcomes of such devices. Little of this has appeared in the medical literature. The potential value of this technology for pulmonary medicine is compelling. We expect that these tiny, smart devices soon will allow us to address clinical questions that occur away from the clinic.
Affiliation(s)
- Steve S Kraman: Department of Internal Medicine, Division of Pulmonary, Critical Care and Sleep Medicine, University of Kentucky, Lexington, KY
- Hans Pasterkamp: Department of Pediatrics and Child Health, Max Rady College of Medicine, University of Manitoba, Winnipeg, MB, Canada
- George R Wodicka: Weldon School of Biomedical Engineering, Purdue University, West Lafayette, IN
11. Alvarado E, Grágeda N, Luzanto A, Mahu R, Wuth J, Mendoza L, Yoma NB. Dyspnea Severity Assessment Based on Vocalization Behavior with Deep Learning on the Telephone. Sensors (Basel) 2023; 23:2441. [PMID: 36904646] [PMCID: PMC10007248] [DOI: 10.3390/s23052441]
Abstract
In this paper, a system to assess dyspnea with the mMRC scale over the telephone, via deep learning, is proposed. The method is based on modeling the spontaneous behavior of subjects while pronouncing controlled phonetizations. These vocalizations were designed, or chosen, to deal with the stationary noise suppression of cellular handsets, to provoke different rates of exhaled air, and to stimulate different levels of fluency. Time-independent and time-dependent engineered features were proposed and selected, and a k-fold scheme with double validation was adopted to select the models with the greatest potential for generalization. Moreover, score fusion methods were investigated to optimize the complementarity of the controlled phonetizations and the engineered and selected features. The results reported here were obtained from 104 participants: 34 healthy individuals and 70 patients with respiratory conditions. The subjects' vocalizations were recorded in a telephone call via an IVR server. The system provided an accuracy of 59% (i.e., estimating the correct mMRC level), a root mean square error of 0.98, a false positive rate of 6%, a false negative rate of 11%, and an area under the ROC curve of 0.97. Finally, a prototype with an ASR-based automatic segmentation scheme was developed and implemented to estimate dyspnea online.
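The score-fusion step mentioned above can be illustrated with a simple weighted late fusion of two classifiers, one trained on time-independent and one on time-dependent features; the classifiers, weight, and data below are assumptions, not the authors' models.

```python
# Hedged sketch of late score fusion across two feature families; all values are placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
X_static = rng.standard_normal((80, 10))       # time-independent features
X_temporal = rng.standard_normal((80, 20))     # time-dependent features
y = rng.integers(0, 2, size=80)                # placeholder binary target (e.g., elevated mMRC)

clf_a = LogisticRegression(max_iter=1000).fit(X_static, y)
clf_b = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_temporal, y)

w = 0.5                                        # fusion weight, ideally chosen on validation folds
fused = w * clf_a.predict_proba(X_static)[:, 1] + (1 - w) * clf_b.predict_proba(X_temporal)[:, 1]
prediction = (fused >= 0.5).astype(int)
```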
Affiliation(s)
- Eduardo Alvarado: Speech Processing and Transmission Laboratory, Electrical Engineering Department, University of Chile, Santiago 8370451, Chile
- Nicolás Grágeda: Speech Processing and Transmission Laboratory, Electrical Engineering Department, University of Chile, Santiago 8370451, Chile
- Alejandro Luzanto: Speech Processing and Transmission Laboratory, Electrical Engineering Department, University of Chile, Santiago 8370451, Chile
- Rodrigo Mahu: Speech Processing and Transmission Laboratory, Electrical Engineering Department, University of Chile, Santiago 8370451, Chile
- Jorge Wuth: Speech Processing and Transmission Laboratory, Electrical Engineering Department, University of Chile, Santiago 8370451, Chile
- Laura Mendoza: Clinical Hospital, University of Chile, Santiago 8380420, Chile
- Néstor Becerra Yoma: Speech Processing and Transmission Laboratory, Electrical Engineering Department, University of Chile, Santiago 8370451, Chile
12. Shiomi M, Kubota A, Kimoto M, Iio T, Shimohara K. Stay away from me: Coughing increases social distance even in a virtual environment. PLoS One 2022; 17:e0279717. [PMID: 36576927] [PMCID: PMC9797075] [DOI: 10.1371/journal.pone.0279717]
Abstract
This study investigated whether the coughing behaviors of virtual agents encourage infection avoidance behavior, i.e., distancing behaviors. We hypothesized that the changes in people's lifestyles in physical environments due to COVID-19 probably influence their behaviors, even in virtual environments where no infection risk is present. We focused on different types of virtual agents because non-human agents, such as robot-like agents, cannot spread a virus by coughing. We prepared four kinds of virtual agents (human-like/robot-like and male/female) and coughing behaviors for them and experimentally measured the personal distance maintained by participants toward them. Our experiment results showed that participants chose a greater distance from coughing agents, regardless of the types, and negatively evaluated them. They also chose a greater distance from male agents than from female agents.
Affiliation(s)
- Masahiro Shiomi: Department of Agent Interaction Design Laboratory, Advanced Telecommunications Research Institute International, Kyoto, Japan
- Atsumu Kubota: Department of Agent Interaction Design Laboratory, Advanced Telecommunications Research Institute International, Kyoto, Japan; Faculty of Science and Engineering, Doshisha University, Kyoto, Japan
- Mitsuhiko Kimoto: Department of Agent Interaction Design Laboratory, Advanced Telecommunications Research Institute International, Kyoto, Japan
- Takamasa Iio: Department of Agent Interaction Design Laboratory, Advanced Telecommunications Research Institute International, Kyoto, Japan; Faculty of Culture and Information Science, Doshisha University, Kyoto, Japan
- Katsunori Shimohara: Department of Agent Interaction Design Laboratory, Advanced Telecommunications Research Institute International, Kyoto, Japan; Faculty of Science and Engineering, Doshisha University, Kyoto, Japan
13. Zhang M, Sykes DL, Brindle K, Sadofsky LR, Morice AH. Chronic cough-the limitation and advances in assessment techniques. J Thorac Dis 2022; 14:5097-5119. [PMID: 36647459] [PMCID: PMC9840016] [DOI: 10.21037/jtd-22-874]
Abstract
Accurate and consistent assessment of cough is essential to advance the understanding of cough mechanisms and to individualise the management of patients. Considerable progress has been made in this work. Here we review the currently available tools for subjectively and objectively measuring both cough sensitivity and severity, and we offer some opinions on new techniques and future directions. The simple and practical Visual Analogue Scale (VAS), the Leicester Cough Questionnaire (LCQ), and the Cough Specific Quality of Life Questionnaire (CQLQ) are the most widely used self-reported questionnaires for evaluating and quantifying cough severity. The Hull Airway Reflux Questionnaire (HARQ) is a tool to elucidate the constellation of symptoms underlying the diagnosis of chronic cough. Chemical excitation tests, such as the capsaicin, citric acid, and adenosine triphosphate (ATP) challenge tests, are widely used to explore the pathophysiological mechanisms of the cough reflex. Cough frequency is an ideal primary endpoint for clinical research, but the application of cough counters in clinical practice has been limited by high cost and reliance on aural validation. The ongoing development of cough detection technology for smartphone apps and wearable devices will hopefully simplify cough counting, transitioning it from a niche research tool to a widely available clinical application.
Affiliation(s)
- Mengru Zhang: Centre for Clinical Science, Respiratory Medicine, Hull York Medical School, University of Hull, Castle Hill Hospital, Cottingham, East Yorkshire, UK; Department of Pulmonary and Critical Care Medicine, Tongji Hospital, Tongji University School of Medicine, Shanghai, China
- Dominic L. Sykes: Centre for Clinical Science, Respiratory Medicine, Hull York Medical School, University of Hull, Castle Hill Hospital, Cottingham, East Yorkshire, UK
- Kayleigh Brindle: Centre for Clinical Science, Respiratory Medicine, Hull York Medical School, University of Hull, Castle Hill Hospital, Cottingham, East Yorkshire, UK
- Laura R. Sadofsky: Centre for Clinical Science, Respiratory Medicine, Hull York Medical School, University of Hull, Castle Hill Hospital, Cottingham, East Yorkshire, UK
- Alyn H. Morice: Centre for Clinical Science, Respiratory Medicine, Hull York Medical School, University of Hull, Castle Hill Hospital, Cottingham, East Yorkshire, UK
14. Nallakaruppan MK, Ramalingam S, Somayaji SRK, Prathiba SB. Comparative Analysis of Deep Learning Models Used in Impact Analysis of Coronavirus Chest X-ray Imaging. Biomedicines 2022; 10:2791. [PMID: 36359310] [PMCID: PMC9687278] [DOI: 10.3390/biomedicines10112791]
Abstract
The impact analysis of deep learning models for COVID-19-infected X-ray images is an extremely challenging task. Every model has unique capabilities that can provide suitable solutions for a given problem. This work analyzes various deep learning models used for classifying chest X-ray images and tests their performance-defining factors, such as accuracy, F1-score, and training and validation loss, on the training dataset. These deep learning models are multi-layered architectures, and these parameters fluctuate with the behavior of the layers, the learning rate, training efficiency, and over-fitting, which can introduce sudden changes in training accuracy, testing accuracy, loss, validation loss, or F1-score. Some models, such as Xception, produce linear responses with respect to the training and testing data, but most show variation in either the accuracy or the loss functions. The work performs a detailed experimental analysis of deep learning image neural network models, compares their accuracy, F1-score, recall, and precision alongside training accuracy, testing accuracy, loss, and validation loss, and assesses their suitability for various applications. The models used are ResNet, VGG16, ResNet with VGG, Inception V3, Xception with transfer learning, and a CNN. Each model is trained with more than 1500 chest X-ray images and tested with around 132 samples of the X-ray image dataset, and every epoch is recorded to measure changes in these parameters during the experiments. The work also lists the challenges encountered in implementing and experimenting with these models, provides solutions for enhancing their performance, and offers insight for future research through its findings and future directions.
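As a generic illustration of the transfer-learning setup named in the abstract (an ImageNet-pretrained Xception backbone with a small trainable head), a Keras sketch follows; the image size, head, and training details are assumptions, not the paper's configuration.

```python
# Hedged transfer-learning sketch: frozen Xception backbone plus a binary head
# for chest X-ray classification. Hyperparameters are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

base = tf.keras.applications.Xception(weights="imagenet", include_top=False,
                                      input_shape=(299, 299, 3))
base.trainable = False                                   # keep the pretrained features fixed

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.3),
    layers.Dense(1, activation="sigmoid"),               # probability of a COVID-positive X-ray
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy", tf.keras.metrics.Precision(), tf.keras.metrics.Recall()])

# Training data would come from a labeled X-ray folder, e.g. (hypothetical path):
# train_ds = tf.keras.utils.image_dataset_from_directory("xrays/", image_size=(299, 299))
# model.fit(train_ds, epochs=10)  # per-epoch accuracy/loss can then be compared across backbones
```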
Affiliation(s)
- Subhashini Ramalingam: School of Information Technology and Engineering, Vellore Institute of Technology, Vellore 632014, India
- Sahaya Beni Prathiba: School of Computer Science and Engineering, Vellore Institute of Technology, Chennai 600127, India
15. Aleixandre JG, Elgendi M, Menon C. The Use of Audio Signals for Detecting COVID-19: A Systematic Review. Sensors (Basel) 2022; 22:8114. [PMID: 36365811] [PMCID: PMC9653621] [DOI: 10.3390/s22218114]
Abstract
A systematic review on the topic of automatic detection of COVID-19 using audio signals was performed. A total of 48 papers were obtained after screening 659 records identified in the PubMed, IEEE Xplore, Embase, and Google Scholar databases. The reviewed studies employ a mixture of open-access and self-collected datasets. Because COVID-19 has only recently been investigated, there is a limited amount of available data. Most of the data are crowdsourced, which motivated a detailed study of the various pre-processing techniques used by the reviewed studies. Although 13 of the 48 identified papers show promising results, several have been performed with small-scale datasets (<200). Among those papers, convolutional neural networks and support vector machine algorithms were the best-performing methods. The analysis of the extracted features showed that Mel-frequency cepstral coefficients and zero-crossing rate continue to be the most popular choices. Less common alternatives, such as non-linear features, have also been proven to be effective. The reported values for sensitivity range from 65.0% to 99.8% and those for accuracy from 59.0% to 99.8%.
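Since MFCCs, zero-crossing rate, and SVMs are singled out above as the most common and best-performing classical choices, here is a minimal, hedged sketch of that combination with synthetic stand-in audio; it does not reproduce any reviewed study.

```python
# Illustrative only: MFCC + zero-crossing-rate features with an SVM classifier.
import numpy as np
import librosa
from sklearn.svm import SVC

def features(y, sr=16000):
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).mean(axis=1)   # averaged MFCCs
    zcr = librosa.feature.zero_crossing_rate(y).mean()                # mean zero-crossing rate
    return np.concatenate([mfcc, [zcr]])

rng = np.random.default_rng(0)
coughs = [rng.standard_normal(16000) for _ in range(20)]  # placeholders for recorded coughs
labels = rng.integers(0, 2, size=20)                      # 1 = COVID-19-positive (placeholder)

X = np.stack([features(y) for y in coughs])
clf = SVC(kernel="rbf").fit(X, labels)
```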
Affiliation(s)
- José Gómez Aleixandre: Biomedical and Mobile Health Technology Lab, ETH Zurich, 8008 Zurich, Switzerland; Department of Physics, ETH Zurich, 8093 Zurich, Switzerland
- Mohamed Elgendi: Biomedical and Mobile Health Technology Lab, ETH Zurich, 8008 Zurich, Switzerland
- Carlo Menon: Biomedical and Mobile Health Technology Lab, ETH Zurich, 8008 Zurich, Switzerland