1. Chételat O, Rapin M, Bonnal B, Fivaz A, Sporrer B, Rosenthal J, Wacker J. Remotely Powered Two-Wire Cooperative Sensors for Bioimpedance Imaging Wearables. Sensors (Basel) 2024; 24:5896. [PMID: 39338640] [PMCID: PMC11435524] [DOI: 10.3390/s24185896] [Received: 07/23/2024] [Revised: 09/03/2024] [Accepted: 09/09/2024] [Indexed: 09/30/2024]
Abstract
Bioimpedance imaging aims to generate a 3D map of the resistivity and permittivity of biological tissue from multiple impedance channels measured with electrodes applied to the skin. When the electrodes are distributed around the body (for example, by delineating a cross section of the chest or a limb), bioimpedance imaging is called electrical impedance tomography (EIT) and results in functional 2D images. Conventional EIT systems rely on individually cabling each electrode to master electronics in a star configuration. This approach works well for rack-mounted equipment; however, the bulkiness of the cabling is unsuitable for a wearable system. Previously presented cooperative sensors solve this cabling problem using active (dry) electrodes connected via a two-wire parallel bus. The bus can be implemented with two unshielded wires or even two conductive textile layers, thus replacing the cumbersome wiring of the conventional star arrangement. Prior research demonstrated cooperative sensors for measuring bioimpedances, successfully realizing a measurement reference signal, sensor synchronization, and data transfer, though still relying on individual batteries to power the sensors. Subsequent research using cooperative sensors for biopotential measurements proposed a method to remove the batteries from the sensors and have the central unit supply power over the two-wire bus. Building on our previous research, this paper presents the application of this method to the measurement of bioimpedances. Two different approaches are discussed, one using discrete, commercially available components and the other an application-specific integrated circuit (ASIC). Initial experimental results reveal that both approaches are feasible, but the ASIC approach offers advantages for medical safety, as well as lower power consumption and a smaller size.
Affiliation(s)
- Olivier Chételat
- Medtech Business Unit, Swiss Center for Electronics and Microtechnology (CSEM), Jaquet-Droz 1, 2002 Neuchâtel, Switzerland
- Michaël Rapin
- Medtech Business Unit, Swiss Center for Electronics and Microtechnology (CSEM), Jaquet-Droz 1, 2002 Neuchâtel, Switzerland
- Benjamin Bonnal
- Medtech Business Unit, Swiss Center for Electronics and Microtechnology (CSEM), Jaquet-Droz 1, 2002 Neuchâtel, Switzerland
- André Fivaz
- Medtech Business Unit, Swiss Center for Electronics and Microtechnology (CSEM), Jaquet-Droz 1, 2002 Neuchâtel, Switzerland
- Benjamin Sporrer
- Integrated & Wireless Business Unit, Swiss Center for Electronics and Microtechnology (CSEM), Technopark, Technoparkstrasse 1, 8005 Zürich, Switzerland
- James Rosenthal
- Medtech Business Unit, Swiss Center for Electronics and Microtechnology (CSEM), Jaquet-Droz 1, 2002 Neuchâtel, Switzerland
- Josias Wacker
- Medtech Business Unit, Swiss Center for Electronics and Microtechnology (CSEM), Jaquet-Droz 1, 2002 Neuchâtel, Switzerland
2. Khan R, Khan SU, Saeed U, Koo IS. Auscultation-Based Pulmonary Disease Detection through Parallel Transformation and Deep Learning. Bioengineering (Basel) 2024; 11:586. [PMID: 38927822] [PMCID: PMC11200393] [DOI: 10.3390/bioengineering11060586] [Received: 05/18/2024] [Revised: 06/05/2024] [Accepted: 06/06/2024] [Indexed: 06/28/2024]
Abstract
Respiratory diseases are among the leading causes of death, with many individuals in a population frequently affected by various types of pulmonary disorders. Early diagnosis and patient monitoring (traditionally involving lung auscultation) are essential for the effective management of respiratory diseases. However, the interpretation of lung sounds is a subjective and labor-intensive process that demands considerable medical expertise and carries a substantial risk of misclassification. To address this problem, we propose a hybrid deep learning technique that incorporates signal processing techniques. Parallel transformation is applied to adventitious respiratory sounds, transforming lung sound signals into two distinct time-frequency scalograms: the continuous wavelet transform and the mel spectrogram. Furthermore, parallel convolutional autoencoders are employed to extract features from the scalograms, and the resulting latent-space features are fused into a hybrid feature pool. Finally, the fused latent-space features are used as input to a long short-term memory (LSTM) model for classifying various types of respiratory diseases. Our work is evaluated using the ICBHI-2017 lung sound dataset. The experimental findings indicate that our proposed method achieves promising predictive performance, with average values for accuracy, sensitivity, specificity, and F1-score of 94.16%, 89.56%, 99.10%, and 89.56%, respectively, for eight-class respiratory diseases; 79.61%, 78.55%, 92.49%, and 78.67%, respectively, for four-class diseases; and 85.61%, 83.44%, 83.44%, and 84.21%, respectively, for binary-class (normal vs. abnormal) lung sounds.
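The "parallel transformation" step described in this abstract can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the same lung-sound frame is mapped to two time-frequency images (a Morlet-wavelet scalogram for the CWT branch and a plain magnitude spectrogram standing in for the mel spectrogram, omitting the mel filterbank for brevity). The window length, hop size, wavelet scales, sampling rate, and the synthetic test tone are all assumptions.

```python
import numpy as np

def stft_mag(x, win=256, hop=128):
    """Magnitude STFT with a Hann window (basis of the spectrogram branch)."""
    w = np.hanning(win)
    frames = [x[i:i + win] * w for i in range(0, len(x) - win + 1, hop)]
    return np.abs(np.fft.rfft(np.array(frames), axis=1)).T  # shape (freq, time)

def morlet_scalogram(x, scales, fs):
    """CWT magnitude via convolution with complex Morlet wavelets (CWT branch)."""
    t = np.arange(-0.05, 0.05, 1.0 / fs)          # short wavelet support
    rows = []
    for s in scales:
        wavelet = np.exp(2j * np.pi * 5 * t / s) * np.exp(-(t / s) ** 2 / 2)
        rows.append(np.abs(np.convolve(x, wavelet, mode="same")))
    return np.array(rows)                          # shape (scale, time)

fs = 4000                                          # assumed sampling rate
t = np.arange(0, 1.0, 1.0 / fs)
x = np.sin(2 * np.pi * 400 * t)                    # stand-in for a lung-sound frame

spec = stft_mag(x)                                 # image 1: spectrogram branch
scal = morlet_scalogram(x, scales=[0.02, 0.01, 0.005], fs=fs)  # image 2: CWT branch
print(spec.shape, scal.shape)
```

In the paper's pipeline, each of these two images would then feed its own convolutional autoencoder before the latent features are fused.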
Affiliation(s)
- Rehan Khan
- Department of Electrical Electronic and Computer Engineering, University of Ulsan, Ulsan 44610, Republic of Korea; (R.K.); (S.U.K.)
- Shafi Ullah Khan
- Department of Electrical Electronic and Computer Engineering, University of Ulsan, Ulsan 44610, Republic of Korea; (R.K.); (S.U.K.)
- Umer Saeed
- Research Centre for Intelligent Healthcare, Coventry University, Coventry CV1 5FB, UK
- In-Soo Koo
- Department of Electrical Electronic and Computer Engineering, University of Ulsan, Ulsan 44610, Republic of Korea; (R.K.); (S.U.K.)
3. Kapetanidis P, Kalioras F, Tsakonas C, Tzamalis P, Kontogiannis G, Karamanidou T, Stavropoulos TG, Nikoletseas S. Respiratory Diseases Diagnosis Using Audio Analysis and Artificial Intelligence: A Systematic Review. Sensors (Basel) 2024; 24:1173. [PMID: 38400330] [PMCID: PMC10893010] [DOI: 10.3390/s24041173] [Received: 12/15/2023] [Revised: 02/03/2024] [Accepted: 02/04/2024] [Indexed: 02/25/2024]
Abstract
Respiratory diseases represent a significant global burden, necessitating efficient diagnostic methods for timely intervention. Digital biomarkers based on audio, acoustics, and sound from the upper and lower respiratory system, as well as the voice, have emerged as valuable indicators of respiratory functionality. Recent advancements in machine learning (ML) algorithms offer promising avenues for the identification and diagnosis of respiratory diseases through the analysis and processing of such audio-based biomarkers. An ever-increasing number of studies employ ML techniques to extract meaningful information from audio biomarkers. Beyond disease identification, these studies explore diverse aspects such as the recognition of cough sounds amidst environmental noise, the analysis of respiratory sounds to detect symptoms like wheezes and crackles, and the analysis of voice/speech for the evaluation of human voice abnormalities. To provide a more in-depth analysis, this review examines 75 relevant audio-analysis studies across three distinct areas defined by respiratory-disease symptoms: (a) cough detection, (b) lower respiratory symptom identification, and (c) diagnostics from the voice and speech. Furthermore, publicly available datasets commonly utilized in this domain are presented. Research trends have been influenced by the COVID-19 pandemic, with a surge in studies on COVID-19 diagnosis, mobile data acquisition, and remote diagnosis systems.
Affiliation(s)
- Panagiotis Kapetanidis
- Computer Engineering and Informatics Department, University of Patras, 26504 Patras, Greece (C.T.); (G.K.); (S.N.)
- Fotios Kalioras
- Computer Engineering and Informatics Department, University of Patras, 26504 Patras, Greece (C.T.); (G.K.); (S.N.)
- Constantinos Tsakonas
- Computer Engineering and Informatics Department, University of Patras, 26504 Patras, Greece (C.T.); (G.K.); (S.N.)
- Pantelis Tzamalis
- Computer Engineering and Informatics Department, University of Patras, 26504 Patras, Greece (C.T.); (G.K.); (S.N.)
- George Kontogiannis
- Computer Engineering and Informatics Department, University of Patras, 26504 Patras, Greece (C.T.); (G.K.); (S.N.)
- Theodora Karamanidou
- Pfizer Center for Digital Innovation, 55535 Thessaloniki, Greece; (T.K.); (T.G.S.)
- Sotiris Nikoletseas
- Computer Engineering and Informatics Department, University of Patras, 26504 Patras, Greece (C.T.); (G.K.); (S.N.)
4. Mang LD, González Martínez FD, Martinez Muñoz D, García Galán S, Cortina R. Classification of Adventitious Sounds Combining Cochleogram and Vision Transformers. Sensors (Basel) 2024; 24:682. [PMID: 38276373] [PMCID: PMC10818433] [DOI: 10.3390/s24020682] [Received: 11/27/2023] [Revised: 01/13/2024] [Accepted: 01/19/2024] [Indexed: 01/27/2024]
Abstract
Early identification of respiratory irregularities is critical for improving lung health and reducing global mortality rates. The analysis of respiratory sounds plays a significant role in characterizing the respiratory system's condition and identifying abnormalities. The main contribution of this study is to investigate the performance of the Vision Transformer (ViT) architecture when fed cochleogram input data; to our knowledge, this is the first time this input-classifier combination has been applied to adventitious sound classification. Although ViT has shown promising results in audio classification tasks by applying self-attention to spectrogram patches, we extend this approach by applying the cochleogram, which captures specific spectro-temporal features of adventitious sounds. The proposed methodology is evaluated on the ICBHI dataset. We compare the classification performance of ViT with other state-of-the-art CNN approaches using the spectrogram, Mel frequency cepstral coefficients, constant-Q transform, and cochleogram as input data. Our results confirm the superior classification performance of combining the cochleogram and ViT, highlighting the potential of ViT for reliable respiratory sound classification. This study contributes to ongoing efforts to develop automatic intelligent techniques that significantly augment the speed and effectiveness of respiratory disease detection, thereby addressing a critical need in the medical field.
Affiliation(s)
- Loredana Daria Mang
- Department of Telecommunication Engineering, University of Jaen, 23700 Linares, Spain; (F.D.G.M.); (D.M.M.); (S.G.G.)
- Damian Martinez Muñoz
- Department of Telecommunication Engineering, University of Jaen, 23700 Linares, Spain; (F.D.G.M.); (D.M.M.); (S.G.G.)
- Sebastián García Galán
- Department of Telecommunication Engineering, University of Jaen, 23700 Linares, Spain; (F.D.G.M.); (D.M.M.); (S.G.G.)
- Raquel Cortina
- Department of Computer Science, University of Oviedo, 33003 Oviedo, Spain
5. Sanchez-Perez JA, Gazi AH, Mabrouk SA, Berkebile JA, Ozmen GC, Kamaleswaran R, Inan OT. Enabling Continuous Breathing-Phase Contextualization via Wearable-Based Impedance Pneumography and Lung Sounds: A Feasibility Study. IEEE J Biomed Health Inform 2023; 27:5734-5744. [PMID: 37751335] [PMCID: PMC10733967] [DOI: 10.1109/jbhi.2023.3319381] [Indexed: 09/28/2023]
Abstract
Chronic respiratory diseases affect millions and are leading causes of death in the US and worldwide. Pulmonary auscultation provides clinicians with critical respiratory health information through the study of Lung Sounds (LS) and the context of the breathing phase and chest location in which they are measured. Existing auscultation technologies, however, do not enable the simultaneous measurement of this context, thereby potentially limiting computerized LS analysis. In this work, LS and Impedance Pneumography (IP) measurements were obtained from 10 healthy volunteers while performing normal and forced-expiratory (FE) breathing maneuvers using our wearable IP and respiratory sounds (WIRS) system. Simultaneous auscultation was performed with the Eko CORE stethoscope (EKO). The breathing-phase context was extracted from the IP signals and used to compute phase-by-phase (inspiratory (I), expiratory (E), and their ratio (I:E)) and breath-by-breath acoustic features. Their individual and added value was then elucidated through machine learning analysis. We found that the phase-contextualized features effectively captured the underlying acoustic differences between deep and FE breaths, yielding a maximum F1 score of 84.1 ± 11.4%, with the phase-by-phase features as the strongest contributors to this performance. Further, the individual phase-contextualized models outperformed the traditional breath-by-breath models in all cases. The validity of the results was demonstrated for the LS obtained with WIRS, EKO, and their combination. These results suggest that incorporating breathing-phase context may enhance computerized LS analysis. Hence, multimodal sensing systems that enable this, such as WIRS, have the potential to advance the clinical utility of LS beyond traditional manual auscultation and improve patient care.
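As a toy illustration of the breathing-phase contextualization described above, the sketch below computes per-breath inspiratory (I) and expiratory (E) durations and the I:E ratio from phase-onset times. The onset values, the function name, and the simplification of segmenting breaths purely by alternating onsets are all assumptions for illustration, not the authors' WIRS pipeline.

```python
def phase_durations(insp_onsets, exp_onsets):
    """Per-breath I and E durations (seconds) from alternating onset times.

    Assumes each breath spans [insp_onset, exp_onset, next_insp_onset],
    i.e., inspiration while the impedance trace rises, expiration while it falls.
    """
    breaths = []
    for i, e, nxt in zip(insp_onsets, exp_onsets, insp_onsets[1:]):
        insp = e - i            # inspiratory duration
        exp_ = nxt - e          # expiratory duration
        breaths.append((insp, exp_, insp / exp_))  # (I, E, I:E)
    return breaths

# Hypothetical onset times for three breaths (seconds):
insp = [0.0, 4.0, 8.0]
exp_ = [1.5, 5.4, 9.3]
for i_dur, e_dur, ie in phase_durations(insp, exp_):
    print(f"I={i_dur:.1f}s  E={e_dur:.1f}s  I:E={ie:.2f}")
```

Features computed separately over the I and E segments of each breath are what the abstract calls phase-by-phase features, as opposed to features pooled over the whole breath.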
6. Im S, Kim T, Min C, Kang S, Roh Y, Kim C, Kim M, Kim SH, Shim K, Koh JS, Han S, Lee J, Kim D, Kang D, Seo S. Real-time counting of wheezing events from lung sounds using deep learning algorithms: Implications for disease prediction and early intervention. PLoS One 2023; 18:e0294447. [PMID: 37983213] [PMCID: PMC10659186] [DOI: 10.1371/journal.pone.0294447] [Received: 05/17/2023] [Accepted: 10/23/2023] [Indexed: 11/22/2023]
Abstract
This pioneering study aims to revolutionize self-symptom management and telemedicine-based remote monitoring through the development of a real-time wheeze counting algorithm. Leveraging a novel approach that includes the detailed labeling of one breathing cycle into three types (break, normal, and wheeze), this study not only identifies abnormal sounds within each breath but also captures comprehensive data on their location, duration, and relationships within entire respiratory cycles, including atypical patterns. The strategy combines a one-dimensional convolutional neural network (1D-CNN) and a long short-term memory (LSTM) network model, enabling real-time analysis of respiratory sounds. Notably, it stands out for its capacity to handle continuous data, distinguishing it from conventional lung sound classification algorithms. The study utilizes a substantial dataset consisting of 535 respiration cycles from diverse sources, including the Child Sim Lung Sound Simulator, the EMTprep Open-Source Database, clinical patient records, and the ICBHI 2017 Challenge Database. The algorithm achieves a classification accuracy of 90%, identifying each breath cycle while simultaneously detecting abnormal sounds, which enables real-time wheeze counting across all respirations. This wheeze counter holds promise for research on predicting lung diseases from long-term breathing patterns and is applicable in clinical and non-clinical settings for on-the-go detection and remote intervention of exacerbated respiratory symptoms.
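The counting logic implied by the three-way labeling above can be illustrated with a minimal run-length counter over a stream of per-frame labels. The label stream below is fabricated for illustration; in the paper, such labels would be emitted by the 1D-CNN + LSTM model, not hand-written.

```python
def count_wheezes(labels):
    """Count contiguous wheeze segments and completed breath cycles in a label stream."""
    wheeze_events = 0
    breath_cycles = 0
    prev = "break"
    for lab in labels:
        if lab == "wheeze" and prev != "wheeze":
            wheeze_events += 1        # a new contiguous wheeze segment begins
        if lab == "break" and prev != "break":
            breath_cycles += 1        # a break frame closes one breathing cycle
        prev = lab
    return wheeze_events, breath_cycles

# Fabricated per-frame labels for three breath cycles:
stream = ["normal", "wheeze", "wheeze", "break",
          "normal", "normal", "break",
          "wheeze", "normal", "wheeze", "break"]
print(count_wheezes(stream))  # -> (3, 3)
```

Because the counter keeps only the previous label as state, it can run incrementally on a continuous stream, which is the property the abstract emphasizes for real-time use.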
Affiliation(s)
- Sunghoon Im
- Department of Mechanical Engineering, Ajou University, Suwon-si, Gyeonggi-do, Republic of Korea
- Taewi Kim
- Department of Mechanical Engineering, Ajou University, Suwon-si, Gyeonggi-do, Republic of Korea
- Sanghun Kang
- Department of Mechanical Engineering, Ajou University, Suwon-si, Gyeonggi-do, Republic of Korea
- Yeonwook Roh
- Department of Mechanical Engineering, Ajou University, Suwon-si, Gyeonggi-do, Republic of Korea
- Changhwan Kim
- Department of Mechanical Engineering, Ajou University, Suwon-si, Gyeonggi-do, Republic of Korea
- Minho Kim
- Department of Mechanical Engineering, Ajou University, Suwon-si, Gyeonggi-do, Republic of Korea
- Seung Hyun Kim
- Department of Medical Humanities, Korea University College of Medicine, Seoul, Republic of Korea
- KyungMin Shim
- Industry-University Cooperation Foundation, Seogyeong University, Seoul, Republic of Korea
- Je-sung Koh
- Department of Mechanical Engineering, Ajou University, Suwon-si, Gyeonggi-do, Republic of Korea
- Seungyong Han
- Department of Mechanical Engineering, Ajou University, Suwon-si, Gyeonggi-do, Republic of Korea
- JaeWang Lee
- Department of Biomedical Laboratory Science, College of Health Science, Eulji University, Seongnam-si, Gyeonggi-do, Republic of Korea
- Dohyeong Kim
- University of Texas at Dallas, Richardson, TX, United States of America
- Daeshik Kang
- Department of Mechanical Engineering, Ajou University, Suwon-si, Gyeonggi-do, Republic of Korea
- SungChul Seo
- Department of Nano-Chemical, Biological and Environmental Engineering, Seogyeong University, Seoul, Republic of Korea
7. Garcia-Mendez JP, Lal A, Herasevich S, Tekin A, Pinevich Y, Lipatov K, Wang HY, Qamar S, Ayala IN, Khapov I, Gerberi DJ, Diedrich D, Pickering BW, Herasevich V. Machine Learning for Automated Classification of Abnormal Lung Sounds Obtained from Public Databases: A Systematic Review. Bioengineering (Basel) 2023; 10:1155. [PMID: 37892885] [PMCID: PMC10604310] [DOI: 10.3390/bioengineering10101155] [Received: 08/09/2023] [Revised: 09/15/2023] [Accepted: 09/26/2023] [Indexed: 10/29/2023]
Abstract
Pulmonary auscultation is essential for detecting abnormal lung sounds during physical assessments, but its reliability depends on the operator. Machine learning (ML) models offer an alternative by automatically classifying lung sounds. ML models require substantial data, and public databases aim to address this limitation. This systematic review compares the characteristics, diagnostic accuracy, concerns, and data sources of existing models in the literature. Papers indexed in five major databases and published between 1990 and 2022 were assessed. Quality assessment was accomplished with a modified QUADAS-2 tool. The review encompassed 62 studies utilizing ML models and public-access databases for lung sound classification. Artificial neural networks (ANN) and support vector machines (SVM) were frequently employed as ML classifiers. Accuracy ranged from 49.43% to 100% for discriminating abnormal sound types and from 69.40% to 99.62% for disease classification. Seventeen public databases were identified, with the ICBHI 2017 database being the most used (66%). The majority of studies exhibited a high risk of bias and concerns related to patient selection and reference standards. In summary, ML models can effectively classify abnormal lung sounds using publicly available data sources. Nevertheless, inconsistent reporting and methodologies pose limitations to advancing the field, and public databases should therefore adhere to standardized recording and labeling procedures.
Affiliation(s)
- Juan P. Garcia-Mendez
- Department of Anesthesiology and Perioperative Medicine, Division of Critical Care, Mayo Clinic, Rochester, MN 55905, USA (Y.P.); (H.-Y.W.); (I.K.); (V.H.)
- Amos Lal
- Department of Medicine, Division of Pulmonary and Critical Care Medicine, Mayo Clinic, Rochester, MN 55905, USA
- Svetlana Herasevich
- Department of Anesthesiology and Perioperative Medicine, Division of Critical Care, Mayo Clinic, Rochester, MN 55905, USA (Y.P.); (H.-Y.W.); (I.K.); (V.H.)
- Aysun Tekin
- Department of Anesthesiology and Perioperative Medicine, Division of Critical Care, Mayo Clinic, Rochester, MN 55905, USA (Y.P.); (H.-Y.W.); (I.K.); (V.H.)
- Yuliya Pinevich
- Department of Anesthesiology and Perioperative Medicine, Division of Critical Care, Mayo Clinic, Rochester, MN 55905, USA (Y.P.); (H.-Y.W.); (I.K.); (V.H.)
- Department of Cardiac Anesthesiology and Intensive Care, Republican Clinical Medical Center, 223052 Minsk, Belarus
- Kirill Lipatov
- Division of Pulmonary Medicine, Mayo Clinic Health Systems, Essentia Health, Duluth, MN 55805, USA
- Hsin-Yi Wang
- Department of Anesthesiology and Perioperative Medicine, Division of Critical Care, Mayo Clinic, Rochester, MN 55905, USA (Y.P.); (H.-Y.W.); (I.K.); (V.H.)
- Department of Anesthesiology, Taipei Veterans General Hospital, National Yang Ming Chiao Tung University, Taipei 11217, Taiwan
- Department of Biomedical Sciences and Engineering, National Central University, Taoyuan 320317, Taiwan
- Shahraz Qamar
- Department of Anesthesiology and Perioperative Medicine, Division of Critical Care, Mayo Clinic, Rochester, MN 55905, USA (Y.P.); (H.-Y.W.); (I.K.); (V.H.)
- Ivan N. Ayala
- Department of Anesthesiology and Perioperative Medicine, Division of Critical Care, Mayo Clinic, Rochester, MN 55905, USA (Y.P.); (H.-Y.W.); (I.K.); (V.H.)
- Ivan Khapov
- Department of Anesthesiology and Perioperative Medicine, Division of Critical Care, Mayo Clinic, Rochester, MN 55905, USA (Y.P.); (H.-Y.W.); (I.K.); (V.H.)
- Daniel Diedrich
- Department of Anesthesiology and Perioperative Medicine, Division of Critical Care, Mayo Clinic, Rochester, MN 55905, USA (Y.P.); (H.-Y.W.); (I.K.); (V.H.)
- Brian W. Pickering
- Department of Anesthesiology and Perioperative Medicine, Division of Critical Care, Mayo Clinic, Rochester, MN 55905, USA (Y.P.); (H.-Y.W.); (I.K.); (V.H.)
- Vitaly Herasevich
- Department of Anesthesiology and Perioperative Medicine, Division of Critical Care, Mayo Clinic, Rochester, MN 55905, USA (Y.P.); (H.-Y.W.); (I.K.); (V.H.)
8. Pessoa D, Rocha BM, Strodthoff C, Gomes M, Rodrigues G, Petmezas G, Cheimariotis GA, Kilintzis V, Kaimakamis E, Maglaveras N, Marques A, Frerichs I, Carvalho PD, Paiva RP. BRACETS: Bimodal repository of auscultation coupled with electrical impedance thoracic signals. Comput Methods Programs Biomed 2023; 240:107720. [PMID: 37544061] [DOI: 10.1016/j.cmpb.2023.107720] [Received: 03/21/2023] [Revised: 06/27/2023] [Accepted: 07/10/2023] [Indexed: 08/08/2023]
Abstract
BACKGROUND AND OBJECTIVE: Respiratory diseases are among the most significant causes of morbidity and mortality worldwide, causing substantial strain on society and health systems. Over the last few decades, there has been increasing interest in the automatic analysis of respiratory sounds and electrical impedance tomography (EIT). Nevertheless, no publicly available databases with both respiratory sound and EIT data are available.
METHODS: In this work, we have assembled the first open-access bimodal database focusing on the differential diagnosis of respiratory diseases (BRACETS: Bimodal Repository of Auscultation Coupled with Electrical Impedance Thoracic Signals). It includes simultaneous recordings of single- and multi-channel respiratory sounds and EIT. Furthermore, we have proposed several machine learning-based baseline systems for automatically classifying respiratory diseases in six distinct evaluation tasks using respiratory sound and EIT (A1, A2, A3, B1, B2, B3). These tasks included classifying respiratory diseases at sample and subject levels. The performance of the classification models was evaluated using a 5-fold cross-validation scheme (with subject isolation between folds).
RESULTS: The resulting database consists of 1097 respiratory sounds and 795 EIT recordings acquired from 78 adult subjects in two countries (Portugal and Greece). In the task of automatically classifying respiratory diseases, the baseline classification models achieved the following average balanced accuracy: Task A1 - 77.9±13.1%; Task A2 - 51.6±9.7%; Task A3 - 38.6±13.1%; Task B1 - 90.0±22.4%; Task B2 - 61.4±11.8%; Task B3 - 50.8±10.6%.
CONCLUSION: The creation of this database and its public release will aid the research community in developing automated methodologies to assess and monitor respiratory function, and it might serve as a benchmark in the field of digital medicine for managing respiratory diseases. Moreover, it could pave the way for creating multi-modal robust approaches for that same purpose.
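The subject-isolated cross-validation mentioned in the METHODS can be sketched with a simple grouped fold assignment: recordings are bucketed by subject ID so no subject appears in both a training and a test fold. The round-robin assignment rule and the subject IDs below are illustrative assumptions, not the authors' exact procedure (scikit-learn's GroupKFold implements the same idea).

```python
def subject_folds(subject_ids, k=5):
    """Assign each recording index to the fold of its subject (round-robin over subjects)."""
    subjects = sorted(set(subject_ids))
    fold_of = {s: i % k for i, s in enumerate(subjects)}
    folds = [[] for _ in range(k)]
    for idx, s in enumerate(subject_ids):
        folds[fold_of[s]].append(idx)   # all of a subject's recordings share one fold
    return folds

# 12 hypothetical recordings from 6 subjects, several recordings per subject:
recs = ["s1", "s1", "s2", "s3", "s3", "s3", "s4", "s5", "s5", "s6", "s6", "s6"]
folds = subject_folds(recs, k=5)
for f in folds:
    print(sorted({recs[i] for i in f}))  # subjects in each fold; none straddles two folds
```

Without this grouping, recordings from the same subject could leak between training and test folds and inflate the reported balanced accuracy.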
Affiliation(s)
- Diogo Pessoa
- University of Coimbra Centre for Informatics and Systems of the University of Coimbra, Department of Informatics Engineering, 3030-290 Coimbra, Portugal.
- Bruno Machado Rocha
- University of Coimbra Centre for Informatics and Systems of the University of Coimbra, Department of Informatics Engineering, 3030-290 Coimbra, Portugal
- Claas Strodthoff
- Department of Anesthesiology and Intensive Care Medicine, University Medical Center Schleswig-Holstein Campus Kiel, Kiel 24105, Schleswig-Holstein, Germany
- Maria Gomes
- Lab3R - Respiratory Research and Rehabilitation Laboratory, School of Health Sciences (ESSUA), University of Aveiro, 3810-193 Aveiro, Portugal
- Guilherme Rodrigues
- Lab3R - Respiratory Research and Rehabilitation Laboratory, School of Health Sciences (ESSUA), University of Aveiro, 3810-193 Aveiro, Portugal
- Georgios Petmezas
- 2nd Department of Obstetrics and Gynaecology, The Medical School, 54124 Thessaloniki, Greece
- Vassilis Kilintzis
- 2nd Department of Obstetrics and Gynaecology, The Medical School, 54124 Thessaloniki, Greece
- Evangelos Kaimakamis
- 1st Intensive Care Unit, "G. Papanikolaou" General Hospital of Thessaloniki, 57010 Pilea Hortiatis, Greece
- Nicos Maglaveras
- 2nd Department of Obstetrics and Gynaecology, The Medical School, 54124 Thessaloniki, Greece
- Alda Marques
- Lab3R - Respiratory Research and Rehabilitation Laboratory, School of Health Sciences (ESSUA), University of Aveiro, 3810-193 Aveiro, Portugal; Institute of Biomedicine (iBiMED), University of Aveiro, 3810-193 Aveiro, Portugal
- Inéz Frerichs
- Department of Anesthesiology and Intensive Care Medicine, University Medical Center Schleswig-Holstein Campus Kiel, Kiel 24105, Schleswig-Holstein, Germany
- Paulo de Carvalho
- University of Coimbra Centre for Informatics and Systems of the University of Coimbra, Department of Informatics Engineering, 3030-290 Coimbra, Portugal
- Rui Pedro Paiva
- University of Coimbra Centre for Informatics and Systems of the University of Coimbra, Department of Informatics Engineering, 3030-290 Coimbra, Portugal
9. Huang DM, Huang J, Qiao K, Zhong NS, Lu HZ, Wang WJ. Deep learning-based lung sound analysis for intelligent stethoscope. Mil Med Res 2023; 10:44. [PMID: 37749643] [PMCID: PMC10521503] [DOI: 10.1186/s40779-023-00479-3] [Received: 03/29/2023] [Accepted: 09/05/2023] [Indexed: 09/27/2023]
Abstract
Auscultation is crucial for the diagnosis of respiratory system diseases. However, traditional stethoscopes have inherent limitations, such as inter-listener variability and subjectivity, and they cannot record respiratory sounds for offline/retrospective diagnosis or remote prescriptions in telemedicine. The emergence of digital stethoscopes has overcome these limitations by allowing physicians to store and share respiratory sounds for consultation and education. On this basis, machine learning, particularly deep learning, enables the fully automatic analysis of lung sounds that may pave the way for intelligent stethoscopes. This review thus aims to provide a comprehensive overview of deep learning algorithms used for lung sound analysis to emphasize the significance of artificial intelligence (AI) in this field. We focus on each component of deep learning-based lung sound analysis systems, including the task categories, public datasets, denoising methods, and, most importantly, existing deep learning methods, i.e., the state-of-the-art approaches that convert lung sounds into two-dimensional (2D) spectrograms and use convolutional neural networks for the end-to-end recognition of respiratory diseases or abnormal lung sounds. Additionally, this review highlights current challenges in this field, including the variety of devices, noise sensitivity, and the poor interpretability of deep models. To address the poor reproducibility and the variety of deep learning methods in this field, this review also provides a scalable and flexible open-source framework that aims to standardize the algorithmic workflow and provide a solid basis for replication and future extension: https://github.com/contactless-healthcare/Deep-Learning-for-Lung-Sound-Analysis
Affiliation(s)
- Dong-Min Huang
- Department of Biomedical Engineering, Southern University of Science and Technology, Shenzhen, 518055, Guangdong, China
- Jia Huang
- The Third People's Hospital of Shenzhen, Shenzhen, 518112, Guangdong, China
- Kun Qiao
- The Third People's Hospital of Shenzhen, Shenzhen, 518112, Guangdong, China
- Nan-Shan Zhong
- Guangzhou Institute of Respiratory Health, China State Key Laboratory of Respiratory Disease, National Clinical Research Center for Respiratory Disease, The First Affiliated Hospital of Guangzhou Medical University, Guangzhou, 510120, China
- Hong-Zhou Lu
- The Third People's Hospital of Shenzhen, Shenzhen, 518112, Guangdong, China
- Wen-Jin Wang
- Department of Biomedical Engineering, Southern University of Science and Technology, Shenzhen, 518055, Guangdong, China
10. Kala A, McCollum ED, Elhilali M. Reference free auscultation quality metric and its trends. Biomed Signal Process Control 2023; 85:104852. [PMID: 38274002] [PMCID: PMC10809975] [DOI: 10.1016/j.bspc.2023.104852] [Indexed: 03/29/2023]
Abstract
Stethoscopes are used ubiquitously in clinical settings to 'listen' to lung sounds. The use of these systems in a variety of healthcare environments (hospitals, urgent care rooms, private offices, community sites, mobile clinics, etc.) presents a range of challenges in terms of ambient noise and distortions that prevent lung signals from being heard clearly or processed accurately by auscultation devices. With advances in technology, computerized techniques have been developed to automate analysis or provide access to a digital rendering of lung sounds. However, most approaches are developed and tested in controlled environments and do not reflect the real-world conditions in which auscultation signals are typically acquired. Without a priori access to a recording of the ambient noise (for signal-to-noise estimation) or a reference signal that reflects the true undistorted lung sound, it is difficult to evaluate the quality of the lung signal and its potential clinical interpretability. The current study proposes an objective, reference-free Auscultation Quality Metric (AQM) which incorporates low-level signal attributes with high-level representational embeddings mapped to a nonlinear quality space to provide an independent evaluation of auscultation quality. This metric is carefully designed to judge the signal solely on its integrity relative to external distortions and masking effects, without mistaking an adventitious breathing pattern for low-quality auscultation. The current study explores the robustness of the proposed AQM method across multiple clinical categorizations and different distortion types. It also evaluates the temporal sensitivity of this approach and its translational impact for deployment in digital auscultation devices.
Affiliation(s)
- Annapurna Kala
- Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, USA
- Eric D. McCollum
- Global Program of Pediatric Respiratory Sciences, Eudowood Division of Pediatric Respiratory Sciences, Department of Pediatrics, Johns Hopkins School of Medicine, Baltimore, USA
- Mounya Elhilali
- Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, USA

11
Kraman SS, Pasterkamp H, Wodicka GR. Smart Devices Are Poised to Revolutionize the Usefulness of Respiratory Sounds. Chest 2023; 163:1519-1528. [PMID: 36706908 PMCID: PMC10925548 DOI: 10.1016/j.chest.2023.01.024] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/04/2022] [Revised: 01/10/2023] [Accepted: 01/17/2023] [Indexed: 01/26/2023] Open
Abstract
The association between breathing sounds and respiratory health or disease has been exceptionally useful in the practice of medicine since the advent of the stethoscope. Remote patient monitoring technology and artificial intelligence offer the potential to develop practical means of assessing respiratory function or dysfunction through continuous assessment of breathing sounds when patients are at home, at work, or even asleep. Automated reports such as cough counts or the percentage of the breathing cycles containing wheezes can be delivered to a practitioner via secure electronic means or returned to the clinical office at the first opportunity. This has not previously been possible. The four respiratory sounds that most lend themselves to this technology are wheezes, to detect breakthrough asthma at night and even occupational asthma when a patient is at work; snoring as an indicator of OSA or adequacy of CPAP settings; cough in which long-term recording can objectively assess treatment adequacy; and crackles, which, although subtle and often overlooked, can contain important clinical information when appearing in a home recording. In recent years, a flurry of publications in the engineering literature described construction, usage, and testing outcomes of such devices. Little of this has appeared in the medical literature. The potential value of this technology for pulmonary medicine is compelling. We expect that these tiny, smart devices soon will allow us to address clinical questions that occur away from the clinic.
Affiliation(s)
- Steve S Kraman
- Department of Internal Medicine, Division of Pulmonary, Critical Care and Sleep Medicine, University of Kentucky, Lexington, KY.
- Hans Pasterkamp
- Department of Pediatrics and Child Health, Max Rady College of Medicine, University of Manitoba, Winnipeg, MB, Canada
- George R Wodicka
- Weldon School of Biomedical Engineering, Purdue University, West Lafayette, IN

12
Mang L, Canadas-Quesada F, Carabias-Orti J, Combarro E, Ranilla J. Cochleogram-based adventitious sounds classification using convolutional neural networks. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2022.104555] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/06/2023]
13
Xia T, Han J, Mascolo C. Exploring machine learning for audio-based respiratory condition screening: A concise review of databases, methods, and open issues. Exp Biol Med (Maywood) 2022; 247:2053-2061. [PMID: 35974706 PMCID: PMC9791302 DOI: 10.1177/15353702221115428] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/30/2022] Open
Abstract
Auscultation plays an important role in the clinic, and the research community has been exploring machine learning (ML) to enable remote and automatic auscultation for respiratory condition screening via sounds. To give an overview of this field, this narrative review describes publicly available audio databases that can be used for experiments, illustrates the ML methods proposed to date, and flags some under-considered issues that still need attention. Compared to existing surveys on the topic, we cover the latest literature, especially the audio-based COVID-19 detection studies that have gained extensive attention in the last two years. This work can help to facilitate the application of artificial intelligence in the respiratory auscultation field.
14
Zhang Q, Zhang J, Yuan J, Huang H, Zhang Y, Zhang B, Lv G, Lin S, Wang N, Liu X, Tang M, Wang Y, Ma H, Liu L, Yuan S, Zhou H, Zhao J, Li Y, Yin Y, Zhao L, Wang G, Lian Y. SPRSound: Open-Source SJTU Paediatric Respiratory Sound Database. IEEE TRANSACTIONS ON BIOMEDICAL CIRCUITS AND SYSTEMS 2022; 16:867-881. [PMID: 36070274 DOI: 10.1109/tbcas.2022.3204910] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/15/2023]
Abstract
Auscultation of respiratory sounds has proven advantageous in early respiratory diagnosis. Various methods have been proposed to perform automatic respiratory sound analysis and thereby reduce subjective diagnosis and physicians' workload. However, these methods rely heavily on the quality of the respiratory sound database. In this work, we developed the first open-access paediatric respiratory sound database, SPRSound. The database consists of 2,683 records and 9,089 respiratory sound events from 292 participants. Accurate labels are essential for good performance in adventitious respiratory sound classification. A custom-made sound label annotation software (SoundAnn) was developed to perform sound editing, sound annotation, and quality assurance evaluation. A team of 11 experienced paediatric physicians was involved in the entire process to establish a gold-standard reference for the dataset. To verify the robustness and accuracy of the classification model, we investigated the effects of different feature extraction methods and machine learning classifiers on the classification performance on our dataset. We achieved scores of 75.22%, 61.57%, 56.71%, and 37.84% for the four different classification challenges at the event level and record level.
15
Neili Z, Sundaraj K. A comparative study of the spectrogram, scalogram, melspectrogram and gammatonegram time-frequency representations for the classification of lung sounds using the ICBHI database based on CNNs. BIOMED ENG-BIOMED TE 2022; 67:367-390. [PMID: 35926850 DOI: 10.1515/bmt-2022-0180] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/02/2022] [Accepted: 06/21/2022] [Indexed: 11/15/2022]
Abstract
In lung sound classification using deep learning, many studies have considered the use of short-time Fourier transform (STFT) as the most commonly used 2D representation of the input data. Consequently, STFT has been widely used as an analytical tool, but other versions of the representation have also been developed. This study aims to evaluate and compare the performance of the spectrogram, scalogram, melspectrogram and gammatonegram representations, and provide comparative information to users regarding the suitability of these time-frequency (TF) techniques in lung sound classification. Lung sound signals used in this study were obtained from the ICBHI 2017 respiratory sound database. These lung sound recordings were converted into images of spectrogram, scalogram, melspectrogram and gammatonegram TF representations respectively. The four types of images were fed separately into the VGG16, ResNet-50 and AlexNet deep-learning architectures. Network performances were analyzed and compared based on accuracy, precision, recall and F1-score. The results of the analysis on the performance of the four representations using these three commonly used CNN deep-learning networks indicate that the generated gammatonegram and scalogram TF images coupled with ResNet-50 achieved maximum classification accuracies.
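The time-frequency images compared in this entry can be approximated in a few lines of NumPy. The sketch below (illustrative parameters and a synthetic tone, not the study's data or settings) builds a magnitude spectrogram by framed FFT and maps it through a triangular mel filterbank to obtain a melspectrogram-style image:

```python
import numpy as np

def stft_spectrogram(x, n_fft=256, hop=128):
    """Magnitude spectrogram via framed, windowed FFT (the STFT 'spectrogram')."""
    frames = [x[i:i + n_fft] * np.hanning(n_fft)
              for i in range(0, len(x) - n_fft + 1, hop)]
    return np.abs(np.fft.rfft(np.asarray(frames), axis=1)).T  # (freq bins, time frames)

def mel_filterbank(n_mels, n_fft, sr):
    """Triangular mel filters; applying them to a spectrogram gives a melspectrogram."""
    def hz_to_mel(f): return 2595.0 * np.log10(1.0 + f / 700.0)
    def mel_to_hz(m): return 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    mels = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2), n_mels + 2)
    pts = np.floor((n_fft // 2) * mel_to_hz(mels) / (sr / 2)).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(n_mels):
        a, b, c = pts[i], pts[i + 1], pts[i + 2]
        if b > a: fb[i, a:b] = (np.arange(a, b) - a) / (b - a)   # rising slope
        if c > b: fb[i, b:c] = (c - np.arange(b, c)) / (c - b)   # falling slope
    return fb

sr = 4000
t = np.arange(sr) / sr
x = np.sin(2 * np.pi * 400 * t)         # synthetic stand-in for a lung sound
S = stft_spectrogram(x)                 # spectrogram, shape (129, 30)
M = mel_filterbank(40, 256, sr) @ S     # melspectrogram-style image, shape (40, 30)
```

The scalogram and gammatonegram used in the study differ only in the analysis kernel (wavelets and gammatone filters, respectively) applied in place of the fixed-length FFT frames.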
Affiliation(s)
- Zakaria Neili
- Electronics Department, University of Badji Mokhtar Annaba, Annaba, Algeria
- Kenneth Sundaraj
- Faculty of Electronics and Computer Engineering, Universiti Teknikal Malaysia Melaka, Melaka, Malaysia

16
A Progressively Expanded Database for Automated Lung Sound Analysis: An Update. APPLIED SCIENCES-BASEL 2022. [DOI: 10.3390/app12157623] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/01/2023]
Abstract
We previously established an open-access lung sound database, HF_Lung_V1, and developed deep learning models for inhalation, exhalation, continuous adventitious sound (CAS), and discontinuous adventitious sound (DAS) detection. The amount of data used for training contributes to model accuracy. In this study, we collected larger quantities of data to further improve model performance and explored issues of noisy labels and overlapping sounds. HF_Lung_V1 was expanded to HF_Lung_V2 with a 1.43× increase in the number of audio files. Convolutional neural network–bidirectional gated recurrent unit network models were trained separately using the HF_Lung_V1 (V1_Train) and HF_Lung_V2 (V2_Train) training sets. These were tested using the HF_Lung_V1 (V1_Test) and HF_Lung_V2 (V2_Test) test sets, respectively. Segment and event detection performance was evaluated. Label quality was assessed. Overlap ratios were computed between inhalation, exhalation, CAS, and DAS labels. The model trained using V2_Train exhibited improved performance in inhalation, exhalation, CAS, and DAS detection on both V1_Test and V2_Test. Poor CAS detection was attributed to the quality of CAS labels. DAS detection was strongly influenced by the overlapping of DAS with inhalation and exhalation. In conclusion, collecting greater quantities of lung sound data is vital for developing more accurate lung sound analysis models.
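The overlap ratios the study computes between label types reduce to interval intersection. The routine below illustrates the idea; the label intervals are hypothetical, not taken from HF_Lung_V2:

```python
def overlap_ratio(a, b):
    """Fraction of the total duration of intervals in a (seconds) that is
    also covered by intervals in b (each list assumed internally disjoint)."""
    inter = 0.0
    for s1, e1 in a:
        for s2, e2 in b:
            inter += max(0.0, min(e1, e2) - max(s1, s2))  # pairwise intersection
    return inter / sum(e - s for s, e in a)

# Hypothetical label intervals for one 15-s recording
das = [(1.0, 2.0), (5.0, 6.0)]          # discontinuous adventitious sounds
inhalation = [(0.5, 1.5), (4.8, 5.2)]   # inhalation phases
r = overlap_ratio(das, inhalation)      # 0.7 s of overlap over 2.0 s of DAS
```

A high ratio indicates that one sound type is usually superimposed on another, which is the study's explanation for the difficulty of DAS detection.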
17
Raj V, Swapna M, Sankararaman S. Bioacoustic signal analysis through complex network features. Comput Biol Med 2022; 145:105491. [DOI: 10.1016/j.compbiomed.2022.105491] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/07/2022] [Revised: 03/30/2022] [Accepted: 04/01/2022] [Indexed: 11/03/2022]
18
Pancaldi F, Pezzuto GS, Cassone G, Morelli M, Manfredi A, D'Arienzo M, Vacchi C, Savorani F, Vinci G, Barsotti F, Mascia MT, Salvarani C, Sebastiani M. VECTOR: An algorithm for the detection of COVID-19 pneumonia from velcro-like lung sounds. Comput Biol Med 2022; 142:105220. [PMID: 35030495 PMCID: PMC8734059 DOI: 10.1016/j.compbiomed.2022.105220] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/05/2021] [Revised: 01/03/2022] [Accepted: 01/03/2022] [Indexed: 11/25/2022]
Abstract
The coronavirus disease 2019 (COVID-19) has severely stressed the healthcare systems of all countries in the world. One of the main issues physicians must tackle is the monitoring of pauci-symptomatic COVID-19 patients at home and, more generally, of anyone whose access to the hospital might or should be severely restricted. Indeed, the early detection of interstitial pneumonia is particularly relevant for the survival of these patients. Recent studies on rheumatoid arthritis and interstitial lung diseases have shown that pathological pulmonary sounds can be automatically detected by suitably developed algorithms. The aim of this preliminary work is to show that the pathological lung sounds observed in patients affected by COVID-19 pneumonia can be automatically detected by the same class of algorithms. In particular, the software VECTOR, originally devised for interstitial lung diseases, was employed to process the lung sounds of 28 patients recorded in the emergency room of the university hospital of Modena (Italy) during December 2020. The performance of VECTOR was compared with imaging-based diagnostic techniques, namely lung ultrasound, chest X-ray, and high-resolution computed tomography, which were assumed as ground truth. The results showed a surprising overall diagnostic accuracy of 75%, even though the emergency room staff had not been specifically trained in lung auscultation and the parameters of the software had not been optimized to detect interstitial pneumonia. These results pave the way for a new approach to monitoring pulmonary involvement in pauci-symptomatic COVID-19 patients.
Affiliation(s)
- Fabrizio Pancaldi
- University of Modena and Reggio Emilia, Department of Sciences and Methods for Engineering, via Amendola 2, 42122, Reggio Emilia, Italy; University of Modena and Reggio Emilia, Artificial Intelligence Research and Innovation Center (AIRI), Via Pietro Vivarelli 10, 41125, Modena, Italy.
- Giuseppe Stefano Pezzuto
- Emergency Room and Emergency Medicine, Azienda Policlinico di Modena, via del Pozzo 71, 42124, Modena, Italy.
- Giulia Cassone
- University of Modena and Reggio Emilia, Department of Surgery, Medicine, Dentistry and Morphological Sciences with Transplant Surgery, Oncology and Regenerative Medicine Relevance, via del Pozzo 71, 42124, Modena, Italy; Rheumatology Unit, Azienda Policlinico di Modena, via del Pozzo 71, 42124, Modena, Italy.
- Marianna Morelli
- Emergency Room and Emergency Medicine, Azienda Policlinico di Modena, via del Pozzo 71, 42124, Modena, Italy.
- Andreina Manfredi
- University of Modena and Reggio Emilia, Department of Surgery, Medicine, Dentistry and Morphological Sciences with Transplant Surgery, Oncology and Regenerative Medicine Relevance, via del Pozzo 71, 42124, Modena, Italy; Rheumatology Unit, Azienda Policlinico di Modena, via del Pozzo 71, 42124, Modena, Italy.
- Matteo D'Arienzo
- Emergency Room and Emergency Medicine, Azienda Policlinico di Modena, via del Pozzo 71, 42124, Modena, Italy.
- Caterina Vacchi
- Rheumatology Unit, Azienda Policlinico di Modena, via del Pozzo 71, 42124, Modena, Italy.
- Fulvio Savorani
- Emergency Room and Emergency Medicine, Azienda Policlinico di Modena, via del Pozzo 71, 42124, Modena, Italy.
- Giovanni Vinci
- Emergency Room and Emergency Medicine, Azienda Policlinico di Modena, via del Pozzo 71, 42124, Modena, Italy.
- Francesco Barsotti
- Emergency Room and Emergency Medicine, Azienda Policlinico di Modena, via del Pozzo 71, 42124, Modena, Italy.
- Maria Teresa Mascia
- University of Modena and Reggio Emilia, Department of Surgery, Medicine, Dentistry and Morphological Sciences with Transplant Surgery, Oncology and Regenerative Medicine Relevance, via del Pozzo 71, 42124, Modena, Italy; Rheumatology Unit, Azienda Policlinico di Modena, via del Pozzo 71, 42124, Modena, Italy.
- Carlo Salvarani
- University of Modena and Reggio Emilia, Department of Surgery, Medicine, Dentistry and Morphological Sciences with Transplant Surgery, Oncology and Regenerative Medicine Relevance, via del Pozzo 71, 42124, Modena, Italy; Rheumatology Unit, Azienda Policlinico di Modena, via del Pozzo 71, 42124, Modena, Italy.
- Marco Sebastiani
- University of Modena and Reggio Emilia, Department of Surgery, Medicine, Dentistry and Morphological Sciences with Transplant Surgery, Oncology and Regenerative Medicine Relevance, via del Pozzo 71, 42124, Modena, Italy; Rheumatology Unit, Azienda Policlinico di Modena, via del Pozzo 71, 42124, Modena, Italy.

19
Petmezas G, Cheimariotis GA, Stefanopoulos L, Rocha B, Paiva RP, Katsaggelos AK, Maglaveras N. Automated Lung Sound Classification Using a Hybrid CNN-LSTM Network and Focal Loss Function. SENSORS (BASEL, SWITZERLAND) 2022; 22:1232. [PMID: 35161977 PMCID: PMC8838187 DOI: 10.3390/s22031232] [Citation(s) in RCA: 21] [Impact Index Per Article: 10.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 01/07/2022] [Revised: 02/02/2022] [Accepted: 02/03/2022] [Indexed: 11/16/2022]
Abstract
Respiratory diseases constitute one of the leading causes of death worldwide and directly affect the patient's quality of life. Early diagnosis and patient monitoring, which conventionally include lung auscultation, are essential for the efficient management of respiratory diseases. Manual lung sound interpretation is a subjective and time-consuming process that requires high medical expertise. The capabilities of deep learning could be exploited to design robust lung sound classification models. In this paper, we propose a novel hybrid neural model that implements the focal loss (FL) function to deal with training data imbalance. Features initially extracted from short-time Fourier transform (STFT) spectrograms via a convolutional neural network (CNN) are given as input to a long short-term memory (LSTM) network that memorizes the temporal dependencies between data and classifies four types of lung sounds: normal, crackles, wheezes, and both crackles and wheezes. The model was trained and tested on the ICBHI 2017 Respiratory Sound Database and achieved state-of-the-art results using three different data splitting strategies: sensitivity 47.37%, specificity 82.46%, score 64.92% and accuracy 73.69% for the official 60/40 split; sensitivity 52.78%, specificity 84.26%, score 68.52% and accuracy 76.39% using interpatient 10-fold cross-validation; and sensitivity 60.29% and accuracy 74.57% using leave-one-out cross-validation.
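The focal loss used by this model to counter class imbalance is compact enough to write down directly. This NumPy sketch shows only the binary loss term; the paper applies it inside a CNN-LSTM, and the alpha/gamma values here are the commonly used defaults, not necessarily the paper's settings:

```python
import numpy as np

def focal_loss(p, y, gamma=2.0, alpha=0.25):
    """Binary focal loss FL(p_t) = -alpha_t * (1 - p_t)^gamma * log(p_t).
    The (1 - p_t)^gamma factor down-weights well-classified examples so
    training focuses on hard, often rare, classes."""
    p = np.clip(p, 1e-7, 1 - 1e-7)          # numerical safety for log()
    p_t = np.where(y == 1, p, 1 - p)        # probability of the true class
    alpha_t = np.where(y == 1, alpha, 1 - alpha)
    return float(np.mean(-alpha_t * (1 - p_t) ** gamma * np.log(p_t)))

y = np.array([1, 1, 0, 0])
easy = focal_loss(np.array([0.95, 0.9, 0.1, 0.05]), y)  # confident predictions
hard = focal_loss(np.array([0.55, 0.6, 0.4, 0.45]), y)  # uncertain predictions
# easy examples contribute far less loss than hard ones (hard > easy)
```

With gamma = 0 and alpha = 0.5, the expression reduces to (half of) the ordinary binary cross-entropy, which is why FL is described as a reweighting of that loss.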
Affiliation(s)
- Georgios Petmezas
- Laboratory of Computing, Medical Informatics and Biomedical—Imaging Technologies, Medical School, Aristotle University of Thessaloniki, GR 54124 Thessaloniki, Greece
- Grigorios-Aris Cheimariotis
- Laboratory of Computing, Medical Informatics and Biomedical—Imaging Technologies, Medical School, Aristotle University of Thessaloniki, GR 54124 Thessaloniki, Greece
- Leandros Stefanopoulos
- Laboratory of Computing, Medical Informatics and Biomedical—Imaging Technologies, Medical School, Aristotle University of Thessaloniki, GR 54124 Thessaloniki, Greece
- Bruno Rocha
- Centre for Informatics and Systems, Department of Informatics Engineering, University of Coimbra, 3030-290 Coimbra, Portugal
- Rui Pedro Paiva
- Centre for Informatics and Systems, Department of Informatics Engineering, University of Coimbra, 3030-290 Coimbra, Portugal
- Aggelos K. Katsaggelos
- Department of Electrical and Computer Engineering, Northwestern University, Evanston, IL 60208, USA
- Nicos Maglaveras
- Laboratory of Computing, Medical Informatics and Biomedical—Imaging Technologies, Medical School, Aristotle University of Thessaloniki, GR 54124 Thessaloniki, Greece

20
Sanchez-Perez JA, Berkebile JA, Nevius BN, Ozmen GC, Nichols CJ, Ganti VG, Mabrouk SA, Clifford GD, Kamaleswaran R, Wright DW, Inan OT. A Wearable Multimodal Sensing System for Tracking Changes in Pulmonary Fluid Status, Lung Sounds, and Respiratory Markers. SENSORS 2022; 22:s22031130. [PMID: 35161876 PMCID: PMC8838360 DOI: 10.3390/s22031130] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 12/31/2021] [Revised: 01/23/2022] [Accepted: 01/29/2022] [Indexed: 12/17/2022]
Abstract
Heart failure (HF) exacerbations, characterized by pulmonary congestion and breathlessness, require frequent hospitalizations, often resulting in poor outcomes. Current methods for tracking lung fluid and respiratory distress are unable to produce continuous, holistic measures of cardiopulmonary health. We present a multimodal sensing system that captures bioimpedance spectroscopy (BIS), multi-channel lung sounds from four contact microphones, multi-frequency impedance pneumography (IP), temperature, and kinematics to track changes in cardiopulmonary status. We first validated the system on healthy subjects (n = 10) and then conducted a feasibility study on patients (n = 14) with HF in clinical settings. Three measurements were taken throughout the course of hospitalization, and parameters relevant to lung fluid status—the ratio of the resistances at 5 kHz to those at 150 kHz (K)—and respiratory timings (e.g., respiratory rate) were extracted. We found a statistically significant increase in K (p < 0.05) from admission to discharge and observed respiratory timings in physiologically plausible ranges. The IP-derived respiratory signals and lung sounds were sensitive enough to detect abnormal respiratory patterns (Cheyne–Stokes) and inspiratory crackles from patient recordings, respectively. We demonstrated that the proposed system is suitable for detecting changes in pulmonary fluid status and capturing high-quality respiratory signals and lung sounds in a clinical setting.
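The lung-fluid parameter K extracted by this system is a simple ratio of low- to high-frequency resistance. The sketch below illustrates the computation; the resistance readings are invented for illustration and are not values from the paper:

```python
def fluid_index(r_5khz, r_150khz):
    """K = R(5 kHz) / R(150 kHz). Low-frequency current is largely confined
    to extracellular fluid, so accumulating (conductive) lung fluid pulls
    R(5 kHz), and hence K, down; K rises again as congestion resolves."""
    return r_5khz / r_150khz

# Hypothetical bioimpedance spectroscopy readings (ohms)
k_admission = fluid_index(420.0, 350.0)   # congested: more extracellular fluid
k_discharge = fluid_index(480.0, 355.0)   # decongested after treatment
# k_discharge > k_admission, matching the statistically significant
# increase in K from admission to discharge reported in the study
```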
Affiliation(s)
- Jesus Antonio Sanchez-Perez
- School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA 30313, USA
- John A. Berkebile
- School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA 30313, USA
- Brandi N. Nevius
- Woodruff School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, GA 30332, USA
- Goktug C. Ozmen
- School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA 30313, USA
- Christopher J. Nichols
- Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology, Emory University, Atlanta, GA 30332, USA
- Venu G. Ganti
- Bioengineering Graduate Program, Georgia Institute of Technology, Atlanta, GA 30332, USA
- Samer A. Mabrouk
- School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA 30313, USA
- Gari D. Clifford
- Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology, Emory University, Atlanta, GA 30332, USA
- Department of Biomedical Informatics, Emory University, Atlanta, GA 30332, USA
- Rishikesan Kamaleswaran
- Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology, Emory University, Atlanta, GA 30332, USA
- Department of Biomedical Informatics, Emory University, Atlanta, GA 30332, USA
- Department of Emergency Medicine, Emory University, Atlanta, GA 30332, USA
- David W. Wright
- Department of Emergency Medicine, Emory University, Atlanta, GA 30332, USA
- Omer T. Inan
- School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA 30313, USA
- Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology, Emory University, Atlanta, GA 30332, USA

21
CoCross: An ICT Platform Enabling Monitoring, Recording and Fusion of Clinical Information, Chest Sounds and Imaging of COVID-19 ICU Patients. Healthcare (Basel) 2022; 10:healthcare10020276. [PMID: 35206889 PMCID: PMC8871733 DOI: 10.3390/healthcare10020276] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/15/2021] [Revised: 01/24/2022] [Accepted: 01/28/2022] [Indexed: 12/04/2022] Open
Abstract
Monitoring and treatment of severely ill COVID-19 patients in the ICU poses many challenges. The effort to understand the pathophysiology and progress of the disease requires high-quality annotated multi-parameter databases. We present CoCross, a platform that enables the monitoring and fusion of clinical information from in-ICU COVID-19 patients into an annotated database. CoCross consists of three components: (1) the CoCross4Pros native Android application, a modular application managing the interaction with portable medical devices; (2) the cloud-based data management services built upon HL7 FHIR and ontologies; (3) the web-based application for intensivists, providing real-time review and analytics of the acquired measurements and auscultations. The platform has been successfully deployed since June 2020 in two ICUs in Greece, resulting in a dynamic unified annotated database integrating clinical information with chest sounds and diagnostic imaging. To date, multisource data from 176 ICU patients have been acquired and imported into the CoCross database, corresponding to a five-day average monitoring period and including a dataset with 3477 distinct auscultations. The platform is well accepted and positively rated by the users regarding the overall experience.
22
Pessoa D, Rocha BM, Cheimariotis GA, Haris K, Strodthoff C, Kaimakamis E, Maglaveras N, Frerichs I, de Carvalho P, Paiva RP. Classification of Electrical Impedance Tomography Data Using Machine Learning. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2021; 2021:349-353. [PMID: 34891307 DOI: 10.1109/embc46164.2021.9629961] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
Patients suffering from pulmonary diseases typically exhibit pathological lung ventilation in terms of homogeneity. Electrical Impedance Tomography (EIT) is a non-invasive imaging method that allows the distribution of ventilation in the lungs to be analyzed and quantified. In this article, we present a new approach to promote the use of EIT data and the implementation of new clinical applications for differential diagnosis, with the development of several machine learning models to discriminate between EIT data from healthy and non-healthy subjects. EIT data from 16 subjects were acquired: 5 healthy and 11 non-healthy subjects (with multiple pulmonary conditions). Preliminary results have shown accuracies of 66% in challenging evaluation scenarios. The results suggest that the pairing of EIT feature engineering methods with machine learning could be further explored and applied in the diagnosis and monitoring of patients suffering from lung diseases. We also introduce the use of a new feature in the context of EIT data analysis (Impedance Curve Correlation).
23
Nikolaizik W, Wuensch L, Bauck M, Gross V, Sohrabi K, Weissflog A, Hildebrandt O, Koehler U, Weber S. Pilot study on nocturnal monitoring of crackles in children with pneumonia. ERJ Open Res 2021; 7:00284-2021. [PMID: 34853781 PMCID: PMC8628192 DOI: 10.1183/23120541.00284-2021] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/26/2021] [Accepted: 09/09/2021] [Indexed: 11/13/2022] Open
Abstract
BACKGROUND The clinical diagnosis of pneumonia is usually based on crackles at auscultation, but it is not yet clear what kind of crackles are characteristic of pneumonia in children. Lung sound monitoring can be used as a "longtime stethoscope". The aim of this pilot study was therefore to use a lung sound monitoring system to detect crackles and to differentiate between fine and coarse crackles in children with acute pneumonia. The change in crackles during the course of the disease will be investigated in a follow-up study. PATIENTS AND METHODS Crackles were recorded overnight from 22:00 to 06:00 h in 30 children with radiographically confirmed pneumonia. A total of 28 800 recorded 30-s epochs were audiovisually analysed for fine and coarse crackles. RESULTS Fine and coarse crackles were recognised in every patient with pneumonia, but the number of epochs with and without crackles varied widely among patients: fine crackles were detected in 40±22% of epochs (mean±sd), coarse crackles in 76±20%. The predominant localisation of crackles recorded during overnight monitoring was in accordance with the radiographic infiltrates and classical auscultation in most patients. The distribution of crackles was fairly equal throughout the night. However, individual patients had time periods without any crackles, so the diagnosis of pneumonia might be missed at sporadic auscultation. CONCLUSION Nocturnal monitoring can reliably detect fine and coarse crackles in children with pneumonia.
Affiliation(s)
- Wilfried Nikolaizik
- Dept of Pediatric Pulmonology, Children's Hospital, Philipps-University, Marburg, Germany
- Lisa Wuensch
- Dept of Pediatric Pulmonology, Children's Hospital, Philipps-University, Marburg, Germany
- Monika Bauck
- Dept of Pediatric Pulmonology, Children's Hospital, Philipps-University, Marburg, Germany
- Volker Gross
- Faculty of Health Sciences, University of Applied Sciences, Giessen, Germany
- Keywan Sohrabi
- Faculty of Health Sciences, University of Applied Sciences, Giessen, Germany
- Olaf Hildebrandt
- Division of Respiratory and Critical Care Medicine, Philipps-University, Marburg, Germany
- Ulrich Koehler
- Division of Respiratory and Critical Care Medicine, Philipps-University, Marburg, Germany
- Stefanie Weber
- Dept of Pediatric Pulmonology, Children's Hospital, Philipps-University, Marburg, Germany

24
Hsu FS, Huang SR, Huang CW, Huang CJ, Cheng YR, Chen CC, Hsiao J, Chen CW, Chen LC, Lai YC, Hsu BF, Lin NJ, Tsai WL, Wu YL, Tseng TL, Tseng CT, Chen YT, Lai F. Benchmarking of eight recurrent neural network variants for breath phase and adventitious sound detection on a self-developed open-access lung sound database-HF_Lung_V1. PLoS One 2021; 16:e0254134. [PMID: 34197556 PMCID: PMC8248710 DOI: 10.1371/journal.pone.0254134] [Citation(s) in RCA: 22] [Impact Index Per Article: 7.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/29/2021] [Accepted: 06/20/2021] [Indexed: 01/15/2023] Open
Abstract
A reliable, remote, and continuous real-time respiratory sound monitor with automated respiratory sound analysis ability is urgently required in many clinical scenarios, such as monitoring the disease progression of coronavirus disease 2019, to replace conventional auscultation with a handheld stethoscope. However, a robust computerized respiratory sound analysis algorithm for breath phase detection and adventitious sound detection at the recording level has not yet been validated in practical applications. In this study, we developed a lung sound database (HF_Lung_V1) comprising 9,765 audio files of lung sounds (duration of 15 s each), 34,095 inhalation labels, 18,349 exhalation labels, 13,883 continuous adventitious sound (CAS) labels (comprising 8,457 wheeze labels, 686 stridor labels, and 4,740 rhonchus labels), and 15,606 discontinuous adventitious sound labels (all crackles). We conducted benchmark tests using long short-term memory (LSTM), gated recurrent unit (GRU), bidirectional LSTM (BiLSTM), bidirectional GRU (BiGRU), convolutional neural network (CNN)-LSTM, CNN-GRU, CNN-BiLSTM, and CNN-BiGRU models for breath phase detection and adventitious sound detection. We also conducted performance comparisons between the LSTM-based and GRU-based models, between unidirectional and bidirectional models, and between models with and without a CNN. The results revealed that these models exhibited adequate performance in lung sound analysis. The GRU-based models outperformed the LSTM-based models in terms of F1 scores and areas under the receiver operating characteristic curves in most of the defined tasks. Furthermore, all bidirectional models outperformed their unidirectional counterparts. Finally, the addition of a CNN improved the accuracy of lung sound analysis, especially in the CAS detection tasks.
Affiliation(s)
- Fu-Shun Hsu
- Graduate Institute of Biomedical Electronics and Bioinformatics, National Taiwan University, Taipei, Taiwan
- Department of Critical Care Medicine, Far Eastern Memorial Hospital, New Taipei, Taiwan
- Heroic Faith Medical Science Co., Ltd., Taipei, Taiwan
- Chao-Jung Huang
- Joint Research Center for Artificial Intelligence Technology and All Vista Healthcare, National Taiwan University, Taipei, Taiwan
- Yuan-Ren Cheng
- Heroic Faith Medical Science Co., Ltd., Taipei, Taiwan
- Department of Life Science, College of Life Science, National Taiwan University, Taipei, Taiwan
- Institute of Biomedical Sciences, Academia Sinica, Taipei, Taiwan
- Jack Hsiao
- HCC Healthcare Group, New Taipei, Taiwan
- Chung-Wei Chen
- Department of Critical Care Medicine, Far Eastern Memorial Hospital, New Taipei, Taiwan
- Li-Chin Chen
- Research Center for Information Technology Innovation, Academia Sinica, Taipei, Taiwan
- Yen-Chun Lai
- Heroic Faith Medical Science Co., Ltd., Taipei, Taiwan
- Bi-Fang Hsu
- Heroic Faith Medical Science Co., Ltd., Taipei, Taiwan
- Nian-Jhen Lin
- Heroic Faith Medical Science Co., Ltd., Taipei, Taiwan
- Division of Pulmonary Medicine, Far Eastern Memorial Hospital, New Taipei, Taiwan
- Wan-Ling Tsai
- Heroic Faith Medical Science Co., Ltd., Taipei, Taiwan
- Yi-Lin Wu
- Heroic Faith Medical Science Co., Ltd., Taipei, Taiwan
- Yi-Tsun Chen
- Heroic Faith Medical Science Co., Ltd., Taipei, Taiwan
- Feipei Lai
- Graduate Institute of Biomedical Electronics and Bioinformatics, National Taiwan University, Taipei, Taiwan
25
|
De La Torre Cruz J, Cañadas Quesada FJ, Ruiz Reyes N, García Galán S, Carabias Orti JJ, Peréz Chica G. Monophonic and Polyphonic Wheezing Classification Based on Constrained Low-Rank Non-Negative Matrix Factorization. SENSORS (BASEL, SWITZERLAND) 2021; 21:1661. [PMID: 33670892 PMCID: PMC7957792 DOI: 10.3390/s21051661] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 01/25/2021] [Revised: 02/17/2021] [Accepted: 02/22/2021] [Indexed: 11/21/2022]
Abstract
The appearance of wheezing sounds is widely considered by physicians as a key indicator for detecting early pulmonary disorders, or even the severity associated with respiratory diseases, as occurs in asthma and chronic obstructive pulmonary disease. From a physician's point of view, monophonic and polyphonic wheezing classification is still a challenging topic in biomedical signal processing, since both types of wheezes are sinusoidal in nature. Unlike most classification algorithms, in which interference caused by normal respiratory sounds is not addressed in depth, our first contribution proposes a novel Constrained Low-Rank Non-negative Matrix Factorization (CL-RNMF) approach which, to the best of the authors' knowledge, has never been applied to wheezing classification. It incorporates several constraints (sparseness and smoothness) and a low-rank configuration to extract the wheezing spectral content, minimizing the acoustic interference from normal respiratory sounds. The second contribution automatically analyzes the harmonic structure of the energy distribution associated with the estimated wheezing spectrogram to classify the type of wheezing. Experimental results show that: (i) the proposed method outperforms the most recent and relevant state-of-the-art wheezing classification method by approximately 8% in accuracy; (ii) unlike state-of-the-art methods based on classifiers, the proposed method uses an unsupervised approach that does not require any training.
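The core idea of a low-rank NMF with a sparseness penalty can be sketched in a few lines of NumPy: a toy magnitude "spectrogram" is factorized into a small number of spectral bases and time activations via multiplicative updates, with an L1 term biasing the activations toward sparsity. This is a generic Euclidean-cost NMF illustration under assumed toy data, not the paper's CL-RNMF, which additionally imposes smoothness constraints and a specific low-rank configuration.

```python
import numpy as np

def sparse_nmf(V, rank=2, lam=0.1, n_iter=200, seed=0):
    """Euclidean NMF, V ~ W @ H, with an L1 sparseness penalty lam on the
    activations H, fitted by multiplicative updates. A small rank forces the
    factorization to keep only the dominant (e.g. wheeze-like) components."""
    rng = np.random.default_rng(seed)
    n_freq, _n_time = V.shape
    W = rng.random((n_freq, rank)) + 1e-3   # spectral bases (columns)
    H = rng.random((rank, V.shape[1])) + 1e-3  # time activations (rows)
    eps = 1e-9                               # guards against division by zero
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + lam + eps)  # lam shrinks H (sparsity)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Toy "spectrogram": two narrowband tonal components plus a small noise floor.
rng = np.random.default_rng(1)
basis = np.zeros((40, 2))
basis[8, 0] = basis[25, 1] = 1.0
act = np.abs(rng.normal(size=(2, 60)))
V = basis @ act + 0.01 * rng.random((40, 60))

W, H = sparse_nmf(V, rank=2, lam=0.05)
err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
print(f"relative reconstruction error: {err:.3f}")
```

Because the updates only ever multiply by non-negative ratios, W and H stay non-negative throughout, and the recovered columns of W concentrate on the two planted frequency bins. A subsequent harmonic analysis of such estimated components is the kind of step the paper's second contribution performs to label a wheeze as monophonic or polyphonic.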
Affiliation(s)
- Juan De La Torre Cruz
- Department of Telecommunication Engineering, University of Jaen, Campus Cientifico-Tecnologico de Linares, Avda. de la Universidad, s/n, Linares, 23700 Jaen, Spain
- Francisco Jesús Cañadas Quesada
- Department of Telecommunication Engineering, University of Jaen, Campus Cientifico-Tecnologico de Linares, Avda. de la Universidad, s/n, Linares, 23700 Jaen, Spain
- Nicolás Ruiz Reyes
- Department of Telecommunication Engineering, University of Jaen, Campus Cientifico-Tecnologico de Linares, Avda. de la Universidad, s/n, Linares, 23700 Jaen, Spain
- Sebastián García Galán
- Department of Telecommunication Engineering, University of Jaen, Campus Cientifico-Tecnologico de Linares, Avda. de la Universidad, s/n, Linares, 23700 Jaen, Spain
- Julio José Carabias Orti
- Department of Telecommunication Engineering, University of Jaen, Campus Cientifico-Tecnologico de Linares, Avda. de la Universidad, s/n, Linares, 23700 Jaen, Spain
- Gerardo Peréz Chica
- Pneumology Clinical Management Unit of the University Hospital of Jaen, Av. del Ejercito Espanol, 10, 23007 Jaen, Spain