1. Hwang S, Lee HS, Park CH, Jung JY, Lee JC. Voice reduction in cardiac auscultation sounds with reference signals measured from vocal resonators. J Acoust Soc Am 2024;155:3822-3832. PMID: 38874464. DOI: 10.1121/10.0026237.
Abstract
This study proposes the use of vocal resonators to enhance cardiac auscultation signals and evaluates their performance for voice-noise suppression. Data were collected using two electronic stethoscopes while each study subject was talking. One collected auscultation signals from the chest, while the other collected voice signals from one of three vocal resonators (the cheek, the back of the neck, or the shoulder). The spectral subtraction method was applied to the signals. Both objective and subjective metrics were used to evaluate the quality of the enhanced signals and to identify the most effective vocal resonator for noise suppression. Our preliminary findings showed a significant improvement after enhancement and demonstrated the efficacy of vocal resonators. In a listening survey conducted with thirteen physicians, the enhanced signals received significantly better sound-quality scores than the original signals. The shoulder resonator group demonstrated significantly better sound quality than the cheek group when reducing voice sound in cardiac auscultation signals. The suggested method has the potential to support the development of an electronic stethoscope with a robust noise removal function, and significant clinical benefits are expected from the expedited preliminary diagnostic procedure.
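The spectral subtraction step can be illustrated with a minimal single-channel sketch. This is not the study's implementation: the frame length, overlap, spectral floor, and the use of the reference recording's average magnitude spectrum as the noise estimate are all assumptions made for illustration.

```python
import numpy as np

def spectral_subtract(noisy, noise_ref, frame_len=512, hop=256, beta=0.01):
    """Frame-wise magnitude spectral subtraction (illustrative sketch).

    noisy     : auscultation signal contaminated by voice
    noise_ref : reference recording approximating the voice interference
    beta      : spectral floor that limits musical-noise artifacts
    """
    window = np.hanning(frame_len)
    # Average magnitude spectrum of the reference serves as the noise estimate.
    noise_mag = np.mean(
        [np.abs(np.fft.rfft(window * noise_ref[i:i + frame_len]))
         for i in range(0, len(noise_ref) - frame_len + 1, hop)], axis=0)

    out = np.zeros(len(noisy))
    norm = np.zeros(len(noisy))
    for i in range(0, len(noisy) - frame_len + 1, hop):
        spec = np.fft.rfft(window * noisy[i:i + frame_len])
        mag = np.abs(spec)
        # Subtract the noise magnitude, keep the noisy phase, floor the result.
        clean_mag = np.maximum(mag - noise_mag, beta * mag)
        clean = clean_mag * np.exp(1j * np.angle(spec))
        out[i:i + frame_len] += np.fft.irfft(clean, frame_len) * window
        norm[i:i + frame_len] += window ** 2
    return out / np.maximum(norm, 1e-12)
```

With 50%-overlapping Hann windows, dividing by the accumulated squared window gives near-perfect reconstruction of any component absent from the noise estimate.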
Affiliation(s)
- Soyun Hwang
- Department of Pediatrics, Severance Children's Hospital, Seoul 03722, Republic of Korea
- Department of Emergency Medicine, Seoul National University Hospital, Seoul 03080, Republic of Korea
- Department of Biomedical Engineering, College of Medicine and Institute of Medical and Biological Engineering, Medical Research Center, Seoul National University, Seoul 03080, Republic of Korea
- Hee Su Lee
- Interdisciplinary Program in Bioengineering, Seoul National University Graduate School, Seoul 03080, Republic of Korea
- Chan Hun Park
- Interdisciplinary Program in Bioengineering, Seoul National University Graduate School, Seoul 03080, Republic of Korea
- Jae Yun Jung
- Department of Emergency Medicine, Seoul National University Hospital, Seoul 03080, Republic of Korea
- Jung Chan Lee
- Department of Biomedical Engineering, College of Medicine and Institute of Medical and Biological Engineering, Medical Research Center, Seoul National University, Seoul 03080, Republic of Korea
2. Crisdayanti IAPA, Nam SW, Jung SK, Kim SE. Attention Feature Fusion Network via Knowledge Propagation for Automated Respiratory Sound Classification. IEEE Open J Eng Med Biol 2024;5:383-392. PMID: 38899013. PMCID: PMC11186653. DOI: 10.1109/ojemb.2024.3402139.
Abstract
Goal: In light of the COVID-19 pandemic, the early diagnosis of respiratory diseases has become increasingly crucial. Traditional diagnostic methods such as computed tomography (CT) and magnetic resonance imaging (MRI), while accurate, often face accessibility challenges. Lung auscultation, a simpler alternative, is subjective and highly dependent on the clinician's expertise. The pandemic has further exacerbated these challenges by restricting face-to-face consultations. This study aims to overcome these limitations by developing an automated respiratory sound classification system using deep learning, facilitating remote and accurate diagnoses. Methods: We developed a deep convolutional neural network (CNN) model that utilizes spectrographic representations of respiratory sounds within an image classification framework. Our model is enhanced with attention feature fusion of low-to-high-level information based on a knowledge propagation mechanism to increase classification effectiveness. This novel approach was evaluated using the ICBHI benchmark dataset and a larger, self-collected Pediatric dataset comprising outpatient children aged 1 to 6 years. Results: The proposed CNN model with knowledge propagation demonstrated superior performance compared to existing state-of-the-art models. Specifically, our model showed higher sensitivity in detecting abnormalities in the Pediatric dataset, indicating its potential for improving the accuracy of respiratory disease diagnosis. Conclusions: The integration of a knowledge propagation mechanism into a CNN model marks a significant advancement in the field of automated diagnosis of respiratory disease. This study paves the way for more accessible and precise healthcare solutions, which is especially crucial in pandemic scenarios.
Affiliation(s)
- Ida A. P. A. Crisdayanti
- Department of Applied Artificial Intelligence, Seoul National University of Science and Technology, Seoul 01811, South Korea
- Sung Woo Nam
- Woorisoa Children's Hospital, Seoul 08291, South Korea
- Seong-Eun Kim
- Department of Applied Artificial Intelligence, Seoul National University of Science and Technology, Seoul 01811, South Korea
3. Lauwers E, Stas T, McLane I, Snoeckx A, Van Hoorenbeeck K, De Backer W, Ides K, Steckel J, Verhulst S. Exploring the link between a novel approach for computer aided lung sound analysis and imaging biomarkers: a cross-sectional study. Respir Res 2024;25:177. PMID: 38658980. PMCID: PMC11044477. DOI: 10.1186/s12931-024-02810-5.
Abstract
BACKGROUND Computer Aided Lung Sound Analysis (CALSA) aims to overcome limitations associated with standard lung auscultation by removing the subjective component and allowing quantification of sound characteristics. In this proof-of-concept study, a novel automated approach was evaluated in real patient data by comparing lung sound characteristics to structural and functional imaging biomarkers. METHODS Patients with cystic fibrosis (CF) aged > 5y were recruited in a prospective cross-sectional study. CT scans were analyzed by the CF-CT scoring method and Functional Respiratory Imaging (FRI). A digital stethoscope was used to record lung sounds at six chest locations. The following sound characteristics were determined: expiration-to-inspiration (E/I) signal power ratios within different frequency ranges, the number of crackles per respiratory phase, and wheeze parameters. Linear mixed-effects models were computed to relate CALSA parameters to imaging biomarkers on a lobar level. RESULTS 222 recordings from 25 CF patients were included. Significant associations were found between E/I ratios and structural abnormalities, of which the ratio between 200 and 400 Hz appeared to be the most clinically relevant due to its relation with bronchiectasis, mucus plugging, bronchial wall thickening and air trapping on CT. The number of crackles was also associated with multiple structural abnormalities as well as regional airway resistance determined by FRI. Wheeze parameters were not considered in the statistical analysis, since wheezing was detected in only one recording. CONCLUSIONS The present study is the first to investigate associations between auscultatory findings and imaging biomarkers, which are considered the gold standard to evaluate the respiratory system. Despite the exploratory nature of this study, the results showed various meaningful associations that highlight the potential value of automated CALSA as a novel non-invasive outcome measure in future research and clinical practice.
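The E/I power ratio described above is straightforward to compute once the respiratory phases have been segmented. A minimal sketch, assuming the phases are already available as separate arrays and using a plain periodogram estimate (the study's exact estimator and segmentation are not specified here):

```python
import numpy as np

def band_power(x, fs, f_lo, f_hi):
    """Mean power of signal x within [f_lo, f_hi) Hz (periodogram estimate)."""
    spec = np.abs(np.fft.rfft(x)) ** 2 / len(x)
    freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
    band = (freqs >= f_lo) & (freqs < f_hi)
    return spec[band].mean()

def ei_ratio(expiration, inspiration, fs, f_lo=200.0, f_hi=400.0):
    """Expiration-to-inspiration signal power ratio within a frequency band,
    defaulting to the 200-400 Hz range highlighted above."""
    return band_power(expiration, fs, f_lo, f_hi) / band_power(inspiration, fs, f_lo, f_hi)
```

Because power scales with the square of amplitude, an expiration twice as loud as the inspiration in this band yields a ratio of about 4.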
Affiliation(s)
- Eline Lauwers
- Laboratory of Experimental Medicine and Pediatrics and member of Infla-Med Research Consortium of Excellence, University of Antwerp, Wilrijk, Belgium.
- Fluidda NV, Kontich, Belgium.
- Toon Stas
- CoSys-Lab Research Group, University of Antwerp and Flanders Make Strategic Research Center, Wilrijk, Lommel, Belgium
- Ian McLane
- Sonavi Labs, Baltimore, MD, USA
- Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD, USA
- Annemiek Snoeckx
- Department of Radiology, Antwerp University Hospital, Edegem, Belgium
- Faculty of Medicine and Health Sciences, University of Antwerp, Wilrijk, Belgium
- Kim Van Hoorenbeeck
- Laboratory of Experimental Medicine and Pediatrics and member of Infla-Med Research Consortium of Excellence, University of Antwerp, Wilrijk, Belgium
- Department of Pediatrics, Antwerp University Hospital, Edegem, Belgium
- Wilfried De Backer
- Faculty of Medicine and Health Sciences, University of Antwerp, Wilrijk, Belgium
- Fluidda NV, Kontich, Belgium
- MedImprove BV, Kontich, Belgium
- Kris Ides
- Laboratory of Experimental Medicine and Pediatrics and member of Infla-Med Research Consortium of Excellence, University of Antwerp, Wilrijk, Belgium
- CoSys-Lab Research Group, University of Antwerp and Flanders Make Strategic Research Center, Wilrijk, Lommel, Belgium
- Department of Pediatrics, Antwerp University Hospital, Edegem, Belgium
- MedImprove BV, Kontich, Belgium
- Jan Steckel
- CoSys-Lab Research Group, University of Antwerp and Flanders Make Strategic Research Center, Wilrijk, Lommel, Belgium
- Stijn Verhulst
- Laboratory of Experimental Medicine and Pediatrics and member of Infla-Med Research Consortium of Excellence, University of Antwerp, Wilrijk, Belgium
- Department of Pediatrics, Antwerp University Hospital, Edegem, Belgium
4. Kono Y, Miura K, Kasai H, Ito S, Asahina M, Tanabe M, Nomura Y, Nakaguchi T. Breath Measurement Method for Synchronized Reproduction of Biological Tones in an Augmented Reality Auscultation Training System. Sensors (Basel) 2024;24:1626. PMID: 38475162. DOI: 10.3390/s24051626.
Abstract
An educational augmented reality auscultation system (EARS) is proposed to enhance the realism of auscultation training with a simulated patient. The conventional EARS cannot accurately reproduce breath sounds according to the breathing of a simulated patient because the system dictates the breathing rhythm. In this study, we propose breath measurement methods that can be integrated into the chest piece of a stethoscope, investigating approaches based on thoracic variations and on the frequency characteristics of breath sounds. An accelerometer, a magnetic sensor, a gyro sensor, a pressure sensor, and a microphone were selected as the sensors. For the magnetic sensor, we proposed detecting the breathing waveform from changes in the magnetic field produced by a magnet as the stethoscope surface deforms with thoracic movement. For breath sound measurement, the frequency spectra of the breath sounds acquired by the built-in microphone were calculated, and the breathing waveforms were obtained from the difference in spectral characteristics between exhalation and inhalation. The results showed that the average correlation coefficient with the reference waveform reached 0.45, indicating the effectiveness of this approach for breath measurement. The evaluations also suggest that more accurate breathing waveforms can be obtained by selecting the measurement method according to the breathing method and measurement point.
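The microphone-based method above derives a breathing waveform from how the spectrum differs between exhalation and inhalation. A minimal sketch of that idea, using the frame-wise spectral centroid as a stand-in for the study's spectral features (the actual features and parameters are assumptions here), plus the Pearson correlation used to score agreement with a reference waveform:

```python
import numpy as np

def breathing_waveform(audio, fs, frame_len=1024, hop=512):
    """Frame-wise spectral centroid of a breath sound as a proxy breathing
    waveform: exhalation and inhalation differ in spectral balance, so the
    centroid rises and falls with the respiratory phase (illustrative)."""
    window = np.hanning(frame_len)
    freqs = np.fft.rfftfreq(frame_len, 1.0 / fs)
    centroids = []
    for i in range(0, len(audio) - frame_len + 1, hop):
        mag = np.abs(np.fft.rfft(window * audio[i:i + frame_len]))
        centroids.append((freqs * mag).sum() / max(mag.sum(), 1e-12))
    return np.asarray(centroids)

def pearson_r(a, b):
    """Correlation coefficient between estimated and reference waveforms."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float((a * b).mean())
```

On a synthetic signal that alternates between a low-frequency "inhale" tone and a high-frequency "exhale" tone, the centroid track follows the phase alternation closely.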
Affiliation(s)
- Yukiko Kono
- Department of Medical Engineering, Graduate School of Science and Engineering, Chiba University, 1-33 Yayoicho, Inage-ku, Chiba-shi 263-8522, Chiba, Japan
- Keiichiro Miura
- Department of Cardiovascular Medicine, Graduate School of Medicine, Chiba University, 1-8-1 Inohana, Chuo-ku, Chiba-shi 260-8670, Chiba, Japan
- Hajime Kasai
- Department of Respirology, Graduate School of Medicine, Chiba University, 1-8-1 Inohana, Chuo-ku, Chiba-shi 260-8670, Chiba, Japan
- Department of Medical Education, Graduate School of Medicine, Chiba University, 1-8-1 Inohana, Chuo-ku, Chiba-shi 260-8670, Chiba, Japan
- Shoichi Ito
- Department of Medical Education, Graduate School of Medicine, Chiba University, 1-8-1 Inohana, Chuo-ku, Chiba-shi 260-8670, Chiba, Japan
- Chiba University Hospital, 1-8-1 Inohana, Chuo-ku, Chiba-shi 260-8677, Chiba, Japan
- Mayumi Asahina
- Chiba University Hospital, 1-8-1 Inohana, Chuo-ku, Chiba-shi 260-8677, Chiba, Japan
- Masahiro Tanabe
- Chiba University, 1-33 Yayoicho, Inage-ku, Chiba-shi 263-8522, Chiba, Japan
- Yukihiro Nomura
- Center for Frontier Medical Engineering, Chiba University, 1-33 Yayoicho, Inage-ku, Chiba-shi 263-8522, Chiba, Japan
- Toshiya Nakaguchi
- Center for Frontier Medical Engineering, Chiba University, 1-33 Yayoicho, Inage-ku, Chiba-shi 263-8522, Chiba, Japan
5. Sabry AH, Dallal Bashi OI, Nik Ali NH, Al Kubaisi YM. Lung disease recognition methods using audio-based analysis with machine learning. Heliyon 2024;10:e26218. PMID: 38420389. PMCID: PMC10900411. DOI: 10.1016/j.heliyon.2024.e26218.
Abstract
Computer-based automated approaches and improvements in lung sound recording techniques have made lung sound-based diagnostics more reliable and less prone to subjectivity errors. Computer-based lung sound analysis makes it possible to evaluate lung sound features more thoroughly by analyzing changes in lung sound behavior, recording measurements, suppressing noise contamination, and producing graphical representations. This paper starts with a discussion of the need for this research area, providing an overview of the field and the motivations behind it. Following that, it details the survey methodology used in this work. It presents a discussion of the elements of sound-based lung disease classification using machine learning algorithms, including commonly considered datasets, feature extraction techniques, pre-processing methods, artifact removal methods, lung-heart sound separation, deep learning algorithms, and wavelet transforms of lung audio signals. The study also surveys prior reviews of lung screening, including a summary table of these references, and discusses the gaps in the existing literature. It is concluded that sound-based machine learning for the classification of respiratory diseases shows promising results. While we believe this material will prove valuable to physicians and researchers exploring sound-signal-based machine learning, large-scale investigations remain essential to solidify the findings and foster wider adoption within the medical community.
Affiliation(s)
- Ahmad H. Sabry
- Department of Medical Instrumentation Engineering Techniques, Shatt Al-Arab University College, Basra, Iraq
- Omar I. Dallal Bashi
- Medical Technical Institute, Northern Technical University, 95G2+P34, Mosul, 41002, Iraq
- N.H. Nik Ali
- School of Electrical Engineering, College of Engineering, Universiti Teknologi MARA, 40450 Shah Alam, Selangor, Malaysia
- Yasir Mahmood Al Kubaisi
- Department of Sustainability Management, Dubai Academic Health Corporation, Dubai, 4545, United Arab Emirates
6. Sang B, Wen H, Junek G, Neveu W, Di Francesco L, Ayazi F. An Accelerometer-Based Wearable Patch for Robust Respiratory Rate and Wheeze Detection Using Deep Learning. Biosensors (Basel) 2024;14:118. PMID: 38534225. DOI: 10.3390/bios14030118.
Abstract
Wheezing is a critical indicator of various respiratory conditions, including asthma and chronic obstructive pulmonary disease (COPD). Current diagnosis relies on subjective lung auscultation by physicians. Enabling this capability via a low-profile, objective wearable device for remote patient monitoring (RPM) could offer patients pre-emptive, accurate respiratory data. With this goal as our aim, we used a low-profile accelerometer-based wearable system that utilizes deep learning to objectively detect wheezing along with respiration rate using a single sensor. The miniature patch consists of a sensitive wideband MEMS accelerometer and low-noise CMOS interface electronics on a small board, which was then placed on nine conventional lung auscultation sites on the patient's chest walls to capture the pulmonary-induced vibrations (PIVs). A deep learning model was developed and compared with a deterministic time-frequency method to objectively detect wheezing in the PIV signals using data captured from 52 diverse patients with respiratory diseases. The wearable accelerometer patch, paired with the deep learning model, demonstrated high fidelity in capturing and detecting respiratory wheezes and patterns across diverse and pertinent settings. It achieved accuracy, sensitivity, and specificity of 95%, 96%, and 93%, respectively, with an AUC of 0.99 on the test set, outperforming the deterministic time-frequency approach. Furthermore, the accelerometer patch outperforms digital stethoscopes in sound analysis while offering immunity to ambient sounds, which not only enhances data quality and performance for computational wheeze detection by a significant margin but also provides a robust sensor solution that can quantify respiration patterns simultaneously.
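The reported accuracy, sensitivity, and specificity follow directly from the confusion-matrix counts on a test set. As a quick reference (the counts below are illustrative, chosen only to mirror the reported percentages, not the study's actual data):

```python
def binary_metrics(tp, fp, tn, fn):
    """Accuracy, sensitivity, and specificity from binary confusion counts."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)   # fraction of true wheezes detected
    specificity = tn / (tn + fp)   # fraction of non-wheezes correctly rejected
    return accuracy, sensitivity, specificity

# Hypothetical counts: 100 wheeze and 100 non-wheeze recordings.
binary_metrics(96, 7, 93, 4)  # → (0.945, 0.96, 0.93)
```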
Affiliation(s)
- Brian Sang
- School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA 30332, USA
- Haoran Wen
- StethX Microsystems Inc., Atlanta, GA 30308, USA
- Wendy Neveu
- Department of Medicine, Emory University School of Medicine, Atlanta, GA 30322, USA
- Lorenzo Di Francesco
- Department of Medicine, Emory University School of Medicine, Atlanta, GA 30322, USA
- Farrokh Ayazi
- School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA 30332, USA
- StethX Microsystems Inc., Atlanta, GA 30308, USA
7. Mang LD, González Martínez FD, Martinez Muñoz D, García Galán S, Cortina R. Classification of Adventitious Sounds Combining Cochleogram and Vision Transformers. Sensors (Basel) 2024;24:682. PMID: 38276373. PMCID: PMC10818433. DOI: 10.3390/s24020682.
Abstract
Early identification of respiratory irregularities is critical for improving lung health and reducing global mortality rates. The analysis of respiratory sounds plays a significant role in characterizing the respiratory system's condition and identifying abnormalities. The main contribution of this study is to investigate the classification performance when input data represented as a cochleogram is used to feed the Vision Transformer (ViT) architecture; to our knowledge, this is the first time this input-classifier combination has been applied to adventitious sound classification. Although ViT has shown promising results in audio classification tasks by applying self-attention to spectrogram patches, we extend this approach by applying the cochleogram, which captures specific spectro-temporal features of adventitious sounds. The proposed methodology is evaluated on the ICBHI dataset. We compare the classification performance of ViT with other state-of-the-art CNN approaches using the spectrogram, Mel frequency cepstral coefficients, the constant-Q transform, and the cochleogram as input data. Our results confirm the superior classification performance of combining the cochleogram and ViT, highlighting the potential of ViT for reliable respiratory sound classification. This study contributes to the ongoing efforts to develop automatic intelligent techniques that significantly increase the speed and effectiveness of respiratory disease detection, thereby addressing a critical need in the medical field.
Affiliation(s)
- Loredana Daria Mang
- Department of Telecommunication Engineering, University of Jaen, 23700 Linares, Spain
- Damian Martinez Muñoz
- Department of Telecommunication Engineering, University of Jaen, 23700 Linares, Spain
- Sebastián García Galán
- Department of Telecommunication Engineering, University of Jaen, 23700 Linares, Spain
- Raquel Cortina
- Department of Computer Science, University of Oviedo, 33003 Oviedo, Spain
8. Yoo JY, Oh S, Shalish W, Maeng WY, Cerier E, Jeanne E, Chung MK, Lv S, Wu Y, Yoo S, Tzavelis A, Trueb J, Park M, Jeong H, Okunzuwa E, Smilkova S, Kim G, Kim J, Chung G, Park Y, Banks A, Xu S, Sant'Anna GM, Weese-Mayer DE, Bharat A, Rogers JA. Wireless broadband acousto-mechanical sensing system for continuous physiological monitoring. Nat Med 2023;29:3137-3148. PMID: 37973946. DOI: 10.1038/s41591-023-02637-5.
Abstract
The human body generates various forms of subtle, broadband acousto-mechanical signals that contain information on cardiorespiratory and gastrointestinal health with potential application for continuous physiological monitoring. Existing device options, ranging from digital stethoscopes to inertial measurement units, offer useful capabilities but have disadvantages such as restricted measurement locations that prevent continuous, longitudinal tracking and that constrain their use to controlled environments. Here we present a wireless, broadband acousto-mechanical sensing network that circumvents these limitations and provides information on processes including slow movements within the body, digestive activity, respiratory sounds and cardiac cycles, all with clinical grade accuracy and independent of artifacts from ambient sounds. This system can also perform spatiotemporal mapping of the dynamics of gastrointestinal processes and airflow into and out of the lungs. To demonstrate the capabilities of this system we used it to monitor constrained respiratory airflow and intestinal motility in neonates in the neonatal intensive care unit (n = 15), and to assess regional lung function in patients undergoing thoracic surgery (n = 55). This broadband acousto-mechanical sensing system holds the potential to help mitigate cardiorespiratory instability and manage disease progression in patients through continuous monitoring of physiological signals, in both the clinical and nonclinical setting.
Affiliation(s)
- Jae-Young Yoo
- Querrey Simpson Institute for Bioelectronics, Northwestern University, Evanston, IL, USA
- Seyong Oh
- Division of Electrical Engineering, Hanyang University ERICA, Ansan, Republic of Korea
- Wissam Shalish
- Neonatal Division, Department of Pediatrics, McGill University Health Center, Montreal, Quebec, Canada
- Woo-Youl Maeng
- Querrey Simpson Institute for Bioelectronics, Northwestern University, Evanston, IL, USA
- Emily Cerier
- Division of Thoracic Surgery, Department of Surgery, Feinberg School of Medicine, Northwestern University, Chicago, IL, USA
- Emily Jeanne
- Neonatal Division, Department of Pediatrics, McGill University Health Center, Montreal, Quebec, Canada
- Myung-Kun Chung
- School of Electrical Engineering, Korea Advanced Institute of Science and Technology, Daejeon, Republic of Korea
- Shasha Lv
- Neonatal Division, Department of Pediatrics, McGill University Health Center, Montreal, Quebec, Canada
- Yunyun Wu
- Querrey Simpson Institute for Bioelectronics, Northwestern University, Evanston, IL, USA
- Seonggwang Yoo
- Querrey Simpson Institute for Bioelectronics, Northwestern University, Evanston, IL, USA
- Andreas Tzavelis
- Querrey Simpson Institute for Bioelectronics, Northwestern University, Evanston, IL, USA
- Jacob Trueb
- Querrey Simpson Institute for Bioelectronics, Northwestern University, Evanston, IL, USA
- Minsu Park
- Department of Polymer Science and Engineering, Dankook University, Yongin, Republic of Korea
- Hyoyoung Jeong
- Department of Electrical and Computer Engineering, University of California, Davis, CA, USA
- Efe Okunzuwa
- Division of Thoracic Surgery, Department of Surgery, Feinberg School of Medicine, Northwestern University, Chicago, IL, USA
- Slobodanka Smilkova
- Department of Electrical and Computer Engineering, Northwestern University, Evanston, IL, USA
- Gyeongwu Kim
- Adlai E. Stevenson High School, Lincolnshire, IL, USA
- Junha Kim
- Department of Advanced Materials Engineering for Information and Electronics, Kyung Hee University, Gyeonggi-do, Republic of Korea
- Gooyoon Chung
- Department of Advanced Materials Engineering for Information and Electronics, Kyung Hee University, Gyeonggi-do, Republic of Korea
- Yoonseok Park
- Department of Advanced Materials Engineering for Information and Electronics, Kyung Hee University, Gyeonggi-do, Republic of Korea
- Anthony Banks
- Querrey Simpson Institute for Bioelectronics, Northwestern University, Evanston, IL, USA
- Shuai Xu
- Querrey Simpson Institute for Bioelectronics, Northwestern University, Evanston, IL, USA
- Sibel Health, Niles, IL, USA
- Guilherme M Sant'Anna
- Neonatal Division, Department of Pediatrics, McGill University Health Center, Montreal, Quebec, Canada
- Debra E Weese-Mayer
- Department of Pediatrics, Northwestern University Feinberg School of Medicine, Chicago, IL, USA
- Division of Autonomic Medicine, Department of Pediatrics, Ann & Robert H. Lurie Children's Hospital of Chicago, Chicago, IL, USA
- Stanley Manne Children's Research Institute, Chicago, IL, USA
- Ankit Bharat
- Division of Thoracic Surgery, Department of Surgery, Feinberg School of Medicine, Northwestern University, Chicago, IL, USA.
- John A Rogers
- Querrey Simpson Institute for Bioelectronics, Northwestern University, Evanston, IL, USA.
9. Abdul Sattar Shaikh A, Bhargavi MS, Pavan Kumar C. Weighted aggregation through probability based ranking: An optimized federated learning architecture to classify respiratory diseases. Comput Methods Programs Biomed 2023;242:107821. PMID: 37776709. DOI: 10.1016/j.cmpb.2023.107821.
Abstract
Background and Objective: Respiratory diseases are among the leading chronic illnesses in the world, according to World Health Organization reports. They are diagnosed through auscultation, in which a medical professional listens through a stethoscope for anomalies in the sounds of air moving in the lungs. This method requires extensive experience and can be misinterpreted. To address this issue, we introduce an AI-based solution that listens to lung sounds and classifies the respiratory disease detected. Since this work deals with medical data that is closely guarded due to privacy concerns, we introduce a deep learning solution to classify the diseases together with a custom federated learning (FL) approach that further improves the accuracy of the deep learning model while maintaining data privacy. The federated learning architecture preserves data privacy and facilitates a distributed learning system for medical infrastructures. Methods: The approach utilizes a Generative Adversarial Network (GAN)-based federated learning scheme to ensure data privacy. The GANs synthesize new lung sounds, which are converted to spectrograms and used to train a neural network that classifies four lung diseases, heart attack, and normal breathing patterns. Furthermore, to address performance loss during FL, we propose a new "Weighted Aggregation through Probability-based Ranking (FedWAPR)" algorithm for optimizing the FL aggregation process. FedWAPR takes inspiration from the exponential distribution function and ranks better-performing clients accordingly. Results and Conclusion: The trained model achieved a test accuracy of about 92% while classifying various respiratory diseases and heart failure. The novel FedWAPR approach significantly outperformed FedAvg as the FL aggregation function. With this improved learning approach, a patient can be screened for respiratory diseases without extensive recording of sensitive data or the need to secure each data sample. In a decentralized training runtime, the trained model successfully classifies various respiratory diseases and heart failure from lung sounds with a test accuracy on par with a centralized model.
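The FedWAPR aggregation itself is not spelled out in the abstract. The sketch below is an assumed stand-in that captures the stated idea: rank clients by a validation score and weight them with an exponentially decaying function of rank before averaging their parameters. The decay rate `lam` and the scoring are hypothetical, not the paper's formula.

```python
import numpy as np

def weighted_aggregate(client_params, client_scores, lam=0.5):
    """Aggregate 1-D client parameter vectors with rank-based exponential weights.

    Higher-scoring clients receive larger weights; the exponential decay over
    rank is an assumed stand-in for the FedWAPR weighting scheme.
    """
    order = np.argsort(client_scores)[::-1]        # best-scoring client first
    ranks = np.empty(len(client_scores), dtype=int)
    ranks[order] = np.arange(len(client_scores))   # rank 0 = best client
    weights = np.exp(-lam * ranks)
    weights /= weights.sum()                       # normalize to sum to 1
    stacked = np.stack(client_params)
    return (weights[:, None] * stacked).sum(axis=0)
```

Compared with plain FedAvg (a uniform mean), the aggregate is pulled toward the better-scoring clients while still mixing in every update.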
Affiliation(s)
- Abdullah Abdul Sattar Shaikh
- Department of Computer Science and Engineering, Bangalore Institute of Technology, Bangalore, 560004, Karnataka, India.
- M S Bhargavi
- Department of Computer Science and Engineering, Bangalore Institute of Technology, Bangalore, 560004, Karnataka, India.
- Pavan Kumar C
- Department of Computer Science and Engineering, Indian Institute of Information Technology Dharwad, Dharwad, 580009, Karnataka, India.
10. Kim H, Koh D, Jung Y, Han H, Kim J, Joo Y. Breathing sounds analysis system for early detection of airway problems in patients with a tracheostomy tube. Sci Rep 2023;13:21029. PMID: 38030682. PMCID: PMC10687247. DOI: 10.1038/s41598-023-47904-0.
Abstract
To prevent immediate mortality in patients with a tracheostomy tube, it is essential to ensure timely suctioning or replacement of the tube. Breathing sounds at the entrance of tracheostomy tubes were recorded with a microphone and analyzed using a spectrogram to detect airway problems. The sounds were classified into three categories based on the waveform of the spectrogram according to the obstacle status: normal breathing sounds (NS), vibrant breathing sounds (VS) caused by movable obstacles, and sharp breathing sounds (SS) caused by fixed obstacles. A total of 3950 breathing sounds from 23 patients were analyzed. Despite neither the patients nor the medical staff recognizing any airway problems, the number and percentage of NS, VS, and SS were 1449 (36.7%), 1313 (33.2%), and 1188 (30.1%), respectively. Artificial intelligence (AI) was utilized to automatically classify breathing sounds. MobileNet and Inception_v3 exhibited the highest sensitivity and specificity scores of 0.9441 and 0.9414, respectively. When classifying into three categories, ResNet_50 showed the highest accuracy of 0.9027, and AlexNet showed the highest accuracy of 0.9660 in abnormal sounds. Classifying breathing sounds into three categories is very useful in deciding whether to suction or change the tracheostomy tubes, and AI can accomplish this with high accuracy.
Affiliation(s)
- Hyunbum Kim
- Department of Otorhinolaryngology-Head and Neck Surgery, College of Medicine, The Catholic University of Korea, 2 Sosa-dong, Wonmi-gu, Bucheon, Kyounggi-do, 14647, Republic of Korea
- Daeyeon Koh
- School of Mechanical Engineering, Yonsei University, 50 Yonsei-Ro, Seodaemun-Gu, Seoul, 03722, Republic of Korea
- Yohan Jung
- School of Mechanical Engineering, Yonsei University, 50 Yonsei-Ro, Seodaemun-Gu, Seoul, 03722, Republic of Korea
- Hyunjun Han
- School of Mechanical Engineering, Yonsei University, 50 Yonsei-Ro, Seodaemun-Gu, Seoul, 03722, Republic of Korea
- Jongbaeg Kim
- School of Mechanical Engineering, Yonsei University, 50 Yonsei-Ro, Seodaemun-Gu, Seoul, 03722, Republic of Korea
- Younghoon Joo
- Department of Otorhinolaryngology-Head and Neck Surgery, College of Medicine, The Catholic University of Korea, 2 Sosa-dong, Wonmi-gu, Bucheon, Kyounggi-do, 14647, Republic of Korea
11
Paranjpe MD, Sane SV. How do parents of wheezing children report their symptoms? A single centre cross-sectional observational study. Lung India 2023; 40:521-526. [PMID: 37961960 PMCID: PMC10723213 DOI: 10.4103/lungindia.lungindia_183_23]
Abstract
Background Reported wheeze is of major relevance in the diagnosis and management of asthma and epidemiological studies on asthma prevalence. Our aim was to investigate the understanding of this term by parents and how they reported it to clinicians. Methods A single-centre cross-sectional observational study was carried out at a tertiary care hospital. Parents of wheezing children self-completed a written questionnaire, which was analysed to understand parental understanding of the term wheeze and the main symptoms noticed by them. Their responses were compared to the operational definition used in the ISAAC study. Results Questionnaires from 101 parents were analysed, out of which 50 children had an audible wheeze and 51 had an auscultatory wheeze. In our study, when asked about the main thing they noticed, 90 parents (89%) used non-auditory cues to identify wheeze, with the main presenting complaint being cough (n = 43, 42.6%), and only 4 (4%) reported wheezing. Even among the audible wheezers, only 7 (14%) used an auditory cue (alone or with some other cue) to describe their child's symptoms. Forty-seven parents knew the term wheeze, of which 19 parents (18.8%, N = 101) localised it to the chest, matching the epidemiological definition used in the ISAAC study. Conclusion The word wheeze was not commonly used to describe a child's symptoms in our setting, even when the child was actively wheezing. Parents often use colloquial equivalents, nonspecific terms and other clinical cues such as coughing while reporting their child's symptoms. The parental concept of "wheezing" is different from epidemiological definitions.
Affiliation(s)
- Sudhir Vinod Sane
- Department of Pediatrics, Jupiter Hospital, Thane, Maharashtra, India
12
Sakama T, Ichinose M, Obara T, Shibata M, Kagawa T, Takakura H, Hirai K, Furuya H, Kato M, Mochizuki H. Effect of wheeze and lung function on lung sound parameters in children with asthma. Allergol Int 2023; 72:545-550. [PMID: 36935346 DOI: 10.1016/j.alit.2023.03.001]
Abstract
BACKGROUND In children with asthma, there are many cases in which wheeze is confirmed by auscultation despite a normal lung function, or in which lung function is decreased without wheeze. Using an objective lung sound analysis, we examined the effect of wheeze and lung function on lung sound parameters in children with asthma. METHODS A total of 114 children with asthma (males:females = 80:34, median age 10 years) were analyzed for their lung sound parameters using conventional methods, and wheeze and lung function were checked. The effects of wheeze and lung function on the lung sound parameters were examined. RESULTS Patients with wheeze or a decreased forced expiratory volume in 1 s (FEV1) (% pred) showed a significantly higher sound power of respiration and expiration-to-inspiration sound power ratio (E/I) than those without wheeze and with a normal FEV1 (% pred). There was no marked difference in the sound power of respiration or E/I between patients without wheeze but with a decreased FEV1 (% pred) and patients with wheeze but a normal FEV1 (% pred). CONCLUSIONS Our data suggest that bronchial constriction exists similarly in asthmatic children with wheeze and in asthmatic children with a decreased lung function. A lung sound analysis is likely to enable an accurate understanding of airway conditions.
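At its core, the expiration-to-inspiration (E/I) sound power ratio is a ratio of mean squared amplitudes over the two breath phases. A minimal sketch, assuming the phases have already been segmented; the authors' frequency bands and normalization are not reproduced here:

```python
import numpy as np

def ei_power_ratio(expiratory, inspiratory, eps=1e-12):
    """Expiration-to-inspiration (E/I) sound power ratio.

    Sketch only: mean squared amplitude per phase, with a small eps
    to avoid division by zero on silent segments.
    """
    p_exp = np.mean(np.asarray(expiratory, dtype=float) ** 2)
    p_insp = np.mean(np.asarray(inspiratory, dtype=float) ** 2)
    return p_exp / (p_insp + eps)

rng = np.random.default_rng(0)
insp = rng.normal(scale=1.0, size=4000)  # quieter inspiratory phase
expi = rng.normal(scale=2.0, size=4000)  # louder expiratory phase
ratio = ei_power_ratio(expi, insp)       # ~4: power grows with the variance
```

A higher ratio indicates relatively louder expiration, the pattern the study associates with wheeze and reduced FEV1.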
Affiliation(s)
- Takashi Sakama
- Department of Pediatrics, Tokai University Hachioji Hospital, Tokyo, Japan; Department of Pediatrics, Tokai University School of Medicine, Kanagawa, Japan
- Mami Ichinose
- Department of Pediatrics, Tokai University Hachioji Hospital, Tokyo, Japan; Department of Pediatrics, Tokai University School of Medicine, Kanagawa, Japan
- Takeru Obara
- Department of Pediatrics, Tokai University Hachioji Hospital, Tokyo, Japan; Department of Pediatrics, Tokai University School of Medicine, Kanagawa, Japan
- Mayuko Shibata
- Department of Pediatrics, Tokai University Hachioji Hospital, Tokyo, Japan; Department of Pediatrics, Tokai University School of Medicine, Kanagawa, Japan
- Takanori Kagawa
- Department of Pediatrics, Tokai University Hachioji Hospital, Tokyo, Japan; Department of Pediatrics, Tokai University School of Medicine, Kanagawa, Japan
- Hiromitsu Takakura
- Department of Pediatrics, Tokai University Hachioji Hospital, Tokyo, Japan; Department of Pediatrics, Tokai University School of Medicine, Kanagawa, Japan
- Kota Hirai
- Department of Pediatrics, Tokai University Hachioji Hospital, Tokyo, Japan; Department of Pediatrics, Tokai University School of Medicine, Kanagawa, Japan
- Hiroyuki Furuya
- Department of Basic Clinical Science and Public Health, Tokai University School of Medicine, Kanagawa, Japan
- Masahiko Kato
- Department of Pediatrics, Tokai University Hachioji Hospital, Tokyo, Japan; Department of Pediatrics, Tokai University School of Medicine, Kanagawa, Japan
- Hiroyuki Mochizuki
- Department of Pediatrics, Tokai University Hachioji Hospital, Tokyo, Japan; Department of Pediatrics, Tokai University School of Medicine, Kanagawa, Japan
13
Han L, Liang W, Xie Q, Zhao J, Dong Y, Wang X, Lin L. Health Monitoring via Heart, Breath, and Korotkoff Sounds by Wearable Piezoelectret Patches. Adv Sci (Weinh) 2023; 10:e2301180. [PMID: 37607132 PMCID: PMC10558643 DOI: 10.1002/advs.202301180]
Abstract
Real-time monitoring of vital sounds from the cardiovascular and respiratory systems via wearable devices, together with modern data analysis schemes, has the potential to reveal a variety of health conditions. Here, a flexible piezoelectret sensing system is developed to examine audio physiological signals in an unobtrusive manner, including heart, Korotkoff, and breath sounds. A customized electromagnetic shielding structure is designed for precise, high-fidelity measurements, and several unique physiological sound patterns related to clinical applications are collected and analyzed. At the left chest location, the S1 and S2 segments of the heart sounds, related to cardiac systole and diastole, respectively, are successfully extracted and analyzed with good consistency with those of a commercial medical device. At the upper arm location, recorded Korotkoff sounds are used to characterize the systolic and diastolic blood pressure without a doctor or prior calibration; an Omron blood pressure monitor is used to validate these results. Breath sound detection from the lung/trachea region achieves a signal-to-noise ratio comparable to that of a medical recorder, BIOPAC, with pattern-classification capabilities for the diagnosis of respiratory diseases. Finally, a 6×6 sensor array is used to record heart sounds simultaneously at different locations of the chest, including the aortic, pulmonic, Erb's point, tricuspid, and mitral regions, in the form of mixed data resulting from the physiological activities of the four heart valves. These signals are then separated by an independent component analysis algorithm, and the individual heart sound components from specific heart valves can reveal their instantaneous behaviors for the accurate diagnosis of heart disease. Together, these demonstrations illustrate a new class of wearable healthcare detection systems with potential for advanced diagnostic schemes.
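The final demonstration unmixes multichannel chest recordings into per-valve components with independent component analysis. A toy sketch using scikit-learn's FastICA is shown below; the source waveforms, mixing matrix, and three-channel layout are illustrative assumptions, not measured heart sounds:

```python
import numpy as np
from sklearn.decomposition import FastICA

# Two synthetic periodic sources stand in for valve components.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 4000)
s1 = np.sin(2 * np.pi * 30 * t)            # smooth low-frequency source
s2 = np.sign(np.sin(2 * np.pi * 45 * t))   # non-Gaussian square-wave source
S = np.c_[s1, s2]

A = np.array([[1.0, 0.5],
              [0.4, 1.0],
              [0.8, 0.9]])                 # unknown mixing to 3 "sensors"
X = S @ A.T + 0.01 * rng.normal(size=(len(t), 3))

ica = FastICA(n_components=2, random_state=0)
S_hat = ica.fit_transform(X)               # recovered up to order and scale
```

ICA recovers the sources only up to permutation and scaling, so any downstream valve labeling would need an additional matching step.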
Affiliation(s)
- Liuyang Han
- Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen 518055, China
- Weijin Liang
- Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen 518055, China
- Qisen Xie
- Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen 518055, China
- JingJing Zhao
- Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen 518055, China
- Ying Dong
- Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen 518055, China
- Xiaohao Wang
- Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen 518055, China
- Liwei Lin
- Department of Mechanical Engineering, University of California, Berkeley, Berkeley, CA, USA
14
Huang DM, Huang J, Qiao K, Zhong NS, Lu HZ, Wang WJ. Deep learning-based lung sound analysis for intelligent stethoscope. Mil Med Res 2023; 10:44. [PMID: 37749643 PMCID: PMC10521503 DOI: 10.1186/s40779-023-00479-3]
Abstract
Auscultation is crucial for the diagnosis of respiratory system diseases. However, traditional stethoscopes have inherent limitations, such as inter-listener variability and subjectivity, and they cannot record respiratory sounds for offline/retrospective diagnosis or remote prescriptions in telemedicine. The emergence of digital stethoscopes has overcome these limitations by allowing physicians to store and share respiratory sounds for consultation and education. On this basis, machine learning, particularly deep learning, enables the fully-automatic analysis of lung sounds that may pave the way for intelligent stethoscopes. This review thus aims to provide a comprehensive overview of deep learning algorithms used for lung sound analysis to emphasize the significance of artificial intelligence (AI) in this field. We focus on each component of deep learning-based lung sound analysis systems, including the task categories, public datasets, denoising methods, and, most importantly, existing deep learning methods, i.e., the state-of-the-art approaches to convert lung sounds into two-dimensional (2D) spectrograms and use convolutional neural networks for the end-to-end recognition of respiratory diseases or abnormal lung sounds. Additionally, this review highlights current challenges in this field, including the variety of devices, noise sensitivity, and poor interpretability of deep models. To address the poor reproducibility and variety of deep learning in this field, this review also provides a scalable and flexible open-source framework that aims to standardize the algorithmic workflow and provide a solid basis for replication and future extension: https://github.com/contactless-healthcare/Deep-Learning-for-Lung-Sound-Analysis .
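The 2D-spectrogram conversion step highlighted in the review (lung sound → time-frequency image → CNN) can be sketched with a simplified triangular mel filterbank. The sampling rate, FFT size, and mel-band count below are illustrative assumptions; production systems would typically use a library implementation such as librosa:

```python
import numpy as np
from scipy import signal

def mel_filterbank(n_mels, n_fft, fs, fmin=50.0, fmax=2000.0):
    """Simplified triangular mel filterbank (HTK-style mel scale)."""
    hz_to_mel = lambda f: 2595.0 * np.log10(1.0 + f / 700.0)
    mel_to_hz = lambda m: 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    hz = mel_to_hz(np.linspace(hz_to_mel(fmin), hz_to_mel(fmax), n_mels + 2))
    bins = np.floor((n_fft + 1) * hz / fs).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for m in range(1, n_mels + 1):
        lo, c, hi = bins[m - 1], bins[m], bins[m + 1]
        fb[m - 1, lo:c] = (np.arange(lo, c) - lo) / max(c - lo, 1)  # rising edge
        fb[m - 1, c:hi] = (hi - np.arange(c, hi)) / max(hi - c, 1)  # falling edge
    return fb

def log_mel_spectrogram(x, fs=4000, n_fft=256, n_mels=32):
    """Lung sound -> 2D log-mel image suitable as CNN input."""
    _, _, S = signal.spectrogram(x, fs=fs, nperseg=n_fft)
    return np.log(mel_filterbank(n_mels, n_fft, fs) @ S + 1e-10)

rng = np.random.default_rng(0)
img = log_mel_spectrogram(rng.normal(size=4000))  # (n_mels, n_frames)
```

The resulting (n_mels × n_frames) image is what a convolutional network would consume for end-to-end recognition of respiratory diseases or abnormal lung sounds.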
Affiliation(s)
- Dong-Min Huang
- Department of Biomedical Engineering, Southern University of Science and Technology, Shenzhen, 518055, Guangdong, China
- Jia Huang
- The Third People's Hospital of Shenzhen, Shenzhen, 518112, Guangdong, China
- Kun Qiao
- The Third People's Hospital of Shenzhen, Shenzhen, 518112, Guangdong, China
- Nan-Shan Zhong
- Guangzhou Institute of Respiratory Health, China State Key Laboratory of Respiratory Disease, National Clinical Research Center for Respiratory Disease, The First Affiliated Hospital of Guangzhou Medical University, Guangzhou, 510120, China
- Hong-Zhou Lu
- The Third People's Hospital of Shenzhen, Shenzhen, 518112, Guangdong, China
- Wen-Jin Wang
- Department of Biomedical Engineering, Southern University of Science and Technology, Shenzhen, 518055, Guangdong, China
15
Kraman SS, Pasterkamp H, Wodicka GR. Smart Devices Are Poised to Revolutionize the Usefulness of Respiratory Sounds. Chest 2023; 163:1519-1528. [PMID: 36706908 PMCID: PMC10925548 DOI: 10.1016/j.chest.2023.01.024]
Abstract
The association between breathing sounds and respiratory health or disease has been exceptionally useful in the practice of medicine since the advent of the stethoscope. Remote patient monitoring technology and artificial intelligence offer the potential to develop practical means of assessing respiratory function or dysfunction through continuous assessment of breathing sounds when patients are at home, at work, or even asleep. Automated reports such as cough counts or the percentage of the breathing cycles containing wheezes can be delivered to a practitioner via secure electronic means or returned to the clinical office at the first opportunity. This has not previously been possible. The four respiratory sounds that most lend themselves to this technology are wheezes, to detect breakthrough asthma at night and even occupational asthma when a patient is at work; snoring as an indicator of OSA or adequacy of CPAP settings; cough in which long-term recording can objectively assess treatment adequacy; and crackles, which, although subtle and often overlooked, can contain important clinical information when appearing in a home recording. In recent years, a flurry of publications in the engineering literature described construction, usage, and testing outcomes of such devices. Little of this has appeared in the medical literature. The potential value of this technology for pulmonary medicine is compelling. We expect that these tiny, smart devices soon will allow us to address clinical questions that occur away from the clinic.
Affiliation(s)
- Steve S Kraman
- Department of Internal Medicine, Division of Pulmonary, Critical Care and Sleep Medicine, University of Kentucky, Lexington, KY
- Hans Pasterkamp
- Department of Pediatrics and Child Health, Max Rady College of Medicine, University of Manitoba, Winnipeg, MB, Canada
- George R Wodicka
- Weldon School of Biomedical Engineering, Purdue University, West Lafayette, IN
16
Scaramozzino MU, Levi G, Sapone G, Romeo Plastina U. Chest Examination 3.0 With Wireless Technology in a Clinical Case Based on Literature Review. Cureus 2023; 15:e39464. [PMID: 37378239 PMCID: PMC10292082 DOI: 10.7759/cureus.39464]
Abstract
Physicians use auscultation as a standard method of thoracic examination: it is simple, reliable, non-invasive, and widely accepted. Artificial intelligence (AI) is the new frontier of thoracic examination as it makes it possible to integrate all available data (clinical, instrumental, laboratory, functional), allowing for objective assessments, precise diagnoses, and even the phenotypical characterization of lung diseases. Increasing the sensitivity and specificity of examinations helps provide tailored diagnostic and therapeutic indications, which also take into account the patient's clinical history and comorbidities. Several clinical studies, mainly conducted in children, have shown a good concordance between traditional and AI-assisted auscultation in detecting fibrotic diseases. On the other hand, the use of AI for the diagnosis of obstructive pulmonary disease is still debated as it gave inconsistent results when detecting certain types of lung noises, such as wet and dry crackles. Therefore, the application of AI in clinical practice needs further investigation. In particular, the pilot case report aims to address the use of this technology in restrictive lung disease, which in this specific case is pulmonary sarcoidosis. In the case we present, data integration allowed us to make the right diagnosis, avoid invasive procedures, and reduce the costs for the national health system; we show that integrating technologies can improve the diagnosis of restrictive lung disease. Randomized controlled trials will be needed to confirm the conclusions of this preliminary work.
Affiliation(s)
- Marco Umberto Scaramozzino
- Department of Pulmonology, La Madonnina Clinic, Reggio Calabria, ITA
- Department of Thoracic Endoscopy, Tirrenia Hospital, Reggio Calabria, ITA
- Guido Levi
- Department of Pulmonology, ASST Spedali Civili, Brescia, ITA
- Department of Clinical and Experimental Sciences, University of Brescia, Brescia, ITA
- Giovanni Sapone
- Department of Cardiology, Policlinico Madonna della Consolazione, Reggio Calabria, ITA
- Ubaldo Romeo Plastina
- Department of Radiology, ECORAD Study of Radiology and Ultrasound, Reggio Calabria, ITA
17
Seah JJ, Zhao J, Wang DY, Lee HP. Review on the Advancements of Stethoscope Types in Chest Auscultation. Diagnostics (Basel) 2023; 13:1545. [PMID: 37174938 PMCID: PMC10177339 DOI: 10.3390/diagnostics13091545]
Abstract
Stethoscopes were originally designed for the auscultation of a patient's chest for the purpose of listening to lung and heart sounds. These aid medical professionals in their evaluation of the cardiovascular and respiratory systems, as well as in other applications, such as listening to bowel sounds in the gastrointestinal system or assessing for vascular bruits. Listening to internal sounds during chest auscultation aids healthcare professionals in their diagnosis of a patient's illness. We performed an extensive literature review on the currently available stethoscopes specifically for use in chest auscultation. By understanding the specificities of the different stethoscopes available, healthcare professionals can capitalize on their beneficial features, to serve both clinical and educational purposes. Additionally, the ongoing COVID-19 pandemic has also highlighted the unique application of digital stethoscopes for telemedicine. Thus, the advantages and limitations of digital stethoscopes are reviewed. Lastly, to determine the best available stethoscopes in the healthcare industry, this literature review explored various benchmarking methods that can be used to identify areas of improvement for existing stethoscopes, as well as to serve as a standard for the general comparison of stethoscope quality. The potential use of digital stethoscopes for telemedicine amidst ongoing technological advancements in wearable sensors and modern communication facilities such as 5G are also discussed. Based on the ongoing trend in advancements in wearable technology, telemedicine, and smart hospitals, understanding the benefits and limitations of the digital stethoscope is an essential consideration for potential equipment deployment, especially during the height of the current COVID-19 pandemic and, more importantly, for future healthcare crises when human and resource mobility is restricted.
Affiliation(s)
- Jun Jie Seah
- Department of Otolaryngology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore 119228, Singapore
- Jiale Zhao
- Department of Mechanical Engineering, National University of Singapore, Singapore 117575, Singapore
- De Yun Wang
- Department of Otolaryngology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore 119228, Singapore
- Infectious Diseases Translational Research Programme, Yong Loo Lin School of Medicine, National University of Singapore, Singapore 117545, Singapore
- Heow Pueh Lee
- Department of Mechanical Engineering, National University of Singapore, Singapore 117575, Singapore
18
Okamoto Y, Nguyen TV, Takahashi H, Takei Y, Okada H, Ichiki M. Highly sensitive low-frequency-detectable acoustic sensor using a piezoresistive cantilever for health monitoring applications. Sci Rep 2023; 13:6503. [PMID: 37081122 PMCID: PMC10119305 DOI: 10.1038/s41598-023-33568-3]
Abstract
This study investigates a cantilever-based pressure sensor that achieves a resolution of approximately 0.2 mPa over the frequency range of 0.1-250 Hz. A piezoresistive cantilever with ultra-high acoustic compliance is used as the sensing element. We achieved a sensitivity approximately 40 times higher than that of our previous cantilever device by realizing an ultrathin (340 nm thick) structure with large pads and narrow hinges. Based on the measurement results, the proposed sensor can measure acoustic signals with frequencies as low as 0.1 Hz. It can therefore capture low-frequency pressure and sound, which is crucial for various applications, including photoacoustic-based gas/chemical sensing and the monitoring of physiological parameters and natural disasters. We demonstrate the measurement of heart sounds with a high SNR of 58 dB, successfully capturing the first (S1) and second (S2) cardiac sounds with frequencies of 7-100 Hz and 20-45 Hz, respectively. We believe the proposed microphone will find use in applications such as wearable health monitoring, natural disaster monitoring, and high-resolution photoacoustic-based gas sensing.
Affiliation(s)
- Yuki Okamoto
- National Institute of Advanced Industrial Science and Technology (AIST), Sensing System Research Center, Tsukuba, 305-8564, Japan
- Thanh-Vinh Nguyen
- National Institute of Advanced Industrial Science and Technology (AIST), Sensing System Research Center, Tsukuba, 305-8564, Japan
- Hidetoshi Takahashi
- Department of Mechanical Engineering, Keio University, Yokohama, 223-8522, Japan
- Yusuke Takei
- National Institute of Advanced Industrial Science and Technology (AIST), Sensing System Research Center, Tsukuba, 305-8564, Japan
- Hironao Okada
- National Institute of Advanced Industrial Science and Technology (AIST), Sensing System Research Center, Tsukuba, 305-8564, Japan
- Masaaki Ichiki
- National Institute of Advanced Industrial Science and Technology (AIST), Sensing System Research Center, Tsukuba, 305-8564, Japan
19
Song W, Han J. Patch-level contrastive embedding learning for respiratory sound classification. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2022.104338]
20
Chetupalli SR, Krishnan P, Sharma N, Muguli A, Kumar R, Nanda V, Pinto LM, Ghosh PK, Ganapathy S. Multi-Modal Point-of-Care Diagnostics for COVID-19 Based on Acoustics and Symptoms. IEEE J Transl Eng Health Med 2023; 11:199-210. [PMID: 36909300 PMCID: PMC9994626 DOI: 10.1109/jtehm.2023.3250700]
Abstract
BACKGROUND The COVID-19 pandemic has highlighted the need for alternative respiratory health diagnosis methodologies that improve on existing ones with respect to time, cost, physical distancing and detection performance. In this context, identifying acoustic bio-markers of respiratory diseases has received renewed interest. OBJECTIVE In this paper, we aim to design COVID-19 diagnostics based on analyzing acoustics and symptoms data. The data are composed of cough, breathing, and speech signals, and health symptom records, collected using a web application over a period of twenty months. METHODS We investigate the use of time-frequency features for the acoustic signals and binary features for encoding different health symptoms. We experiment with classifiers such as logistic regression, support vector machines and long short-term memory (LSTM) network models on the acoustic data, while decision tree models are proposed for the symptoms data. RESULTS We show that a multi-modal integration of inferences from the different acoustic signal categories and symptoms achieves an area under the curve (AUC) of 96.3%, a statistically significant improvement over any individual modality ([Formula: see text]). Experimentation with different feature representations suggests that mel-spectrogram acoustic features perform relatively better across the three kinds of acoustic signals. Further, a score analysis with data recorded from newer SARS-CoV-2 variants highlights the generalization ability of the proposed diagnostic approach. CONCLUSION The proposed method shows a promising direction for COVID-19 detection using a multi-modal dataset, while generalizing to new COVID variants.
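The multi-modal integration reported here combines per-modality classifier scores. A toy sketch of score-level fusion on synthetic features is shown below; the real system used mel-spectrogram and symptom features with LSTM and decision-tree models, whereas this sketch substitutes logistic regression on Gaussian toy data:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Toy stand-ins for three modalities (e.g., cough / breathing / speech);
# class-conditional Gaussian features are an illustrative assumption.
rng = np.random.default_rng(0)
n = 400
y = rng.integers(0, 2, size=n)
modalities = [0.8 * y[:, None] + rng.normal(size=(n, 5)) for _ in range(3)]

scores = []
for X in modalities:
    clf = LogisticRegression().fit(X[:200], y[:200])  # train on first half
    scores.append(clf.predict_proba(X[200:])[:, 1])   # score the held-out half

fused = np.mean(scores, axis=0)  # simple score-level fusion across modalities
auc_each = [roc_auc_score(y[200:], s) for s in scores]
auc_fused = roc_auc_score(y[200:], fused)
```

Because the per-modality errors are partly independent, averaging the scores typically raises the AUC above the single-modality values, which is the effect the paper quantifies.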
Affiliation(s)
- Srikanth Raj Chetupalli
- LEAP Laboratory, Department of Electrical Engineering, Indian Institute of Science, Bengaluru 560012, India
- Prashant Krishnan
- LEAP Laboratory, Department of Electrical Engineering, Indian Institute of Science, Bengaluru 560012, India
- Neeraj Sharma
- LEAP Laboratory, Department of Electrical Engineering, Indian Institute of Science, Bengaluru 560012, India
- Ananya Muguli
- LEAP Laboratory, Department of Electrical Engineering, Indian Institute of Science, Bengaluru 560012, India
- Rohit Kumar
- LEAP Laboratory, Department of Electrical Engineering, Indian Institute of Science, Bengaluru 560012, India
- Viral Nanda
- P. D. Hinduja National Hospital and Medical Research Center, Mumbai 400016, India
- Lancelot Mark Pinto
- P. D. Hinduja National Hospital and Medical Research Center, Mumbai 400016, India
- Prasanta Kumar Ghosh
- LEAP Laboratory, Department of Electrical Engineering, Indian Institute of Science, Bengaluru 560012, India
- Sriram Ganapathy
- LEAP Laboratory, Department of Electrical Engineering, Indian Institute of Science, Bengaluru 560012, India
21
Ichinose M, Obara T, Shibata M, Kagawa T, Sakama T, Takakura H, Hirai K, Furuya H, Kato M, Mochizuki H. Clinical application of a lung sound analysis in infants with respiratory syncytial virus acute bronchiolitis. Pediatr Int 2023; 65:e15605. [PMID: 37615369 DOI: 10.1111/ped.15605]
Abstract
BACKGROUND Objective investigation of the characteristics of acute bronchiolitis in infants is important for its diagnosis and treatment. METHODS Lung sound data of 50 patients diagnosed with respiratory syncytial virus (RSV) acute bronchiolitis (m:f = 29:21, median age 7 months), 20 patients with RSV acute respiratory tract infections without acute bronchiolitis (m:f = 10:10, median age 5 months) and 38 age-matched control infants (m:f = 23:15, median age 8 months) were analyzed using a conventional method and compared. Furthermore, the relationships between lung sound parameters and clinical symptoms (clinical score, length of hospital stay and SpO2 level) in the bronchiolitis and non-bronchiolitis patients were examined. RESULTS Lung sound analysis showed that the inspiratory sound power of patients with RSV respiratory tract infections was low and the expiratory sound power was high compared with those of the controls. When the patients with RSV respiratory tract infections were divided into bronchiolitis and non-bronchiolitis groups, the expiratory/inspiratory ratio of the bronchiolitis patients was greater than that of the non-bronchiolitis patients. There was no difference in clinical symptoms, clinical score or length of hospital stay between the bronchiolitis and non-bronchiolitis patients, except for the SpO2 level on admission. CONCLUSION Lung sound analysis confirmed that patients with RSV acute bronchiolitis present with marked airway narrowing. Reflecting these results, as characteristics of acute bronchiolitis, in diagnosis, treatment and subsequent management would be meaningful.
Affiliation(s)
- Mami Ichinose
- Department of Pediatrics, Tokai University Hachioji Hospital, Hachioji, Japan
- Tokyo Metropolitan Children's Medical Center, Fuchu, Japan
- Takeru Obara
- Department of Pediatrics, Tokai University Hachioji Hospital, Hachioji, Japan
- Department of Pediatrics, Tokai University School of Medicine, Isehara, Japan
- Mayuko Shibata
- Department of Pediatrics, Tokai University Hachioji Hospital, Hachioji, Japan
- Department of Pediatrics, Tokai University School of Medicine, Isehara, Japan
- Takanori Kagawa
- Department of Pediatrics, Tokai University Hachioji Hospital, Hachioji, Japan
- Department of Pediatrics, Tokai University School of Medicine, Isehara, Japan
- Takashi Sakama
- Department of Pediatrics, Tokai University Hachioji Hospital, Hachioji, Japan
- Department of Pediatrics, Tokai University School of Medicine, Isehara, Japan
- Hiromitsu Takakura
- Department of Pediatrics, Tokai University Hachioji Hospital, Hachioji, Japan
- Department of Pediatrics, Tokai University School of Medicine, Isehara, Japan
- Kota Hirai
- Department of Pediatrics, Tokai University Hachioji Hospital, Hachioji, Japan
- Department of Pediatrics, Tokai University School of Medicine, Isehara, Japan
- Hiroyuki Furuya
- Department of Basic Clinical Science and Public Health, Tokai University School of Medicine, Isehara, Japan
- Masahiko Kato
- Department of Pediatrics, Tokai University Hachioji Hospital, Hachioji, Japan
- Department of Pediatrics, Tokai University School of Medicine, Isehara, Japan
- Hiroyuki Mochizuki
- Department of Pediatrics, Tokai University Hachioji Hospital, Hachioji, Japan
- Department of Pediatrics, Tokai University School of Medicine, Isehara, Japan
| |
22
Cinyol F, Baysal U, Köksal D, Babaoğlu E, Ulaşlı SS. Incorporating support vector machine to the classification of respiratory sounds by Convolutional Neural Network. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2022.104093] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/01/2022]
23
Ghulam Nabi F, Sundaraj K, Shahid Iqbal M, Shafiq M, Planiappan R. A telemedicine software application for asthma severity levels identification using wheeze sounds classification. Biocybern Biomed Eng 2022. [DOI: 10.1016/j.bbe.2022.11.001] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/11/2022]
24
Kuruma K, Otomo T, Sakama T, Akiyama K, Takakura H, Toyama D, Hirai K, Furuya H, Kato M, Mochizuki H. Breath sound analyses of infants with respiratory syncytial virus acute bronchiolitis. Pediatr Pulmonol 2022; 57:2320-2326. [PMID: 35670233 DOI: 10.1002/ppul.26034] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 01/21/2022] [Revised: 06/02/2022] [Accepted: 06/06/2022] [Indexed: 11/11/2022]
Abstract
INTRODUCTION The reliability of a breath sound analysis using an objective method in infants has been reported. OBJECTIVE Breath sounds of infants with respiratory syncytial virus (RSV) acute bronchiolitis were analyzed via a breath sound spectrogram to evaluate their characteristics and examine their relationship with the severity. SUBJECTS AND METHODS We evaluated the inspiratory and expiratory breath sound parameters of 33 infants diagnosed with RSV acute bronchiolitis. The sound powers of inspiration and expiration were evaluated at the acute phase and recovery phase of infection. Furthermore, the relationship between the breath sound parameters and the clinical severity of acute bronchiolitis was examined. RESULTS Analyses of the breath sound spectrogram showed that the power of expiration as well as the expiration-to-inspiration sound ratio in the mid-frequency (E/I MF) was increased in the acute phase and decreased during the recovery phase. The E/I MF was inversely correlated with the SpO2 and positively correlated with the severity score. CONCLUSION In infants with RSV acute bronchiolitis, the sound power of respiration was large at the acute phase, significantly decreasing in the recovery phase. In 61% of participants, nonuniform, granular bands were shown in the low-pitched region of the expiratory spectrogram.
Affiliation(s)
- Kenta Kuruma
- Department of Pediatrics, Tokai University Hachioji Hospital, Hachioji, Tokyo, Japan
- Tokyo Metropolitan Children's Medical Center, Fuchu, Japan
- Tomofumi Otomo
- Department of Pediatrics, Tokai University Hachioji Hospital, Hachioji, Tokyo, Japan
- Department of Pediatrics, Tokai University School of Medicine, Tokyo, Japan
- Takashi Sakama
- Department of Pediatrics, Tokai University Hachioji Hospital, Hachioji, Tokyo, Japan
- Department of Pediatrics, Tokai University School of Medicine, Tokyo, Japan
- Kosuke Akiyama
- Department of Pediatrics, Tokai University Hachioji Hospital, Hachioji, Tokyo, Japan
- Department of Pediatrics, Tokai University School of Medicine, Tokyo, Japan
- Hiromitsu Takakura
- Department of Pediatrics, Tokai University Hachioji Hospital, Hachioji, Tokyo, Japan
- Department of Pediatrics, Tokai University School of Medicine, Tokyo, Japan
- Daisuke Toyama
- Department of Pediatrics, Tokai University Hachioji Hospital, Hachioji, Tokyo, Japan
- Department of Pediatrics, Tokai University School of Medicine, Tokyo, Japan
- Kota Hirai
- Department of Pediatrics, Tokai University Hachioji Hospital, Hachioji, Tokyo, Japan
- Department of Pediatrics, Tokai University School of Medicine, Tokyo, Japan
- Hiroyuki Furuya
- Department of Basic Clinical Science and Public Health, Tokai University School of Medicine, Tokyo, Japan
- Masahiko Kato
- Department of Pediatrics, Tokai University Hachioji Hospital, Hachioji, Tokyo, Japan
- Department of Pediatrics, Tokai University School of Medicine, Tokyo, Japan
- Hiroyuki Mochizuki
- Department of Pediatrics, Tokai University Hachioji Hospital, Hachioji, Tokyo, Japan
- Department of Pediatrics, Tokai University School of Medicine, Tokyo, Japan
25
Dori G, Bachner-Hinenzon N, Kasim N, Zaidani H, Perl SH, Maayan S, Shneifi A, Kian Y, Tiosano T, Adler D, Adir Y. A novel infrasound and audible machine-learning approach for the diagnosis of COVID-19. ERJ Open Res 2022; 8:00152-2022. [PMID: 36284830 PMCID: PMC9501643 DOI: 10.1183/23120541.00152-2022] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/28/2022] [Accepted: 07/29/2022] [Indexed: 12/15/2022] Open
Abstract
The COVID-19 outbreak has rapidly spread around the world, causing a global public health and economic crisis. A critical limitation in detecting COVID-19-related pneumonia is that it often manifests as a "silent pneumonia", i.e., pulmonary auscultation with a standard stethoscope sounds "normal". Chest CT is the gold standard for detecting COVID-19 pneumonia; however, radiation exposure, availability and cost preclude its use as a screening tool. In this study we hypothesized that COVID-19 pneumonia, "silent" to the human ear through a standard stethoscope, is detectable using a full-spectrum auscultation device coupled with machine-learning analysis. Lung sound signals were acquired, using a novel full-spectrum (3–2,000 Hz) stethoscope, from 164 patients with COVID-19 pneumonia, 61 patients with non-COVID-19 pneumonia and 141 healthy subjects. A machine-learning classifier was constructed and the data were classified into three groups: (1) normal lung sounds, (2) COVID-19 pneumonia and (3) non-COVID-19 pneumonia. Standard auscultation found abnormal lung sounds in 72% of the non-COVID-19 pneumonia patients, compared with only 25% of the COVID-19 pneumonia patients. The classifier's sensitivity and specificity for the detection of COVID-19 pneumonia were 97% and 93%, respectively, when analyzing the sound and infrasound data, and fell to 93% and 80% without the infrasound data (p<0.01 for the difference in ROC with and without infrasound). This study reveals that useful clinical information exists in the infrasound spectrum of COVID-19-related pneumonia, and that machine-learning analysis applied to the full spectrum of lung sounds is useful for its detection.
26
Şanlıbaba İ. Similarity measurement of fuzzy entropies of respiratory sounds and risk measurement according to credibility distributions. Soft comput 2022. [DOI: 10.1007/s00500-022-07415-y] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/15/2022]
27
Neili Z, Sundaraj K. A comparative study of the spectrogram, scalogram, melspectrogram and gammatonegram time-frequency representations for the classification of lung sounds using the ICBHI database based on CNNs. BIOMED ENG-BIOMED TE 2022; 67:367-390. [PMID: 35926850 DOI: 10.1515/bmt-2022-0180] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/02/2022] [Accepted: 06/21/2022] [Indexed: 11/15/2022]
Abstract
In lung sound classification using deep learning, many studies have considered the short-time Fourier transform (STFT) as the most commonly used 2D representation of the input data. Consequently, the STFT has been widely used as an analytical tool, but other versions of the representation have also been developed. This study aims to evaluate and compare the performance of the spectrogram, scalogram, melspectrogram and gammatonegram representations, and to provide comparative information to users regarding the suitability of these time-frequency (TF) techniques for lung sound classification. Lung sound signals used in this study were obtained from the ICBHI 2017 respiratory sound database. These lung sound recordings were converted into spectrogram, scalogram, melspectrogram and gammatonegram TF images. The four types of images were fed separately into the VGG16, ResNet-50 and AlexNet deep-learning architectures. Network performances were analyzed and compared based on accuracy, precision, recall and F1-score. The results of the analysis of the four representations using these three commonly used CNN deep-learning networks indicate that the gammatonegram and scalogram TF images coupled with ResNet-50 achieved the maximum classification accuracies.
Affiliation(s)
- Zakaria Neili
- Electronics Department, University of Badji Mokhtar Annaba, Annaba, Algeria
- Kenneth Sundaraj
- Faculty of Electronics and Computer Engineering, Universiti Teknikal Malaysia Melaka, Melaka, Malaysia
28
Borwankar S, Verma JP, Jain R, Nayyar A. Improvise approach for respiratory pathologies classification with multilayer convolutional neural networks. MULTIMEDIA TOOLS AND APPLICATIONS 2022; 81:39185-39205. [PMID: 35505670 PMCID: PMC9047583 DOI: 10.1007/s11042-022-12958-1] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/28/2021] [Revised: 02/16/2022] [Accepted: 03/09/2022] [Indexed: 06/01/2023]
Abstract
Every respiratory-related checkup includes audio samples collected from the individual through different tools (sonograph, stethoscope). This audio is analyzed to identify pathology, which requires time and effort. The work proposed in this paper aims to ease that task by diagnosing lung-related pathologies with a Convolutional Neural Network (CNN) applied to transformed features from the audio samples. The International Conference on Biomedical and Health Informatics (ICBHI) corpus was used for lung sounds. A novel approach is proposed to pre-process the data and pass it through a newly proposed CNN architecture. The combination of the pre-processing steps MFCC, Melspectrogram and Chroma CENS with the CNN improves the performance of the proposed system, which helps to make an accurate diagnosis from lung sounds. A comparative analysis shows that the proposed approach performs better than previous state-of-the-art research approaches. It also shows that a wheeze or a crackle need not be present in the lung sound to carry out the classification of respiratory pathologies.
Affiliation(s)
- Saumya Borwankar
- Institute of Technology, Nirma University, Ahmedabad, Gujarat, India
- Rachna Jain
- IT Department, Bhagwan Parshuram Institute of Technology, New Delhi, India
- Anand Nayyar
- Graduate School, Faculty of Information Technology, Duy Tan University, Da Nang, 550000, Vietnam
29
Ahmed S, Sultana S, Khan AM, Islam MS, Habib GMM, McLane IM, McCollum ED, Baqui AH, Cunningham S, Nair H. Digital auscultation as a diagnostic aid to detect childhood pneumonia: A systematic review. J Glob Health 2022; 12:04033. [PMID: 35493777 PMCID: PMC9024283 DOI: 10.7189/jogh.12.04033] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/29/2022] Open
Abstract
Background Frontline health care workers use World Health Organization Integrated Management of Childhood Illnesses (IMCI) guidelines for child pneumonia care in low-resource settings. The IMCI guideline pneumonia diagnostic criterion performs with low specificity, resulting in antibiotic overtreatment. Digital auscultation with automated lung sound analysis may improve the diagnostic performance of IMCI pneumonia guidelines. This systematic review aims to summarize the evidence on detecting adventitious lung sounds by digital auscultation with automated analysis, compared to reference physician acoustic analysis, for child pneumonia diagnosis. Methods Articles were searched from MEDLINE, Embase, CINAHL Plus, Web of Science, Global Health, IEEExplore, Scopus, and the ClinicalTrial.gov databases from the inception of each database to October 27, 2021, and reference lists of selected studies and relevant review articles were searched manually. Studies reporting the diagnostic performance of digital auscultation and/or computerized lung sound analysis compared against physicians' acoustic analysis for pneumonia diagnosis in children under the age of 5 were eligible for this systematic review. Retrieved citations were screened and eligible studies were included for extraction. Risk of bias was assessed using the Quality Assessment of Diagnostic Accuracy Studies-2 (QUADAS-2) tool. All these steps were independently performed by two authors, and disagreements between the reviewers were resolved through discussion with an arbiter. Narrative data synthesis was performed. Results A total of 3801 citations were screened and 46 full-text articles were assessed. Ten studies met the inclusion criteria. Half of the studies used a publicly available respiratory sound database to evaluate their proposed work. Reported methodologies/approaches and performance metrics for classifying adventitious lung sounds varied widely across the included studies. All included studies except one reported the overall diagnostic performance of digital auscultation/computerised sound analysis in distinguishing adventitious lung sounds, irrespective of the disease condition or age of the participants. The reported accuracies for classifying adventitious lung sounds in the included studies varied from 66.3% to 100%. However, it remained unclear to what extent these results would apply to classifying adventitious lung sounds in children with pneumonia. Conclusions This systematic review found very limited evidence on the diagnostic performance of digital auscultation for diagnosing pneumonia in children. Well-designed studies and robust reporting are required to evaluate the accuracy of digital auscultation in the paediatric population.
Affiliation(s)
- Salahuddin Ahmed
- Usher Institute, University of Edinburgh, Edinburgh, UK
- Projahnmo Research Foundation, Dhaka, Bangladesh
- Ahad M Khan
- Usher Institute, University of Edinburgh, Edinburgh, UK
- Projahnmo Research Foundation, Dhaka, Bangladesh
- Mohammad S Islam
- Usher Institute, University of Edinburgh, Edinburgh, UK
- Child Health Research Foundation, Dhaka, Bangladesh
- GM Monsur Habib
- Usher Institute, University of Edinburgh, Edinburgh, UK
- Bangladesh Primary Care Respiratory Society, Khulna, Bangladesh
- Eric D McCollum
- Global Program for Pediatric Respiratory Sciences, Eudowood Division of Paediatric Respiratory Sciences, School of Medicine, Johns Hopkins University, Baltimore, Maryland, USA
- Department of International Health, Bloomberg School of Public Health, Johns Hopkins University, Baltimore, Maryland, USA
- Abdullah H Baqui
- Department of International Health, Bloomberg School of Public Health, Johns Hopkins University, Baltimore, Maryland, USA
- Steven Cunningham
- Department of Child Life and Health, Centre for Inflammation Research, University of Edinburgh, Edinburgh, UK
- Harish Nair
- Usher Institute, University of Edinburgh, Edinburgh, UK
30
A Neural Network-Based Method for Respiratory Sound Analysis and Lung Disease Detection. APPLIED SCIENCES-BASEL 2022. [DOI: 10.3390/app12083877] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/21/2023]
Abstract
Background: Respiratory sound analysis is a research topic of growing interest, because it has the potential to automatically reveal abnormalities in the preliminary stages of lung dysfunction. Methods: In this paper, we propose a method to analyse respiratory sounds automatically. The aim is to show the effectiveness of machine learning techniques in respiratory sound analysis. A feature vector is gathered directly from breath audio and, by exploiting supervised machine learning techniques, we detect whether the feature vector is related to a patient affected by a lung disease. Moreover, the proposed method is able to characterise the lung disease as asthma, bronchiectasis, bronchiolitis, chronic obstructive pulmonary disease, pneumonia, or lower or upper respiratory tract infection. Results: A retrospective experimental analysis of 126 patients with 920 recording sessions showed the effectiveness of the proposed method. Conclusion: The experimental analysis demonstrated that it is possible to detect lung disease by exploiting machine learning techniques. We considered several supervised machine learning algorithms, obtaining the most interesting performance with the neural network model, with an F-Measure of 0.983 in lung disease detection and 0.923 in lung disease characterisation, improving on state-of-the-art performance.
31
Kim Y, Hyon Y, Lee S, Woo SD, Ha T, Chung C. The coming era of a new auscultation system for analyzing respiratory sounds. BMC Pulm Med 2022; 22:119. [PMID: 35361176 PMCID: PMC8969404 DOI: 10.1186/s12890-022-01896-1] [Citation(s) in RCA: 15] [Impact Index Per Article: 7.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/17/2021] [Accepted: 03/20/2022] [Indexed: 01/28/2023] Open
Abstract
Auscultation with a stethoscope has been an essential tool for diagnosing patients with respiratory disease. Although auscultation is non-invasive, rapid, and inexpensive, it has intrinsic limitations such as inter-listener variability and subjectivity, and the examination must be performed face-to-face. Conventional stethoscopes could not record respiratory sounds, so it was impossible to share them. Recent innovative digital stethoscopes have overcome these limitations and enabled clinicians to store and share the sounds for education and discussion. In particular, the recordable stethoscope has made it possible to analyze breathing sounds using artificial intelligence, especially neural networks. Deep learning-based analysis with an automatic feature extractor and a convolutional neural network classifier has been applied for the accurate analysis of respiratory sounds. In addition, current advances in battery technology, embedded processors with low power consumption, and integrated sensors make possible the development of wearable and wireless stethoscopes, which can help to examine patients living in areas with a shortage of doctors or those who require isolation. Challenges remain, such as the analysis of complex and mixed respiratory sounds and noise filtering, but continuous research and technological development will facilitate the transition to a new era of wearable and smart stethoscopes.
Affiliation(s)
- Yoonjoo Kim
- Division of Pulmonology and Critical Care Medicine, Department of Internal Medicine, College of Medicine, Chungnam National University, Daejeon, 34134, Korea
- YunKyong Hyon
- Division of Industrial Mathematics, National Institute for Mathematical Sciences, 70, Yuseong-daero 1689 beon-gil, Yuseong-gu, Daejeon, 34047, Republic of Korea
- Sunju Lee
- Division of Industrial Mathematics, National Institute for Mathematical Sciences, 70, Yuseong-daero 1689 beon-gil, Yuseong-gu, Daejeon, 34047, Republic of Korea
- Seong-Dae Woo
- Division of Pulmonology and Critical Care Medicine, Department of Internal Medicine, College of Medicine, Chungnam National University, Daejeon, 34134, Korea
- Taeyoung Ha
- Division of Industrial Mathematics, National Institute for Mathematical Sciences, 70, Yuseong-daero 1689 beon-gil, Yuseong-gu, Daejeon, 34047, Republic of Korea
- Chaeuk Chung
- Division of Pulmonology and Critical Care Medicine, Department of Internal Medicine, College of Medicine, Chungnam National University, Daejeon, 34134, Korea
- Infection Control Convergence Research Center, Chungnam National University School of Medicine, Daejeon, 35015, Republic of Korea
32
Automatic diagnosis of COVID-19 disease using deep convolutional neural network with multi-feature channel from respiratory sound data: Cough, voice, and breath. ALEXANDRIA ENGINEERING JOURNAL 2022; 61:1319-1334. [PMCID: PMC8214159 DOI: 10.1016/j.aej.2021.06.024] [Citation(s) in RCA: 18] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/01/2021] [Revised: 05/25/2021] [Accepted: 06/15/2021] [Indexed: 06/01/2023]
Abstract
The problem of respiratory sound classification has received considerable attention from clinical scientists and the medical research community in the last year for the diagnosis of COVID-19 disease. Artificial Intelligence (AI) based models have been deployed in the real world to identify COVID-19 disease from human-generated sounds such as voice/speech, dry cough, and breath. The Convolutional Neural Network (CNN) is used to solve many real-world problems with AI-based machines. We have proposed and implemented a multi-channeled Deep Convolutional Neural Network (DCNN) for automatic diagnosis of COVID-19 disease from human respiratory sounds such as voice, dry cough, and breath, and it gives better accuracy and performance than previous models. We applied multi-feature channels such as the data De-noising Auto Encoder (DAE) technique, GFCC (Gamma-tone Frequency Cepstral Coefficients), and IMFCC (Improved Multi-frequency Cepstral Coefficients) methods on augmented data to extract the deep features used as input to the CNN. The proposed approach improves system performance in the diagnosis of COVID-19 disease and provides better results on the COVID-19 respiratory sound dataset.
33
Kranthi Kumar L, Alphonse P. COVID-19 disease diagnosis with light-weight CNN using modified MFCC and enhanced GFCC from human respiratory sounds. THE EUROPEAN PHYSICAL JOURNAL. SPECIAL TOPICS 2022; 231:3329-3346. [PMID: 35096278 PMCID: PMC8785156 DOI: 10.1140/epjs/s11734-022-00432-w] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/15/2021] [Accepted: 12/18/2021] [Indexed: 06/02/2023]
Abstract
In the last 2 years, medical researchers and clinical scientists have paid close attention to the problem of respiratory sound classification for identifying COVID-19 disease symptoms. To date, relatively few AI-based (Artificial Intelligence) techniques have been used to detect COVID-19/SARS-CoV-2 respiratory disease symptoms from acoustic sounds generated by the human respiratory system, such as voice, breathing (inhale and exhale) sounds, and cough. We propose a light-weight Convolutional Neural Network (CNN) with Modified Mel-frequency Cepstral Coefficients (M-MFCC) using different depths and kernel sizes to classify COVID-19 and other respiratory sound disease symptoms such as asthma, pertussis, and bronchitis. The proposed network outperforms conventional feature extraction models and existing Deep Learning (DL) models for COVID-19/SARS-CoV-2 classification accuracy by 4-10%. The model's performance is compared with the COVID-19 crowdsourced benchmark dataset and is competitive. We applied different receptive fields and depths in the proposed model to capture different contextual information that should aid classification, and our experiments suggested 1 × 12 receptive fields and a depth of 5 layers for the light-weight CNN to extract and identify the features from respiratory sound data. The model is also trained and tested with different modalities of data to showcase its effectiveness in classification.
Affiliation(s)
- Lella Kranthi Kumar
- Health Analytics Research Labs, Department of Computer Applications, NIT Tiruchirappalli, Tiruchirappalli, Tamil Nadu, 620015, India
- P.J.A. Alphonse
- Health Analytics Research Labs, Department of Computer Applications, NIT Tiruchirappalli, Tiruchirappalli, Tamil Nadu, 620015, India
34
Fraiwan M, Fraiwan L, Alkhodari M, Hassanin O. Recognition of pulmonary diseases from lung sounds using convolutional neural networks and long short-term memory. JOURNAL OF AMBIENT INTELLIGENCE AND HUMANIZED COMPUTING 2022; 13:4759-4771. [PMID: 33841584 PMCID: PMC8019351 DOI: 10.1007/s12652-021-03184-y] [Citation(s) in RCA: 12] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/12/2020] [Accepted: 03/25/2021] [Indexed: 05/03/2023]
Abstract
In this paper, a study is conducted to explore the ability of deep learning to recognize pulmonary diseases from electronically recorded lung sounds. The selected dataset included a total of 103 patients obtained from locally recorded stethoscope lung sounds acquired at King Abdullah University Hospital, Jordan University of Science and Technology, Jordan. In addition, data from 110 patients were added to the dataset from the Int. Conf. on Biomedical Health Informatics publicly available challenge database. Initially, all signals were checked to have a sampling frequency of 4 kHz and segmented into 5 s segments. Then, several preprocessing steps were undertaken to ensure smoother and less noisy signals. These steps included wavelet smoothing, displacement artifact removal, and z-score normalization. The deep learning network architecture consisted of two stages: convolutional neural networks and bidirectional long short-term memory units. The training of the model was evaluated based on a tenfold cross-validation scheme using several performance evaluation metrics including Cohen's kappa, accuracy, sensitivity, specificity, precision, and F1-score. The developed algorithm achieved the highest average accuracy of 99.62% with a precision of 98.85% in classifying patients based on pulmonary disease type using CNN + BDLSTM. Furthermore, a total agreement of 98.26% was obtained between the predictions and original classes within the training scheme. This study paves the way towards implementing deep learning models in clinical settings to assist clinicians in decision making related to the recognition of pulmonary diseases. SUPPLEMENTARY INFORMATION The online version contains supplementary material available at 10.1007/s12652-021-03184-y.
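The segmentation and z-score normalization steps described in this abstract can be sketched in a few lines of numpy. The wavelet smoothing and displacement-artifact-removal stages are omitted, and the input recording is synthetic, so this is only an illustration of the preprocessing shape, not the authors' code.

```python
import numpy as np

def segment_and_normalize(x, fs=4000, seg_seconds=5):
    """Split a recording into fixed-length segments and z-score each segment."""
    seg_len = fs * seg_seconds
    n_segs = len(x) // seg_len                    # drop the trailing partial segment
    segs = x[: n_segs * seg_len].reshape(n_segs, seg_len).astype(float)
    mean = segs.mean(axis=1, keepdims=True)
    std = segs.std(axis=1, keepdims=True)
    return (segs - mean) / np.where(std == 0, 1.0, std)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    recording = rng.standard_normal(4000 * 12)    # 12 s synthetic lung sound at 4 kHz
    segments = segment_and_normalize(recording)
    print(segments.shape)                         # two full 5 s segments survive
```

Each resulting row has zero mean and unit variance, which is the form typically fed to a CNN + BiLSTM stack.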
Affiliation(s)
- M. Fraiwan
- Department of Computer Engineering, Jordan University of Science and Technology, P.O. Box 3030, Irbid, 22110, Jordan
- L. Fraiwan
- Department of Biomedical Engineering, Jordan University of Science and Technology, P.O. Box 3030, Irbid, 22110, Jordan
- M. Alkhodari
- Department of Electrical and Computer Engineering, Abu Dhabi University, Abu Dhabi, UAE
- O. Hassanin
- Department of Electrical and Computer Engineering, Abu Dhabi University, Abu Dhabi, UAE
35
Haider NS, Behera A. Computerized lung sound based classification of asthma and chronic obstructive pulmonary disease (COPD). Biocybern Biomed Eng 2022. [DOI: 10.1016/j.bbe.2021.12.004] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/19/2022]
36
Pahar M, Klopper M, Reeve B, Warren R, Theron G, Niesler T. Automatic cough classification for tuberculosis screening in a real-world environment. Physiol Meas 2021; 42. [PMID: 34649231 DOI: 10.1088/1361-6579/ac2fb8] [Citation(s) in RCA: 17] [Impact Index Per Article: 5.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/26/2021] [Accepted: 10/14/2021] [Indexed: 11/12/2022]
Abstract
Objective. The automatic discrimination between the coughing sounds produced by patients with tuberculosis (TB) and those produced by patients with other lung ailments. Approach. We present experiments based on a dataset of 1358 forced cough recordings obtained in a developing-world clinic from 16 patients with confirmed active pulmonary TB and 35 patients suffering from respiratory conditions suggestive of TB but confirmed to be TB negative. Using nested cross-validation, we have trained and evaluated five machine learning classifiers: logistic regression (LR), support vector machines, k-nearest neighbour, multilayer perceptrons and convolutional neural networks. Main Results. Although classification is possible in all cases, the best performance is achieved using LR. In combination with feature selection by sequential forward selection, our best LR system achieves an area under the ROC curve (AUC) of 0.94 using 23 features selected from a set of 78 high-resolution mel-frequency cepstral coefficients. This system achieves a sensitivity of 93% at a specificity of 95% and thus exceeds the 90% sensitivity at 70% specificity considered by the World Health Organisation (WHO) as a minimal requirement for a community-based TB triage test. Significance. The automatic classification of cough audio sounds, when applied to symptomatic patients requiring investigation for TB, can meet the WHO triage specifications for the identification of patients who should undergo expensive molecular downstream testing. This makes it a promising and viable means of low-cost, easily deployable frontline screening for TB, which can especially benefit developing countries with a heavy TB burden.
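The sequential-forward-selection idea described above can be sketched compactly: greedily add whichever feature most improves the classifier's training AUC. This numpy-only sketch uses a plain gradient-descent logistic regression and a rank-based AUC on synthetic data; it is an illustration of the technique, not the paper's implementation or its 78-MFCC feature set.

```python
import numpy as np

def fit_logreg(X, y, lr=0.5, steps=300):
    """Logistic regression via batch gradient descent; returns weights incl. bias."""
    Xb = np.hstack([np.ones((len(X), 1)), X])
    w = np.zeros(Xb.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))
        w -= lr * Xb.T @ (p - y) / len(y)
    return w

def predict(w, X):
    Xb = np.hstack([np.ones((len(X), 1)), X])
    return 1.0 / (1.0 + np.exp(-Xb @ w))

def auc(scores, y):
    """Rank-based AUC (Mann-Whitney U statistic)."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    n1 = y.sum()
    n0 = len(y) - n1
    return (ranks[y == 1].sum() - n1 * (n1 + 1) / 2) / (n0 * n1)

def forward_select(X, y, k):
    """Greedily add the feature that most improves training AUC."""
    chosen = []
    for _ in range(k):
        best = max(
            (j for j in range(X.shape[1]) if j not in chosen),
            key=lambda j: auc(
                predict(fit_logreg(X[:, chosen + [j]], y), X[:, chosen + [j]]), y
            ),
        )
        chosen.append(best)
    return chosen
```

In practice the paper evaluates candidate subsets inside nested cross-validation rather than on the training data, which this sketch omits for brevity.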
Affiliation(s)
- Madhurananda Pahar
- Department of Electrical and Electronic Engineering, Stellenbosch University, South Africa
- Marisa Klopper
- SAMRC Centre for Tuberculosis Research, Division of Molecular Biology and Human Genetics, DSI/NRF Centre of Excellence for Biomedical Tuberculosis Research, Faculty of Medicine and Health Sciences, Stellenbosch University, South Africa
- Byron Reeve
- SAMRC Centre for Tuberculosis Research, Division of Molecular Biology and Human Genetics, DSI/NRF Centre of Excellence for Biomedical Tuberculosis Research, Faculty of Medicine and Health Sciences, Stellenbosch University, South Africa
- Rob Warren
- SAMRC Centre for Tuberculosis Research, Division of Molecular Biology and Human Genetics, DSI/NRF Centre of Excellence for Biomedical Tuberculosis Research, Faculty of Medicine and Health Sciences, Stellenbosch University, South Africa
- Grant Theron
- SAMRC Centre for Tuberculosis Research, Division of Molecular Biology and Human Genetics, DSI/NRF Centre of Excellence for Biomedical Tuberculosis Research, Faculty of Medicine and Health Sciences, Stellenbosch University, South Africa
- Thomas Niesler
- Department of Electrical and Electronic Engineering, Stellenbosch University, South Africa
37
Rocha BM, Pessoa D, Cheimariotis GA, Kaimakamis E, Kotoulas SC, Tzimou M, Maglaveras N, Marques A, de Carvalho P, Paiva RP. Detection of squawks in respiratory sounds of mechanically ventilated COVID-19 patients. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2021; 2021:512-516. [PMID: 34891345 DOI: 10.1109/embc46164.2021.9630734] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
Mechanically ventilated patients typically exhibit abnormal respiratory sounds. Squawks are short inspiratory adventitious sounds that may occur in patients with pneumonia, such as COVID-19 patients. In this work we devised a method for squawk detection in mechanically ventilated patients by developing algorithms for respiratory cycle estimation, squawk candidate identification, feature extraction, and clustering. The best classifier reached an F1 of 0.48 at the sound file level and an F1 of 0.66 at the recording session level. These preliminary results are promising, as they were obtained in noisy environments. This method will give health professionals a new feature to assess the potential deterioration of critically ill patients.
38
39
Nikolaizik W, Wuensch L, Bauck M, Gross V, Sohrabi K, Weissflog A, Hildebrandt O, Koehler U, Weber S. Pilot study on nocturnal monitoring of crackles in children with pneumonia. ERJ Open Res 2021; 7:00284-2021. [PMID: 34853781 PMCID: PMC8628192 DOI: 10.1183/23120541.00284-2021] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/26/2021] [Accepted: 09/09/2021] [Indexed: 11/13/2022] Open
Abstract
BACKGROUND The clinical diagnosis of pneumonia is usually based on crackles at auscultation, but it is not yet clear what kind of crackles are the characteristic features of pneumonia in children. Lung sound monitoring can be used as a "longtime stethoscope". Therefore, the aim of this pilot study was to use a lung sound monitor system to detect crackles and to differentiate between fine and coarse crackles in children with acute pneumonia. The change of crackles during the course of the disease will be investigated in a follow-up study. PATIENTS AND METHODS Crackles were recorded overnight from 22:00 to 06:00 h in 30 children with radiographically confirmed pneumonia. The data for a total of 28 800 recorded 30-s epochs were audiovisually analysed for fine and coarse crackles. RESULTS Fine crackles and coarse crackles were recognised in every patient with pneumonia, but the number of epochs with and without crackles varied widely among the different patients: fine crackles were detected in 40±22% of epochs (mean±sd) and coarse crackles in 76±20%. The predominant localisation of crackles as recorded during overnight monitoring was in accordance with the radiographic infiltrates and the classical auscultation in most patients. The distribution of crackles was fairly equal throughout the night. However, individual patients had time periods without any crackles, so the diagnosis of pneumonia might be missed on sporadic auscultation. CONCLUSION Nocturnal monitoring can be beneficial to reliably detect fine and coarse crackles in children with pneumonia.
Affiliation(s)
- Wilfried Nikolaizik
- Dept of Pediatric Pulmonology, Children's Hospital, Philipps-University, Marburg, Germany
- Lisa Wuensch
- Dept of Pediatric Pulmonology, Children's Hospital, Philipps-University, Marburg, Germany
- Monika Bauck
- Dept of Pediatric Pulmonology, Children's Hospital, Philipps-University, Marburg, Germany
- Volker Gross
- Faculty of Health Sciences, University of Applied Sciences, Giessen, Germany
- Keywan Sohrabi
- Faculty of Health Sciences, University of Applied Sciences, Giessen, Germany
- Olaf Hildebrandt
- Division of Respiratory and Critical Care Medicine, Philipps-University, Marburg, Germany
- Ulrich Koehler
- Division of Respiratory and Critical Care Medicine, Philipps-University, Marburg, Germany
- Stefanie Weber
- Dept of Pediatric Pulmonology, Children's Hospital, Philipps-University, Marburg, Germany
40
Hui X, Zhou J, Sharma P, Conroy TB, Zhang Z, Kan EC. Wearable RF Near-Field Cough Monitoring by Frequency-Time Deep Learning. IEEE TRANSACTIONS ON BIOMEDICAL CIRCUITS AND SYSTEMS 2021; 15:756-764. [PMID: 34310320 DOI: 10.1109/tbcas.2021.3099865] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
Coughing is a common symptom of many respiratory disorders and can spread droplets of various sizes containing bacterial and viral pathogens. Mild coughs are usually overlooked in the early stage, not only because they are barely noticeable to the person and the people nearby, but also because present recording methods are not comfortable, private, or reliable for long-term monitoring. In this paper, a wearable radio-frequency (RF) sensor is presented that recognizes the mild cough signal directly from local trachea vibration characteristics and can isolate interference from nearby people. The sensor operates in the ultra-high-frequency band and couples the RF energy to the upper respiratory tract through the near field of the sensing antenna. The retrieved tissue vibration caused by the cough airflow burst can then be analyzed by a convolutional neural network trained on frequency-time spectra. The sensing antenna design is analyzed for performance improvement. In a human study of 5 participants over 100 minutes of prescribed routines, the overall recognition ratio was above 90% and the false positive ratio during other routines was below 2.09%.
41
Shuvo SB, Ali SN, Swapnil SI, Hasan T, Bhuiyan MIH. A Lightweight CNN Model for Detecting Respiratory Diseases From Lung Auscultation Sounds Using EMD-CWT-Based Hybrid Scalogram. IEEE J Biomed Health Inform 2021; 25:2595-2603. [PMID: 33373309 DOI: 10.1109/jbhi.2020.3048006] [Citation(s) in RCA: 33] [Impact Index Per Article: 11.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
Listening to lung sounds through auscultation is vital in examining the respiratory system for abnormalities. Automated analysis of lung auscultation sounds can be beneficial to the health systems in low-resource settings where there is a lack of skilled physicians. In this work, we propose a lightweight convolutional neural network (CNN) architecture to classify respiratory diseases from individual breath cycles using hybrid scalogram-based features of lung sounds. The proposed feature-set utilizes the empirical mode decomposition (EMD) and the continuous wavelet transform (CWT). The performance of the proposed scheme is studied using a patient independent train-validation-test set from the publicly available ICBHI 2017 lung sound dataset. Employing the proposed framework, weighted accuracy scores of 98.92% for three-class chronic classification and 98.70% for six-class pathological classification are achieved, which outperform well-known and much larger VGG16 in terms of accuracy by absolute margins of 1.10% and 1.11%, respectively. The proposed CNN model also outperforms other contemporary lightweight models while being computationally comparable.
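As an illustration of the scalogram half of the hybrid feature-set, a bare-bones continuous wavelet transform with a complex Morlet wavelet can be written directly in NumPy. This is a sketch under assumptions: the paper's exact wavelet, scale grid, and the EMD stage (for which a package such as PyEMD would typically be used) are not reproduced here.

```python
import numpy as np

def morlet(t, w=5.0):
    """Complex Morlet mother wavelet with center frequency parameter w."""
    return np.pi ** -0.25 * np.exp(1j * w * t) * np.exp(-t ** 2 / 2)

def cwt_scalogram(x, scales):
    """|CWT| of signal x at the given scales, via direct convolution
    with scaled, energy-normalized Morlet wavelets."""
    out = np.empty((len(scales), len(x)))
    for i, s in enumerate(scales):
        n = min(int(10 * s) + 1, len(x))        # finite wavelet support
        t = (np.arange(n) - n // 2) / s
        w = morlet(t) / np.sqrt(s)
        out[i] = np.abs(np.convolve(x, w, mode="same"))
    return out
```

Each row of the output is the signal's magnitude response at one scale; stacking rows gives the time-scale image that a CNN can consume.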
42
Ozmen GC, Safaei M, Lan L, Inan OT. A Novel Accelerometer Mounting Method for Sensing Performance Improvement in Acoustic Measurements From the Knee. JOURNAL OF VIBRATION AND ACOUSTICS 2021; 143:031006. [PMID: 34168416 PMCID: PMC8208483 DOI: 10.1115/1.4048554] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/29/2020] [Revised: 09/08/2020] [Accepted: 09/09/2020] [Indexed: 06/13/2023]
Abstract
In this study, we propose a new mounting method to improve accelerometer sensing performance in the 50 Hz-10 kHz frequency band for knee sound measurement. The proposed method includes a thin double-sided adhesive tape for mounting and a 3D-printed custom-designed backing prototype. In our mechanical setup with an electrodynamic shaker, the measurements showed a 13 dB increase in the accelerometer's sensing performance in the 1-10 kHz frequency band when it is mounted with the craft tape under 2 N backing force applied through low-friction tape. As a proof-of-concept study, knee sounds of healthy subjects (n = 10) were recorded. When the backing force was applied, we observed statistically significant (p < 0.01) incremental changes in spectral centroid, spectral roll-off frequencies, and high-frequency (1-10 kHz) root-mean-square (RMS) acceleration, while low-frequency (50 Hz-1 kHz) RMS acceleration remained unchanged. The mean spectral centroid and spectral roll-off frequencies increased from 0.8 kHz and 4.15 kHz to 1.35 kHz and 5.9 kHz, respectively. The mean high-frequency acceleration increased from 0.45 mgRMS to 0.9 mgRMS with backing. We showed that the backing force improves the sensing performance of the accelerometer when mounted with the craft tape and the proposed backing prototype. This new method has the potential to be implemented in today's wearable systems to improve the sensing performance of accelerometers in knee sound measurements.
Affiliation(s)
- Goktug C. Ozmen
- School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA 30332
- Mohsen Safaei
- School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA 30332
- Lan Lan
- Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology, Atlanta, GA 30332
- Omer T. Inan
- School of Electrical and Computer Engineering; Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology, Atlanta, GA 30332
43
Yokota T, Fukuda K, Someya T. Recent Progress of Flexible Image Sensors for Biomedical Applications. ADVANCED MATERIALS (DEERFIELD BEACH, FLA.) 2021; 33:e2004416. [PMID: 33527511 DOI: 10.1002/adma.202004416] [Citation(s) in RCA: 47] [Impact Index Per Article: 15.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/29/2020] [Revised: 08/29/2020] [Indexed: 06/12/2023]
Abstract
Flexible image sensors have attracted increasing attention as new imaging devices owing to their lightness, softness, and bendability. Because light can probe information inside the body from outside, optical-imaging-based approaches, such as X-rays, are widely used for disease diagnosis in hospitals. Unlike conventional sensors, flexible image sensors are soft and can be attached directly to a curved surface, such as the skin, for continuous, high-accuracy measurement of biometric information. They are therefore expected to find wide application in wearable devices as well as home medical care. Herein, the application of such sensors to the biomedical field is introduced. First, their individual components, photosensors and switching elements, are explained. Then, the basic parameters used to evaluate the performance of each of these elements and of the image sensors are described. Finally, examples of measuring dynamic and static biometric information using flexible image sensors, together with relevant real-world measurement cases, are presented, and recent applications of flexible image sensors in the biomedical field are introduced.
Affiliation(s)
- Tomoyuki Yokota
- Department of Electrical Engineering and Information Systems, Graduate School of Engineering, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8656, Japan
- Kenjiro Fukuda
- Center for Emergent Matter Science & Thin-Film Device Laboratory, RIKEN, 2-1 Hirosawa, Wako, Saitama, 351-0198, Japan
- Takao Someya
- Department of Electrical Engineering and Information Systems, Graduate School of Engineering, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8656, Japan
- Center for Emergent Matter Science & Thin-Film Device Laboratory, RIKEN, 2-1 Hirosawa, Wako, Saitama, 351-0198, Japan
44
Sen I, Saraclar M, Kahya YP. Differential Diagnosis of Asthma and COPD Based on Multivariate Pulmonary Sounds Analysis. IEEE Trans Biomed Eng 2021; 68:1601-1610. [PMID: 33400647 DOI: 10.1109/tbme.2021.3049288] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
Abstract
OBJECTIVE Asthma and chronic obstructive pulmonary disease (COPD) can be confused in clinical diagnosis due to overlapping symptoms. The purpose of this study is to develop a method based on multivariate pulmonary sound analysis for differential diagnosis of the two diseases. METHODS The recorded 14-channel pulmonary sound data are mathematically modeled using a multivariate (or vector) autoregressive (VAR) model, and the model parameters are fed to the classifier. Separate classifiers are assumed for each of the six sub-phases of the flow cycle, namely early/mid/late inspiration and expiration, and the six decisions are combined to reach the final decision. Parameter classification is performed in the Bayesian framework with a Gaussian mixture model (GMM) assumption for the likelihoods, and the six sub-phase decisions are combined by voting, where the weights are learned by a linear support vector machine (SVM) classifier. Fifty subjects were included in the study: 30 diagnosed with asthma and 20 with COPD. RESULTS The highest accuracy of the classifier is 98 percent, corresponding to correct classification rates of 100 and 95 percent for asthma and COPD, respectively. The most discriminative sub-phase between the two diseases is found to be mid-inspiration. CONCLUSION The methodology proves promising for asthma-COPD differentiation based on acoustic information. The results also reveal that the six sub-phases are not equally pertinent to the differentiation. SIGNIFICANCE Pulmonary sound analysis may be a complementary tool in clinical practice for differential diagnosis of asthma and COPD, especially in the absence of reliable spirometric testing.
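The VAR parameter features at the heart of this kind of method can be sketched with a plain least-squares fit. This is a minimal illustration, not the paper's implementation; the 14-channel setup, the chosen lag order, and the downstream GMM/SVM stages are omitted.

```python
import numpy as np

def var_features(X, p=2):
    """Fit a VAR(p) model x_t = A_1 x_{t-1} + ... + A_p x_{t-p} + e_t
    by least squares and return the flattened coefficients as a feature vector.
    X: array of shape (T, C) holding a C-channel sound segment."""
    T, C = X.shape
    Y = X[p:]                                                   # targets x_t
    Z = np.hstack([X[p - k : T - k] for k in range(1, p + 1)])  # lagged regressors
    A, *_ = np.linalg.lstsq(Z, Y, rcond=None)                   # shape (p*C, C)
    return A.T.ravel()                                          # length C*C*p
```

On a synthetic two-channel VAR(1) process, the recovered coefficients closely match the generating matrix, which is what makes them usable as discriminative features.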
45
Rivera-Sepulveda A, Isona M. Assessing Resident Diagnostic Skills Using a Modified Bronchiolitis Score. ACTA ACUST UNITED AC 2021; 18:11-16. [PMID: 33679039 DOI: 10.7199/ped.oncall.2021.10] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
Abstract
Background Resident milestones are objective instruments that assess a resident's growth, progression in knowledge, and clinical diagnostic reasoning, but they rely on the subjective appraisal of the supervising attending. Little is known about the use of standardized instruments that may complement the evaluation of resident diagnostic skills in the academic setting. Objectives To evaluate a modified bronchiolitis severity assessment tool by appraising the inter-rater variability and reliability between pediatric attendings and pediatric residents. Methods Cross-sectional study of children under 24 months of age who presented to a community hospital's emergency department with bronchiolitis between January and June 2014. A paired pediatric attending and resident evaluated each patient. Evaluation included age-based respiratory rate (RR), retractions, peripheral saturation, and auscultation. Cohen's kappa (K) measured inter-rater agreement. Inter-rater reliability (IRR) was assessed using a one-way random, average-measures intra-class correlation (ICC) to evaluate the degree of consistency and magnitude of disagreement between raters. A value of >0.6 was considered substantial for kappa and good internal consistency for ICC. Results Twenty patients were evaluated. Analysis showed fair agreement for the presence of retractions (K=0.31), auscultation (K=0.33), and total score (K=0.3). The RR (ICC=0.97), SpO2 (ICC=1.0), auscultation (ICC=0.77), and total score (ICC=0.84) were scored similarly across both raters, indicating excellent IRR. Identification of retractions had the least agreement across all statistical analyses. Conclusion The use of a standardized instrument, in conjunction with trained resident-teaching staff, can help identify deficiencies in clinical competencies among residents and facilitate the learning process for the identification of pertinent clinical findings.
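For reference, the agreement statistic used here, Cohen's kappa, corrects observed agreement for the agreement expected by chance and can be computed in a few lines (a generic sketch, not tied to this study's data):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: (observed agreement - chance agreement) / (1 - chance agreement)
    for two raters scoring the same items."""
    n = len(rater_a)
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n        # observed agreement
    ca, cb = Counter(rater_a), Counter(rater_b)
    p_e = sum(ca[c] * cb[c] for c in set(ca) | set(cb)) / n ** 2   # chance agreement
    return (p_o - p_e) / (1 - p_e)
```

By the study's convention, a value above 0.6 would be read as substantial agreement; the reported values around 0.3 fall in the "fair" range.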
Affiliation(s)
- Andrea Rivera-Sepulveda
- Pediatrics, Emergency Medicine, Nemours Children's Hospital, Orlando, FL, United States; University of Puerto Rico Medical Sciences Campus, School of Health Professions and School of Medicine, San Juan, Puerto Rico
- Muguette Isona
- San Juan City Hospital, Emergency Department, San Juan, Puerto Rico
46
De La Torre Cruz J, Cañadas Quesada FJ, Ruiz Reyes N, García Galán S, Carabias Orti JJ, Peréz Chica G. Monophonic and Polyphonic Wheezing Classification Based on Constrained Low-Rank Non-Negative Matrix Factorization. SENSORS (BASEL, SWITZERLAND) 2021; 21:1661. [PMID: 33670892 PMCID: PMC7957792 DOI: 10.3390/s21051661] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 01/25/2021] [Revised: 02/17/2021] [Accepted: 02/22/2021] [Indexed: 11/21/2022]
Abstract
The appearance of wheezing sounds is widely considered by physicians a key indicator for detecting early pulmonary disorders, and even the severity associated with respiratory diseases such as asthma and chronic obstructive pulmonary disease. Monophonic and polyphonic wheezing classification is still a challenging topic in biomedical signal processing since both types of wheezes are sinusoidal in nature. Unlike most classification algorithms, in which interference caused by normal respiratory sounds is not addressed in depth, our first contribution proposes a novel Constrained Low-Rank Non-negative Matrix Factorization (CL-RNMF) approach, which, to the best of the authors' knowledge, has never been applied to wheezing classification. It incorporates several constraints (sparseness and smoothness) and a low-rank configuration to extract the wheezing spectral content while minimizing acoustic interference from normal respiratory sounds. The second contribution automatically analyzes the harmonic structure of the energy distribution associated with the estimated wheezing spectrogram to classify the type of wheezing. Experimental results report that: (i) the proposed method outperforms the most recent and relevant state-of-the-art wheezing classification method by approximately 8% in accuracy; (ii) unlike state-of-the-art methods based on classifiers, the proposed method uses an unsupervised approach that does not require any training.
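The core decomposition, non-negative matrix factorization with an added sparseness penalty, can be sketched with standard multiplicative updates. This is an illustrative toy version; the paper's CL-RNMF additionally imposes smoothness and a specific low-rank configuration, which are not reproduced here.

```python
import numpy as np

def sparse_nmf(V, rank, sparsity=0.01, iters=300, seed=0):
    """Approximate a non-negative matrix V as W @ H via multiplicative updates,
    minimizing ||V - WH||_F^2 + sparsity * sum(H) (L1 penalty on H)."""
    rng = np.random.default_rng(seed)
    W = rng.random((V.shape[0], rank)) + 1e-9
    H = rng.random((rank, V.shape[1])) + 1e-9
    for _ in range(iters):
        # the sparsity term appears only in H's denominator, shrinking H toward 0
        H *= (W.T @ V) / (W.T @ W @ H + sparsity + 1e-9)
        W *= (V @ H.T) / (W @ H @ H.T + 1e-9)
    return W, H
```

Applied to a magnitude spectrogram, the columns of W act as spectral templates and the rows of H as their (sparse) activations over time, which is the mechanism the method uses to separate wheezing content from normal respiratory sounds.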
Affiliation(s)
- Juan De La Torre Cruz
- Department of Telecommunication Engineering, University of Jaen, Campus Cientifico-Tecnologico de Linares, Avda. de la Universidad, s/n, Linares, 23700 Jaen, Spain; (F.J.C.Q.); (N.R.R.); (S.G.G.); (J.J.C.O.)
- Francisco Jesús Cañadas Quesada
- Department of Telecommunication Engineering, University of Jaen, Campus Cientifico-Tecnologico de Linares, Avda. de la Universidad, s/n, Linares, 23700 Jaen, Spain
- Nicolás Ruiz Reyes
- Department of Telecommunication Engineering, University of Jaen, Campus Cientifico-Tecnologico de Linares, Avda. de la Universidad, s/n, Linares, 23700 Jaen, Spain
- Sebastián García Galán
- Department of Telecommunication Engineering, University of Jaen, Campus Cientifico-Tecnologico de Linares, Avda. de la Universidad, s/n, Linares, 23700 Jaen, Spain
- Julio José Carabias Orti
- Department of Telecommunication Engineering, University of Jaen, Campus Cientifico-Tecnologico de Linares, Avda. de la Universidad, s/n, Linares, 23700 Jaen, Spain
- Gerardo Peréz Chica
- Pneumology Clinical Management Unit of the University Hospital of Jaen, Av. del Ejercito Espanol, 10, 23007 Jaen, Spain
47
Horimasu Y, Ohshimo S, Yamaguchi K, Sakamoto S, Masuda T, Nakashima T, Miyamoto S, Iwamoto H, Fujitaka K, Hamada H, Sadamori T, Shime N, Hattori N. A machine-learning based approach to quantify fine crackles in the diagnosis of interstitial pneumonia: A proof-of-concept study. Medicine (Baltimore) 2021; 100:e24738. [PMID: 33607819 PMCID: PMC7899847 DOI: 10.1097/md.0000000000024738] [Citation(s) in RCA: 14] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 10/06/2020] [Accepted: 01/17/2021] [Indexed: 01/05/2023] Open
Abstract
Fine crackles are frequently heard in patients with interstitial lung diseases (ILDs) and are known as a sensitive indicator for ILDs, although an objective method for analyzing respiratory sounds, including fine crackles, is not clinically available. We have previously developed a machine-learning-based algorithm that can promptly analyze and quantify respiratory sounds, including fine crackles. In the present proof-of-concept study, we assessed the usefulness of fine crackles quantified by this algorithm in the diagnosis of ILDs. We evaluated the fine crackles quantitative values (FCQVs) in 60 participants who underwent high-resolution computed tomography (HRCT) and chest X-ray in our hospital. Right and left lung fields were evaluated separately. In sixty-seven lung fields with ILDs in HRCT, the mean FCQVs (0.121 ± 0.090) were significantly higher than those in the lung fields without ILDs (0.032 ± 0.023, P < .001). Among those with ILDs in HRCT, the mean FCQVs were significantly higher in those with idiopathic pulmonary fibrosis than in those with other types of ILDs (P = .002). In addition, an increased mean FCQV was associated with the presence of traction bronchiectasis (P = .003) and honeycombing (P = .004) in HRCT. Furthermore, in discriminating ILDs in HRCT, an FCQV-based determination of the presence or absence of fine crackles showed higher sensitivity than a chest X-ray-based determination of the presence or absence of ILDs. We herein report that machine-learning-based quantification of fine crackles can predict HRCT findings of lung fibrosis and can support the prompt and sensitive diagnosis of ILDs.
Affiliation(s)
- Hironobu Hamada
- Physical Analysis and Therapeutic Sciences, Graduate School of Biomedical and Health Sciences, Hiroshima University 1-2-3 Kasumi, Minami-ku, Hiroshima City, Hiroshima, Japan
48
Bandyopadhyaya I, Islam MA, Bhattacharyya P, Saha G. Automatic lung sound cycle extraction from single and multichannel acoustic recordings. Biomed Signal Process Control 2021. [DOI: 10.1016/j.bspc.2020.102332] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/28/2022]
49
Tabatabaei SAH, Fischer P, Schneider H, Koehler U, Gross V, Sohrabi K. Methods for Adventitious Respiratory Sound Analyzing Applications Based on Smartphones: A Survey. IEEE Rev Biomed Eng 2021; 14:98-115. [PMID: 32746364 DOI: 10.1109/rbme.2020.3002970] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
Abstract
Detection and classification of adventitious lung sounds plays an important role in diagnosing, monitoring, and caring for patients with lung diseases. Such systems can be delivered on different platforms, such as medical devices, standalone software, or smartphone applications. The ubiquity of smartphones and the widespread use of their applications make them an attractive platform for hosting detection and classification systems for adventitious lung sounds. In this paper, smartphone-based systems for automatic detection and classification of adventitious lung sounds are surveyed. Such adventitious sounds include cough, wheeze, crackle, and snore; relevant sounds related to abnormal respiratory activities are considered as well. The methods are briefly described and the analysis algorithms explained. The analysis includes detection and/or classification of sound events. For comparison, a summary of the main surveyed methods is given, together with the classification parameters and the features used. Existing challenges, open issues, and future trends are discussed as well.
50
Fraiwan L, Hassanin O, Fraiwan M, Khassawneh B, Ibnian AM, Alkhodari M. Automatic identification of respiratory diseases from stethoscopic lung sound signals using ensemble classifiers. Biocybern Biomed Eng 2021. [DOI: 10.1016/j.bbe.2020.11.003] [Citation(s) in RCA: 20] [Impact Index Per Article: 6.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/14/2022]